Depends what you mean by AI…
…but I guess currently the main difference is that the dot-com boom basically was the internet. It was huge but we thought it was small.
The AI boom is small, but we think it’s huge.
We think it’s going to replace, invade, and take over everything all at once. It’s not. The models take real work to constrain. The training data takes time to find and clean. The applications and use cases have to be married to good data sets and modeled toward functional outcomes.
We thought the internet was just people with journals, blogs, and GeoCities pages… It turned out to be eBay, YouTube, Reddit, Instagram, TikTok, Facebook, Amazon.
Right now we think AI will be in everything doing everyone’s jobs …but it will probably be a bunch of smaller tools. Translators, cancer finders, copy editors, face scanners, better security cameras, better search engines…
The applications are still uncertain, but seem kind of smaller than we first thought. Ubiquitous (probably) but not larger than life.
The dot-com boom was larger than life.
But I think the biggest difference is that AI will most likely need the internet. The dot-com thing WAS the internet.
“AI” is kind of subordinate, or seems smaller in some sense. It’s more like lots of small changes. They’ll each make life a little easier, but it will all feel separate rather than some giant face in the sky that speaks intelligence into society or controls the world.
This is spot on.
And on the apparently contradictory “AGI” development, I’ll add that the Big Tech kingpins like Altman/Zuck/Musk are hyping that while doing the exact opposite. They say that, turn around, cut fundamental research, and focus more on scaling plain transformers up or even making AI ‘products’ like Zuck is. And the models they deploy are extremely conservative, architecturally.
Going by public statements from all three, I think their technical knowledge is pretty poor and they may be drinking some of their own Kool-Aid. They might believe scaling up transformers infinitely will somehow birth a more generalist ‘in the sky’ intelligence, when in reality it’s fundamentally a tool like you describe, no matter how big it is.