This was a really interesting read! I noticed that your article focused primarily on ANI (Artificial Narrow Intelligence). I was wondering if you have any thoughts on AGI (Artificial General Intelligence) or ASI (Artificial Superintelligence), and how they might compare or relate to the topics you discussed.
Thank you for reading. That's a good question. Short answer - TBH, I don't know for sure.
Long answer - I think if AGI/ASI can do "everything" across several industries at once, that productivity line might go vertical, so automation/productivity would win the demand-vs-productivity race in many industries at the same time. The framework still works, I think, but the results won't be pretty in those industries. We will need to find new things to do.
I think we will figure out a way. I genuinely believe in human creativity and potential. Even if AGI matches us on current tasks, we will discover new things and invent new things to do with such a technology.
What happens after death? How do we make space travel possible? How do we live 200+ years? There's so much left to discover and build. We've never run out of questions before, and I doubt we will run out now. We will have new questions when we answer the old ones.
In a recursive version of the bitter lesson, tech futurists like Negroponte flog their bespoke futures (remember OLPC?) when you would much rather let the general purpose discovery engine (aka the market) figure out what the future will bring. With AI, the human wannabe prophet's existential dilemmas are even more poignant, and the predictions will likely be even more wrong. I would just build massive data/energy (over)capacity and let the world figure out the rest.
Great points. Agreed on both: (1) let the general purpose discovery engine (aka the market) figure out what the future will bring, and (2) build massive data/energy (over)capacity and let the world figure out the rest.
Fascinating read - there are so many great parallels to how tech has evolved in the past. I think we are still in the era where companies are trying to figure out how to use AI. Once we move beyond this phase where boards mandate AI features just to have AI features (without necessarily the most thoughtful approach to strategy/implementation), I think the naysayers will come around to understanding how powerful AI can be.
Thank you! Agree - adding AI for the sake of adding AI doesn't make the product better. Hopefully we will see more thoughtful approaches rather than this box-checking kind of approach.
Thanks for writing this, it clarifies a lot. I particularly liked the Negroponte quote from 1993; it's fascinating how some predictions were so close while others were far off. Great parallel to AI today.
Great read, Nowfal. Shared with a few people I am having this debate with as we speak :)
Thank you! It's an interesting debate worth having.