AGI, or artificial general intelligence, seems tantalisingly close.
As large language models keep scaling, the sheer quality of their output astounds everyone who uses them. Now, more than ever before, we seem close to creating an intelligence that rivals our own.
However, it wasn't so long ago that scientists said much the same about nuclear fusion. Weren't we supposed to be on the cusp of limitless energy, or flying cars? Decades later, both seem just as far away.
Could the same be true of AGI? Will it prove just as elusive as fusion? It might.
Perhaps it is hard to imagine that the very thing that makes us special, our own consciousness, could simply be simulated by a series of enormous matrices. Yet leading AI experts seem to suggest we'll eventually get there as we scale up.
However, this may not be the case. It may take a genuine paradigm shift, perhaps the invention of an entirely new algorithm, to make that breakthrough.
On the other hand, there is an equally obvious question: what if true AGI is NEVER possible?
If that's the case, we may end up saving our species from a superior intelligence, and from the mass extinction it could bring. Perhaps this would be our saving grace. I don't know, but I'm hoping for a serious setback. I don't think we're ready for AGI, or its consequences. Hopefully, we never get there.