We’ve been following Future Crunch for a while, and we always enjoy what they have to say and share, especially when it comes to artificial intelligence (AI).
In Part Two of this series, we talk to Gus and Tanushree from Future Crunch about AI, and debunk some of the myths surrounding what AI is going to look like in the future.
To give a bit of context: as a company focused on helping teams use collaborative and communication software more effectively, we think a lot about the brain and how it works.
One of the topics that neuroscientists are currently investigating is cognitive load management, which is concerned with how our brains process information amid all of the screens, devices and content that we interact with on a daily basis.
This is something that AI can help with: processing gargantuan amounts of data and summarising it is something AI can be trained to do well. But what does this mean for people?
Question Two: What are your thoughts around AI and the future?
Gus’ perspective is that although we’ve learnt a lot about the brain in the last 10-20 years, there’s still a lot we don’t know about how it works. This includes cognitive load management, which is part of a much bigger picture of figuring out the human brain’s complexities:
As we start to understand our brains, we can start to interact with them differently, which includes brain machine interfaces. This allows us to think of the development of AI as augmentation. It becomes natural, we adapt and so does our capacity to handle information. It’s both terrifying and exciting – Gus
But what about the widespread fear of AI taking over? Many people, including many WNDYR team members, are concerned about the potential of AI to become self-aware, gain consciousness and take control over the human race. Many people have argued for this possibility, and it’s all we read about in the news, but there’s a different angle to this story.
What makes a human a human? – Tanushree
Philosophers have debated this for centuries, but we still don’t have a definitive answer, basically because the human experience is so complex, despite our attempts to make sense of it for thousands of years.
For AI to “become human”, we first need to know exactly what defines a human. From the perspective of the Future Crunch team, this is where things start to get a bit convoluted, mostly because the sensational accounts of a dystopian future dominated by AI aren’t written by the scientists or developers working on AI.
Does AI get its own consciousness? No. – Gus
And here are the reasons, in Gus’ words:
- “The human brain is not a computer. It’s a quantum-level biological organism, the most complicated thing in the universe. We understand little about the brain and consciousness, and we’re not close to figuring out what it is. The idea that we can replicate that is silly.”
- “Machine learning and neural networks are based on the brain and pattern recognition, but they’re not a recreation of the brain. AI can only perform tasks that are designed for it.”
- “The computer processing power required to run a hypothetical AI recreation of a human is also miles away. And that’s not taking into account what’s going on at the quantum level. Fear about AI is promoted by people who aren’t working in the field.”
So there you have it: the idea of general-purpose AI, a machine that does things you don’t have to train it to do, is not a concept based on actual evidence, but rather on a fear of losing control over the technology we create.
The reality is that AI is good at many things, but the parameters always need to be defined. Take traditional AI, for example, which uses maths to solve problems through pattern recognition, weighing every possible calculation. Deep Blue, the famous machine that was trained to beat a human at chess, and did, is a case in point.
Another consideration is that when we think about intelligence, we think about it from an outside perspective. In other words, we define intelligence according to observation, but we have no way of defining what intelligence really looks like from the inside out, so how could we possibly recreate it?
Intelligence is something that we observe, but we have no idea about the inner life of intelligence – Gus
So when it comes to how the brain works, and being able to recreate human consciousness, there’s no way to make sense of how this might happen when we can’t even make sense of it ourselves. In reality, we’re all just trying to figure out the intricacies of our own thoughts and where they come from.
Basically: when it comes to really understanding the brain and being able to recreate it, we have no idea.
What are your thoughts about AI and the future? Are you worried about AI taking over and displacing humans, or are you excited about the possibilities of AI to augment human thinking and development? Share your thoughts and ideas in the comments below.