Smartphones But in Thin Air | Future of Interface Evolution

Spoke about the future of our interface evolution @ WEEXPO India (India’s largest education conference) — if you want to really grasp the vision, I’d recommend watching this first, then reading the article👇🏾

We Are Lucky

What has pandemic life been like for you? I’m curious how the experience varies for different people & began researching that this past weekend.

And somewhere down that internet rabbit hole I began looking into past pandemics and what pandemic life was like for humans back then.

Back in 1918 during the Spanish Flu, not only were there no effective vaccines or antivirals — drugs that can treat the flu today — but the ability to converse with friends and family all over the world was nowhere near possible.


In fact, with the lack of communication tools, people relied on local community updates via printed notices to stay in the loop about the pandemic’s progress. And if you caught the virus there was no way to let anyone know, let alone ask for help.

If you were smart at the time, you might wrap something like a white scarf around your door handle to let people know you weren’t feeling well & that they shouldn’t enter your room.

With no way to connect & communicate, those 24 months were made all the more dreadful by isolation and fear of the unknown.

Equivalent to the smartphone we use to call an ambulance now

Whereas today, during the coronavirus pandemic, the advancement of technology has made the transition & change more comfortable for many people.

With an estimated 5+ billion people having access to a cell phone or computer, we have the ability

  • to call & communicate with anyone across the world
  • to hold video conferences for educational sessions at schools
  • to push the adoption of remote work

We’re far better off in terms of connection than those during the Spanish Flu.

Computer Interfaces

Our devices have become an external limb — we carry them everywhere, hold them throughout the day, and even if we leave the house with nothing else, our phone will at least be in our pocket.

If we think about this progression — it started with computer interfaces. A huge innovation for computers was the addition of the mouse and keyboard, which made using them more intuitive.

Okay well Homer Simpson may be one of the few exceptions to that

Hand x Computer Interfaces — Touch Screen

We then got rid of the keyboards & began controlling these devices like we do the rest of the objects around us — with our hands.

Simple touches replaced the need for keyboards and mice, leading us to the touchscreen phase. It’s so easy and natural that a child can control these devices with no instructions, using just their fingers.

Tomorrow (?)

But what now? …..

It seems like the obvious next progression is for these hardware devices to disappear and for the information we access through them to be available to us in thin air.

Smartphones have walked, now let’s try to run. You see what’s in front of Iron Man — that could be how you access everything on your phone without the phone😯

Currently, virtual, mixed, and augmented reality (XR) are being leveraged to try to make this a reality, with initiatives like Project Aria by Facebook. However, using XR in isolation doesn’t seem like a promising bet for making this ubiquitous.

Let’s compare this — accessing everything on a device in thin air instead — to how smartphones became ubiquitous.

According to the founder of Neurable, there were three main stages to the iPhone becoming ubiquitous.

Category 1: Niche/Enterprise specialized category

The Palm Pilot falls under this category — a device built specifically for business people to organize their data.

Category 2: Consumer Specialized Category

The Palm LifeDrive falls under this — a device with WiFi, a touchscreen, & all these novel features the industry hadn’t seen yet.

Category 3: Ubiquitous

The iPhone falls under the ubiquitous category. The interesting thing is that the Palm LifeDrive, which came out years earlier than the iPhone, had more features.

🤫Secret: The difference between a consumer specialized product and a ubiquitous product is that the interaction is undeniably natural — aka the iPhone.

Brain Computer Interfaces aka… mind control?!

Current AR/VR methods are an unnatural alternative to the iPhone because they lack

  • depth
  • density
  • determination of user intent

*Enter: Brain computer interface magic🔮*

Brain-computer interfaces (BCIs) are systems that allow communication between the brain and various machines — and seemingly the next stage in this interface evolution.

An event-related potential (ERP) is the measured brain response that is the direct result of a specific sensory, cognitive, or motor event. It has been used to allow differently-abled individuals to type using their thoughts, hands-free.

The way this works (sketched in code after the steps below):

  1. The patient sits in front of a screen.
  2. The screen displays a 5-by-12 grid; inside each square there’s a single letter, number, or the word ‘space’, ‘enter’, or ‘delete’ — similar to the keys on a regular keyboard.
  3. An EEG headset records the patient’s brain data live and sends it to the computer.
  4. The computer then looks for a specific marker in that data to verify which key the patient intends to click, and controls the keyboard accordingly.
  5. The specific marker the code looks for is a blink. With the non-invasive headset approach, a blink is considered an artifact, meaning it’s so significant that it overtakes any other data recorded by the headset. And because a blink causes such a huge spike on the brainwave graph, it’s easy to identify.

  6. If the patient blinks, the selection moves one key to the right.
  7. If the patient doesn’t blink for three seconds, the current key is clicked and typed.
  8. There are also shortcut keys that let you jump back to a specific row, or shuffle between rows, without having to click through each box individually.
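
Here’s a minimal Python sketch of that selection loop, just to make the mechanics concrete. It assumes a `stream_eeg()` generator yielding short windows of EEG samples and a `press_key()` function that types the chosen key — those names, the key list, and the blink threshold are illustrative assumptions, not the researchers’ actual code.

```python
import time

# Illustrative key list (letters, digits, and control words);
# the grid in the study was 5-by-12 on screen.
KEYS = [chr(c) for c in range(ord("A"), ord("Z") + 1)] + \
       [str(d) for d in range(10)] + ["SPACE", "ENTER", "DELETE"]

BLINK_THRESHOLD = 100e-6  # assumed amplitude (volts): a blink artifact dwarfs normal EEG
DWELL_SECONDS = 3         # no blink for this long -> the highlighted key is "clicked"

def blink_detected(eeg_window):
    """Because a blink is such a huge spike, a simple amplitude check suffices here."""
    return max(abs(sample) for sample in eeg_window) > BLINK_THRESHOLD

def run_speller(stream_eeg, press_key):
    cursor = 0
    last_blink = time.time()
    for window in stream_eeg():
        if blink_detected(window):
            cursor = (cursor + 1) % len(KEYS)  # blink -> move selection one key right
            last_blink = time.time()
        elif time.time() - last_blink >= DWELL_SECONDS:
            press_key(KEYS[cursor])            # dwell without blinking -> type the key
            last_blink = time.time()
```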


As this progresses & is made faster and more intuitive, it could allow all of us to type using just our thoughts.

Systems have already been created that leverage such ERPs to detect a variety of conscious ‘wants’ & then use advancing ML tech to produce real-time, responsive actions in XR environments — like thinking of wanting an orange on a table across the room and then having the orange travel to you.

This intersection of XR and BCIs could allow us to create an alternative to smartphones which becomes ubiquitous💡

Okay but can you explain how the magic trick works?

Three main steps:

  1. collecting brain signals
  2. interpreting them
  3. outputting commands to a connected machine according to the brain signal received.
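
To make that loop concrete, here’s a bare-bones Python sketch of those three steps. The `acquire_signals`, `interpret`, and `send_command` callables are placeholders standing in for real hardware and models, not an actual BCI API.

```python
def bci_pipeline(acquire_signals, interpret, send_command):
    """Run the collect -> interpret -> output loop described above."""
    for raw_window in acquire_signals():   # 1. collect brain signals (e.g., a window of EEG samples)
        intent = interpret(raw_window)     # 2. interpret them (signal processing / machine learning)
        if intent is not None:
            send_command(intent)           # 3. output a command to the connected machine
```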

The first step is measuring brain signals, which can be done with three different approaches.

  1. Invasive method: micro-electrodes are placed directly into the cortex, measuring the activity of a single neuron.

💭So, imagine you are living in a different city and want to join your family for their dinner table conversation.

This invasive approach would be like asking all your family members to wear lavalier microphones on their collars while on a call with you, while you listen to the conversation through AirPods — so you get clear, crisp audio of what they are saying.

2. Semi-invasive method: electrodes are placed on the exposed surface of the brain and electrocorticography (or ECoG) data is collected by recording electrical activity from the cerebral cortex.

💭This would be like having a smartphone on the table listening to your family’s conversation via a WhatsApp audio call — you can hear what they are saying, but it could be crisper.

3. Non-invasive method: sensors are placed on the scalp to measure the electrical potentials produced by the brain also known as electroencephalogram (or EEG) data.

💭This is like having your phone in the kitchen and listening to the call while you clean up the living room — it’s harder to make sense of what they are saying, but you could listen carefully to understand it better.

⭐Drawing the parallel between the types of brain signal collection methods & the ‘listening to a call’ methods shows the accuracy & detail of data that can be collected by each.

The measured brain signals are then run through software that identifies the different brain signals based on the activity being performed.

For example, if a theta wave is detected — a brainwave with a frequency between 4 and 7 hertz — it indicates the individual is sleeping.

Then machine learning is used to activate an output, where a machine takes a certain action: the external device is controlled or responds according to how it was programmed, based on the brain signal detected.
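
As a rough illustration of what that interpretation step can look like, here’s a short NumPy sketch that checks whether the theta band (4–7 Hz) dominates a window of EEG and, if so, triggers an action. The 250 Hz sampling rate, the band boundaries, and the `dim_lights` callback are illustrative assumptions.

```python
import numpy as np

SAMPLE_RATE = 250  # Hz; assumed EEG sampling rate

def band_power(signal, low, high):
    """Total power of the signal within a frequency band, computed via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / SAMPLE_RATE)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return power[(freqs >= low) & (freqs < high)].sum()

def theta_dominant(eeg_window):
    """True if the 4-7 Hz theta band carries more power than the other classic bands."""
    bands = {"delta": (0.5, 4), "theta": (4, 7), "alpha": (8, 12), "beta": (13, 30)}
    powers = {name: band_power(eeg_window, lo, hi) for name, (lo, hi) in bands.items()}
    return max(powers, key=powers.get) == "theta"

def respond(eeg_window, dim_lights):
    # If theta dominates (the "sleeping" signal above), the connected device reacts
    # however it was programmed to -- here, by dimming the lights.
    if theta_dominant(eeg_window):
        dim_lights()
```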

But The Magic Trick Hasn’t Been Shared Widely…Yet

Currently the most practical applications of brain computer interfaces have been in the medical field🧬

According to the World Bank, 1 billion people, or 15% of the world’s population, are differently-abled, and many must rely on others to help them perform basic tasks like eating, walking, drinking water & bathing.

They lack the privilege of controlling their day to day actions and interacting with other people & technology the way fully-abled individuals can.

The previous example of typing using your thoughts is currently being used by LIS, or locked-in syndrome, patients who cannot move any muscles in their body except for blinking their eyes.

Using BCIs, researchers from Case Western Reserve University & Harvard Medical School have also been able to restore functional reach-to-grasp ability for a patient who had a severe spinal cord injury and was paralyzed from the shoulders down.

Restoring Functional Reach-to-Grasp using FES and ECoG
  1. They implanted brain recording electrodes & a functional electrical stimulation system in the patient.
  2. Took the electrical signals which represented the patient’s thoughts and used them to control stimulation of his arm & his hand.
  3. Bridged his spinal cord injury so the patient could then think about moving his arm and his arm would move.

There was another study that allowed paralyzed monkeys to walk.

  1. The team first mapped how electric signals are sent from the brain to leg muscles in healthy monkeys, walking on a treadmill.
  2. They also examined the lower spine, where electric signals from the brain arrive before being transmitted to muscles in the legs.
  3. Then they recreated those signals in monkeys with severed spinal cords, focusing on particular key points in the lower part of the spine.
  4. Microelectrode arrays implanted in the brain of the paralyzed monkeys picked up and decoded the signals that had earlier been associated with leg movement.
  5. Those signals were sent wirelessly to devices that generate electric pulses in the lower spine, which triggered muscles in the monkeys’ legs into motion.
  6. The problem when you’re paralyzed is that usually there’s a fissure in your spinal cord, or some part of it that doesn’t work — so by transmitting the information around the injury, they bypassed the issue in the spine.
  7. The treatment has great potential for being applied to immobile human patients — and trials for the same have already begun.

Does anyone care about our magic trick?

A company that’s currently working on creating a world without limitations — where anyone, differently-abled or fully-abled, can interact with or control anything using just their mind — is Neurable.

Their algorithm’s goal is to understand user intent. So far they’ve created a virtual reality device and cap that records electrical signals from brain activity and interprets from them what you actually want to do.

They’ve created software that allows you to control devices using just your mind — in both the real world and the digital world — essentially telekinesis.

The way it works is that when new information is presented to you, your frontal lobe, which is in charge of executive function, communicates with your parietal lobe, which helps with visual-spatial processing.

They leverage those two areas of the brain to understand user intent and move external objects accordingly — which takes us to the world of brain-machine interfaces where interfaces meet robotics and smart objects🧠

Brain Chips by 2030s — Brains = Hacked?

As BCIs progress exponentially & are used for purposes other than helping the differently-abled, neurological data from more and more people will become available, & we will be confronted with critical ethical questions.

This future isn’t as far away as it seems: Ray Kurzweil, who used to be the Head of Engineering at Google, thinks that by the 2030s we’re all likely to have brain chips.


For example, the US Military is in clinical trials for a mood-altering brain implant which would allow them to control how you feel. The intent is to help soldiers with depression or PTSD feel better. But if you think about it: that’s still a third party controlling how you feel🤔

Researchers have also been able to detect if you are laughing, smiling, running or jumping in a dream. And so if we could program these dreams — which feel like real experiences — to create virtual realities of one’s mind that you or other people could step into, then one’s desires, secrets, and thoughts could be exposed and taken advantage of.

Today itself we are worried about the data that’s being collected on us based on external actions, skin outward — which pictures we like, who we meet, what we eat, etc.

But with brain chips predicted to become the norm, third-party organizations could have access to what’s going on inside us — technically knowing us better than we know ourselves.

That relationship is far more dangerous, & if manipulated by economically or politically incentivized organizations with malicious intent 👉🏾 dangerous outcome.

But There’s A Solution

A solution proposed by Bryan Johnson, the founder of Kernel, is that if we say that human data privacy is a right and

  1. we drop the data collected by these interfaces (streaming our thoughts, dreams, secrets, and fantasies in real time) into the blockchain,
  2. then we can leverage these tools to recreate societal structures built on trust and security,
  3. and rebuild the idea that a centralized government can keep society safe
  4. by having people build security algorithms that hit your blockchain and determine whether you are safe to, for example, enter a conference with hundreds of thousands of people.

🌍Future of Interface Evolution

As BCIs continue to rapidly develop, future realities like brain-to-brain interfaces could transform our day-to-day interactions.

There was a study conducted in which:

  1. One mouse was at Duke University in the United States & another mouse was in Brazil.
  2. The mouse in the United States had two buttons in front of it and could choose to press a series of them.
  3. That mouse had a brain chip, and the information collected from it was sent over the internet directly into the brain chip of the mouse in Brazil, i.e. there were two mice with brain chips connected over the internet.
  4. The information about which series of buttons the first mouse pressed was sent over the internet, and the other mouse then pressed the same series of buttons.

This is known as brain-to-brain communication & would allow us humans to communicate with each other — not by speaking, not by texting, but instead by simply thinking💭

This could allow us to share our knowledge, experiences, and opinions with each other non-verbally leading us to explore what it means to download knowledge and skill sets.

We spend over 20 years — about a quarter — of the human lifetime in educational institutions acquiring knowledge. We learn what already exists, is already available, what others already know.

There was a Harvard study that showed students are more likely to know where on the internet to find information about something than to know the information itself.

What if we could save years by downloading the knowledge and skills we need in the moment & spend more of our time questioning and applying it, as opposed to just acquiring it?

There are researchers looking into how our consciousnesses are all somehow connected — shared consciousness — where we could potentially experience someone else’s life by stepping into other people’s experiences in a dreamlike state.

This could potentially allow us to eliminate isolation and allow for true empathy — at the same time redefining what it means to be human.

Before we make telepathy, telekinesis, and getting rid of smartphones a reality, I have a quick question for you:

What does it actually mean to be human❓

🔗Connect

investing in moments