Is the Apple Vision Pro a Social and Behavioral Game Changer?
My first look at Apple's new "computer for your face" and how it might (or might not) change our lives
11 AM Sunday morning and I’m queuing outside the Scottsdale Fashion Mall Apple Store. It may be the nerdiest thing I’ve done in a while!
Apple’s long-awaited Vision Pro — an advanced VR/AR/XR headset, or a category-changing “Computer for your Face,” depending on your point of view — hit the shelves last Friday. And I was queuing up to pick up one of the first devices to ship, courtesy of a couple of projects I’m working on around tech and the future.
I’ve been deeply curious about how Apple’s new device might change how users interact with virtual and real environments — and how this in turn might change them — ever since the Vision Pro was announced last June. And so I was itching to get my hands on a headset to begin exploring what it can do.
Since picking up the headset I’ve only had a few hours to play with it — but I did want to share some first impressions and early thinking on how this technology might impact our lives.
If you want the tl;dr: I was blown away — unexpectedly so. I also combined wearing the headset with a ride in a driverless Waymo, which was pretty awesome — more on that below.
But that doesn’t mean that we don’t need to be thinking hard and deep about how this tech might impact our lives, and how to successfully navigate the technology transition Apple is ushering in.
Before I get to this though, it’s worth taking a quick diversion into what makes this headset interesting.
What makes the Apple Vision Pro different?
At first glance, Apple’s Vision Pro looks like a run of the mill VR headset that immerses you in a 3D computer-generated virtual world. But the Vision Pro sets out to make old-style virtual reality as we know it obsolete.
To understand something of the vision behind the Vision Pro (pun intended), it’s worth taking a second to think about the myriad science fiction visions of the future that seem to influence so much thinking around both virtual and augmented reality.
I was tempted to call out specific movies and programs, but gave up — simply because there are so many of them that assume a future where people interact with holographic-like projections into reality (OK, I couldn’t resist peppering a few in — Ready Player One of course, Star Trek, Blade Runner, various Black Mirror episodes, Minority Report, insert your own favorite here …). 3D adverts, screens that appear in mid-air, virtual pop-up phones, interactive virtual displays, holograms of friends and colleagues — we’ve been conditioned to take it for granted that one day we’ll live in a future where the real and the virtual seamlessly integrate together.
The trouble is, to achieve what sci-fi leads us to believe is possible, we’d have to be able to either project a virtual overlay directly into our eyes, or have a virtual feed beamed directly into our brains — something that, despite the dreams of companies like Elon Musk’s Neuralink, we’re not going to see in my lifetime.
Conventional VR was never going to achieve the seamless integration that TV and movies promise. Rather than blending the virtual with the real, conventional VR headsets create an artificial environment that, if anything, disconnects users from reality.
Google introduced consumers to an alternative approach to blending physical reality with a digital overlay in 2013 with Google Glass — the company’s then-groundbreaking smart glasses.
Google Glass only scratched the surface of digital overlays though, and didn’t even qualify as what most people would consider to be augmented reality. It also hit an impasse as people decided they really didn’t like the technology, the privacy issues it raised, or the people who were using it!
Microsoft’s HoloLens (introduced in 2016 and now in its second iteration) provided a substantial step forward in augmented reality or “mixed reality”. By projecting virtual images onto a semi-transparent screen in front of the user’s eyes, the HoloLens allows optical “passthrough”, where the user gets to see their surrounding environment naturally, but with a digital overlay.
It was a step toward sci-fi like blended reality. But the field of view was (and is) relatively small, and the blending is far from seamless.
The problem is that, for seamless integration between real and virtual environments using optical passthrough, we’d either need immersive headsets that are as light and unobtrusive as a pair of glasses — which would be wonderful, but is still technologically out of reach — or contact lenses that indistinguishably fuse virtual images with real ones: another pipe dream that seems to be slipping further from our grasp.
The long and short of this is that if Apple was going to move the needle on seamlessly and immersively blending real and virtual worlds, they would need to get creative.
And they did.
Rather than overlaying a virtual world over real life like the HoloLens, the Vision Pro overlays real life over a virtual world using a technique called “video passthrough”. The headset optically isolates the wearer from the environment around them. Then, using an impressive array of sensors and cameras, it provides each eye with a digital reconstruction of the surrounding environment.
Because the view of the real world a user sees using video passthrough is, in fact, a digital reconstruction, it can be augmented in some rather clever and compelling ways.
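To make the mechanics a little more concrete, here’s a minimal sketch in Swift of the per-frame loop a video-passthrough headset conceptually runs. Every type and function below is a hypothetical placeholder for illustration only — Apple’s actual pipeline is proprietary and vastly more sophisticated.

```swift
import Foundation

// Hypothetical placeholder types — not Apple's actual APIs.
struct CameraFrame { let pixels: [UInt8] }   // raw image from one outward-facing camera
struct EyeImage { let pixels: [UInt8] }      // what gets drawn on one of the two internal displays
struct VirtualObject { let name: String }    // e.g. a floating app window or movie screen
enum Eye { case left, right }

// Conceptual per-frame loop: capture the real world, rebuild it for each eye,
// then composite virtual objects into the reconstruction before display.
func renderPassthroughFrame(cameras: [CameraFrame],
                            scene: [VirtualObject]) -> (left: EyeImage, right: EyeImage) {
    let leftView = reconstructView(from: cameras, for: .left)
    let rightView = reconstructView(from: cameras, for: .right)
    return (composite(scene, onto: leftView), composite(scene, onto: rightView))
}

// Stand-ins for the heavy lifting (depth estimation, re-projection, low-latency rendering).
func reconstructView(from cameras: [CameraFrame], for eye: Eye) -> EyeImage {
    EyeImage(pixels: cameras.first?.pixels ?? [])
}

func composite(_ scene: [VirtualObject], onto view: EyeImage) -> EyeImage {
    // In a real headset, virtual objects would be rasterized into the eye image here.
    // Because the world you see is a rendered image rather than light through a lens,
    // anything can be drawn into it.
    return view
}
```

The key point sits in that final compositing step: once reality is just another rendered image, it can be edited at will.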
Apple aren’t the first company to use video passthrough in their headsets — Meta’s recently released Quest 3 headset for instance also includes it. But Apple have taken the technique to a whole new level of fidelity.
Using video passthrough, the Apple Vision Pro creates a user experience quite unlike anything else available at the moment. And it opens up opportunities that, I would argue, bring us closer to the sci-fi dream of living in a fully augmented reality than we’ve ever been.
High fidelity video passthrough allows headsets to replace reality with a version of the non-digital world that, if done well, is indistinguishable from the real thing. But there’s one important difference — this reconstructed reality can now be manipulated in ways that physical reality simply cannot.
With ultra high fidelity video passthrough, the possibilities are only limited by the creativity of developers and the compute power they have at their disposal.
Imagine, for instance, workspaces — or coffee shops — that are located on opposite sides of the world, yet have identical physical layouts. Now imagine the people in them wearing headsets that allow them to co-occupy both locations as if they were physically there. Then imagine that the melding of real and digital — including capturing facial expressions and body language — was so seamless that it viscerally felt as if you were interacting in person with the people around you, irrespective of whether they were physically present or not.
This is just one example of what a blended reality future could look like. But even this small glimpse into the possible gives a taste of how high fidelity video passthrough could shake up notions of travel, physical engagement, and even geographical boundaries.
This is a concept that’s explored far more in written sci-fi than on the screen (Toby Weston’s Singularity’s Children series of novels is a great place to start here). And it is, if anything, potentially more transformative than many of the blended reality examples shown in movies and TV series.
Not surprisingly the technology isn’t there yet, and won’t be for some time. Impressive as it is (and I am impressed), Apple’s Vision Pro still barely scratches the surface of what could be possible in the future.
But we’re getting closer to scenarios where users could experience having coffee with someone half way around the world while feeling that they were in the same cafe, or remotely collaborating on projects as if they were in the same room.
How such capabilities might change how we live, how we interact with each other, and even our sense of who we are, remains to be seen. But the possibilities are profound enough to make the question of whether the Vision Pro is a social and behavioral game changer one that’s worth asking.
Which is why I found myself outside my local Apple store queuing up to enter what Apple are optimistically calling the “era of spatial computing”.
First impressions
It is, of course, impossible to write in detail about the Vision Pro after using it for just a few hours — that’ll come later. But given the context above, I was very curious to see what my first impressions would be.
To make matters more interesting, I only had a couple of hours between picking the headset up and boarding a flight to San Francisco (and yes, I did bring the headset with me!)
Thankfully, when I picked the headset up in the Apple store I took the option to take the guided tour — which turned out to be a great way to get used to what the headset can do and how to use it.
I was not expecting to be as impressed as I was when I put the headset on. The video passthrough is quite incredible. It’s not perfect — there’s some pixellation with distant objects, and there’s a bit of distortion. But I could talk with people, check my phone, and even read with the headset on.
The interface is also super-intuitive. Once the headset was calibrated it took just a minute or so to start pinching and swiping like a pro.
And the experience is quite amazingly immersive. I’ve already put in a couple of hours using the headset, and it’s surprisingly easy to forget you’re wearing it. Even the weight didn’t bother me.
One of the first things I tried was reading a real printed book through the headset while sitting outside — not something you’d usually do, but a good test of the resolution of the system and how it operates in different environments.
You can get a sense of what it was like — albeit a very limited one compared to the full experience — in the video below (this is video captured from the Vision Pro). And yes, for readers who know something of the history of VR in science fiction, I’m both reading Ready Player One and watching the movie (which is sadly blacked out in the video capture).
Until the virtual movie screen appears, it’s hard to realize that what you are seeing is video passthrough — a completely digital reconstruction of a real environment.
To emphasize that, I took the video below as I was moving back inside — it’s not a masterpiece of moviemaking, but it does show how objects placed in the blended environment the Vision Pro creates stay where you put them!
At this point I needed to get ready to leave for the airport, and so I thought: why not combine two cutting-edge technologies and take a driverless Waymo while experiencing the ride in Apple’s blended reality?
With apologies for the scrappiness of the video below (it was edited in a hurry) and the rather dramatic soundtrack, here’s a taste of what the 30 minute ride was like:
I was wearing the headset for the full Waymo ride, and found the experience quite unlike any other I’ve had. There were occasional issues with the headset — as you’ll see at the end of the video, it had problems tracking my movements at times. But apart from that, it quickly felt natural to blend digital and physical realities on the ride. And there’s something quite deliciously uncanny about placing a large movie screen right in front of your self-driving car.
It’s also worth noting that, despite being susceptible to motion sickness, I had no problems wearing the headset for the duration of the drive.
Is the Vision Pro a social and behavioral game changer?
As you might guess from my first impressions above, I was impressed with the Vision Pro. The fidelity of the video passthrough, the immersiveness of the experience, the near-seamless blending of physical and digital realities, and the ease with which the operating system is navigated, are all quite compelling. Plus, the device is gorgeously designed and made.
The Vision Pro is, by any measure, a masterful melding of powerful compute capabilities, a vast array of high end sensors, and leading edge screen technology. More than anything, it demonstrates what can be achieved by merging cutting edge compute and sensors in innovative ways.
This, of course, is critical to the effective use of video passthrough and creating immersive blended realities. But it also underpins Apple’s concept of “spatial computing” and the idea that they are moving the interface between us and digital stuff from our desktops, laptops, tablets, and phones, to this new and infinitely malleable environment.
After just a few hours immersed in Apple’s spatial computing environment, I’m not sure I’m convinced — yet — that this is the social and behavioral breakthrough the company’s hoping for. The price, for one thing, is crazily prohibitive. But I certainly think it’s a major step toward a new type of technology and technological experience that could well change how we live our lives and engage with each other.
Despite my experience so far I’m still an IRL kind of person — I’m really quite content typing on my laptop, watching movies on a real TV, reading real books, and feeling connected with my environment and the people in it. But I was shocked at how well the Vision Pro emulated these experiences. And I can certainly see how, after a generation or so, this technology could become addictively immersive.
This is when I suspect it’s likely to become a true game changer — both for what it allows people to do, and for how it potentially messes us up if we’re not prepared. But it’ll be a close call whether the spatial computing revolution is upon us now, or whether Apple have simply lit a long but hard-to-extinguish fuse.
Either way, it’s by no means clear what adjustments people will need to make before they live their lives in a future where blended realities are commonplace — or where the physiological and psychological challenges to achieving immersive nirvana lie.
Rather impressively, a group in Stanford University’s Virtual Human Interaction Lab managed to get a paper out on the “Psychological Implications of Passthrough Video Usage in Mixed Reality” just as the Vision Pro hit the streets.
In the paper (which is available now, but still in press with the journal Technology, Mind, and Behavior) the researchers wore Quest 3 headsets while doing a number of things — including eating, walking around in public, and even riding a bike! Even with video passthrough, they experienced issues that included motion sickness, distortion, and a lack of “social connection”.
The paper concludes that the “passthrough experience can inspire awe and lends itself to many applications, but will also likely cause visual aftereffects, lapses in judgments of distance, induce simulator sickness, and interfere with social connection.” I can certainly attest to that.
The authors also suggest — as any good researcher should — that more research is needed before people start to wear passthrough headsets for long periods of time.
Even though the paper’s authors claim to have included the Apple Vision Pro in their tests, it’s hard to tell whether any of their conclusions take account of the very substantial differences between the Vision Pro and the Quest 3 that most of their work is based on.
I suspect that the Quest 3 experience dominated here — meaning that research using the Vision Pro may throw up different results. Nevertheless, the challenges the authors highlight are likely to remain important if headsets like the Vision Pro are widely used.
Then there are the privacy issues. The Vision Pro sucks up everything around it into a digitized stream — including what’s there, who’s there, and what’s going on. As Geoffrey Fowler wrote a couple of days ago in the Washington Post, “Imagine you’re in a waiting room, and someone sits next to you with four iPhones strapped to their forehead. You might swiftly relocate. Yet that is exactly what’s happening when someone straps on Apple’s new Vision Pro headset.”
While I’m sure Apple are working hard to address concerns here, the possibility of wearable tech being used to monitor the people around the user was a thorn in the side of Google Glass. Then there’s the account of Steve Mann, who was allegedly assaulted back in 2012 for not removing a wearable computing system — in this case the attack was especially egregious as the hardware was physically attached to Mann’s face!
These are serious issues — my own take is that if wearable tech festooned with sensors is going to become socially acceptable, we’ll need some recalibration of how we collectively think about privacy in the future. And it may be that issues of privacy, usability, and how enthusiastic people are about spending their life strapped into a head-worn computer, will lead to the Vision Pro being Apple’s Google Glass.
But it’s also quite possible that the Vision Pro will mark a changing of the blended reality guard as the tech gets better and people become more accepting of it in their lives.
If this is the case, we’re facing interesting times ahead — possibly not so much with the Vision Pro (although I may well be wrong here), but with what comes after it — as we adjust as a society to living in a reality that is no longer constrained by … well … reality.
In the meantime, for all its flaws, I’m impressed with what Apple have achieved, and will be continuing to explore where this new “computer for our faces” takes us!