Magic Beans

Is this what a budget Apple Vision Pro could look like?

Plus Meta's AR glasses, new gen AI tools, & more!


Apple Vision Pro is heavier than strapping a small child to your forehead and costs as much as a used car. But what if there’s a lighter, cheaper version on the horizon? Let’s explore how Apple could make it happen!

Hey everyone, it’s Cosmo. Let's face it: Apple Vision Pro is a groundbreaking piece of tech, but the exorbitant price and weight make this first-generation product more of a dev kit, even though Apple would never admit it.

Making a lighter and more affordable version of Vision Pro is necessary for the platform’s success. Today, I’m going to explore how they might go about achieving this. 

One of the most controversial rumors reported by Mark Gurman claims Apple is working on a tethered version of Vision Pro powered by your iPhone or Mac. Let’s call this product simply “Vision” – leaving “Vision Pro” for its more expensive sibling.

Here’s a mockup from Andrew Fox I like:

So this sounds a little crazy at first, but is it really? This could be the most viable path for Apple to bring spatial computing to the masses. Think about it – most Vision Pro users are already carrying an iPhone. We're talking about a potential market of over 130 million people who upgrade their iPhones annually. By offloading some of the processing power to the device already in your pocket, Apple could dramatically cut both the weight and price of Vision Pro.

This would create a multi-tiered product line, similar to what we see with iPhones, iPads, and Macs. The lower-end Vision becomes an accessory that supercharges your iPhone experience, while the Pro remains the standalone powerhouse with the most advanced technology.

Picture this: your iPhone goes from being a powerful computer in your pocket to a wearable holodeck. Suddenly, your apps aren't just icons – they're entire spaces you can walk through and interact with.

The weight savings from removing the M2 chip alone would be minimal. But that's just the tip of the iceberg. Apple could go further by switching from aluminum and glass to lighter materials like magnesium and polycarbonate. That would mean a more significant weight reduction without compromising on that premium Apple feel.

Of course, there are challenges. Tethering to an iPhone raises questions about battery life, heat management, and performance. Can your iPhone really handle the power demands of spatial computing without bursting into flames in your pocket? Perhaps it could come with a battery case, but that seems extra clunky.

Then there’s the question of how to overcome the added latency, a concern Darshan Shankar, creator of Bigscreen, raises on X.

He makes some excellent points!
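To make the latency concern concrete, here's a rough back-of-the-envelope sketch. Every number below is an illustrative assumption, not a measured value – the point is just how quickly a wireless tether could eat into a typical motion-to-photon budget:

```python
# Rough motion-to-photon latency budget for a hypothetical tethered headset.
# All figures are illustrative assumptions, not measurements of any real device.

STANDALONE_MS = {
    "sensor_capture": 2.0,
    "tracking_and_render": 8.0,
    "display_scanout": 4.0,
}

TETHER_OVERHEAD_MS = {
    "encode_on_phone": 4.0,     # video encode before transmission
    "wireless_link_rtt": 6.0,   # round trip over a local wireless link
    "decode_on_headset": 3.0,   # decode before display scanout
}

def total_latency(stages: dict[str, float]) -> float:
    """Sum per-stage latencies in milliseconds."""
    return sum(stages.values())

standalone = total_latency(STANDALONE_MS)
tethered = standalone + total_latency(TETHER_OVERHEAD_MS)

print(f"Standalone pipeline: {standalone:.1f} ms")
print(f"Tethered pipeline:   {tethered:.1f} ms")
```

Under these assumed numbers, the tether overhead alone roughly doubles the pipeline – and a common comfort target for head-tracked rendering is on the order of 20 ms motion-to-photon, which is exactly the tightrope Shankar is pointing at.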

Now, let's talk displays. The current Vision Pro boasts some extremely impressive screens, but they also make the device absurdly expensive. Apple has reportedly asked LG and Samsung about making slightly larger screens at about 2/3 the resolution of the current displays made by Sony. 

As with its iPhone, iPad, and Mac product lines, screen quality is a likely lever Apple could use to differentiate the Vision lineup, but how much blur can our eyes tolerate? Apple is walking a tightrope here between affordability and that signature Apple polish we’ve come to expect.
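To get a feel for what "2/3 the resolution" could mean for sharpness, here's a purely illustrative calculation. The panel and field-of-view figures are assumptions (not confirmed specs), and I'm reading "2/3 the resolution" as a per-axis cut:

```python
# Illustrative pixels-per-degree comparison for a hypothetical cheaper panel.
# All numbers are assumptions, not confirmed specs for any Apple product.

def pixels_per_degree(horizontal_pixels: int, horizontal_fov_deg: float) -> float:
    """Average angular resolution across the horizontal field of view."""
    return horizontal_pixels / horizontal_fov_deg

# Assumed current-generation panel: ~3660 px horizontal over a ~100-degree FOV.
current_ppd = pixels_per_degree(3660, 100.0)

# Hypothetical budget panel: 2/3 the horizontal pixels on a slightly larger,
# slightly wider screen (105 degrees assumed).
budget_ppd = pixels_per_degree(int(3660 * 2 / 3), 105.0)

print(f"Current: {current_ppd:.1f} ppd, budget: {budget_ppd:.1f} ppd")
```

Under these assumptions the budget panel lands around a third fewer pixels per degree than the current one – still well above most of today's consumer headsets, but a visible step down for reading text in spatial apps.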

Are there elements Apple could remove entirely? Some folks online suggest ditching EyeSight – the external display that shows your googly eyes to people outside the virtual world. I'm not so sure about this.

Yes, it adds weight and drains battery, but it's a key feature that sets Vision Pro apart from other mixed reality headsets and plays a crucial role in social acceptance. Apple went out of their way to engineer a fancy lenticular display, and I don’t see them compromising on their human-centered design philosophy. Without EyeSight, we might all look like we're cosplaying as dystopian cyclops.

At the end of the day, a lighter, more affordable Vision Pro is crucial if Apple wants to make spatial computing a mainstream reality. Can they pull it off without compromising the magic of the experience? The fate of Apple's next big thing depends on it.

What do you think? Is a tethered Vision the key to unlocking spatial computing for everyone? Or is it a compromise too far? 

Reminder: sign up for Vision Hack

We announced the first global visionOS hackathon last week. It’s taking place September 13th-15th. We hope to see you there!

Magic Beans of the Week

One of the most controversial aspects of Apple Vision Pro is its lack of controllers, which have been a staple of VR headsets for many years. Relying on hand tracking has its benefits: you don’t need to charge anything, there’s less to learn, and you feel like you’re in Minority Report. But let’s be real—no motion controllers means Vision Pro isn’t compatible with most existing VR games, and let’s not forget the satisfying haptic feedback we’re all missing.

Enter Surreal Touch, ready to do what Apple won’t with their new motion controllers for Vision Pro. Think Meta’s Quest Touch meets Jony Ive—sleek, stylish, and oh-so on-brand! While it’s unlikely many developers will support these natively, they seem like a fantastic option for PCVR gamers who also own a Vision Pro.

Going hand-tracking-first was a clever move, but I bet Apple will release some kind of controller or peripheral in 2-4 years. Motion controllers offer a level of precision and haptic feedback that hand tracking alone cannot provide. So, until Apple catches up, Surreal Touch might just be the fix we need.

Meta’s 3DGen represents a notable advancement in 3D asset creation. It significantly reduces the time needed to generate high-quality 3D shapes and textures, achieving results in under a minute. By integrating Meta 3D AssetGen and TextureGen, 3DGen offers a seamless and efficient solution that outperforms existing methods in both speed and quality.

This is particularly valuable for industries like gaming and XR, where quick turnaround and high fidelity are essential. I love seeing research like this progress!

Well, this is spicy! In a potentially game-changing move, Apple’s Phil Schiller is reportedly joining OpenAI’s board in an observer role. This could grant Apple an insider’s view into the strategy and roadmap of one of AI’s key players.

With Microsoft already holding a seat as an observer, the dynamic could get a bit awkward. Imagine the board meetings – a tech giant showdown! The collaboration between these powerhouses might shape the future of AI in unexpected ways, and we’ll be watching closely to see how this power play unfolds.

AR glasses are the most sought-after yet elusive gadget in technology. They promise to unlock a new paradigm for developers and potentially replace our smartphones. But so far, they’ve been hampered by narrow fields of view and poor battery life.

Zuckerberg has been teasing Meta’s AR glasses for years, but it sounds like they’ve achieved some kind of breakthrough. Could we see this unveiled at Connect in September? If Meta’s cracked the code, this could be the moment AR glasses finally step out of sci-fi and into our daily lives. Keep your eyes peeled—this could be the start of something big.

Generative video has improved immensely over the past few years, but it’s still a bit of a wild card for storytellers who need consistent characters and worlds. Enter Odyssey, a newly announced, venture-backed company founded by Oliver Cameron and Jeff Hawke.

Here’s what sets Odyssey apart from tools like Dream Machine and Gen-3: “Instead of training one model that restricts you to a single input and a single, non-editable output, we're training four powerful generative models that enable fine-tuned control over each major layer of visual storytelling. Specifically, models capable of generating high-quality geometry, photorealistic materials, stunning lighting, and controllable motion.”

I love this approach because it mirrors the philosophy behind MovieBot, emphasizing the ability to edit what the AI produces. It’s a game-changer for anyone looking to create intricate, consistent narratives with the flexibility to tweak every detail.

Video of the Week

A cop pulled a driverless Waymo car over in Phoenix, only to get tech support instead. Surreal.

Thank you for reading. Till next week! 😊

Best,
Cosmo