Facebook founder Mark Zuckerberg has been on a mission to pursue metaverse technologies to reinvent how we meet, greet, socialize, and work. But the pursuit has taken a massive toll on his company, Meta, eroding roughly $750 billion of its market value. The story recalls the bankruptcy of Kodak and the sudden fall of Nokia, both swept away by reinvention waves unleashing creative destruction. Kodak went bankrupt because its management did not pursue self-recreation by reinventing the film camera as a digital one. Nokia suffered a similar fate when the multi-touch iPhone set off the smartphone reinvention wave. Unlike Kodak and Nokia, however, Meta is suffering precisely because it is pursuing self-reinvention through creative destruction. Hence, speculation is on the rise: is Meta on a death spiral for seeking reinvention through metaverse technology?
Meta’s Facebook has been the dominant platform for meeting, greeting, and socializing with friends. Thanks to the network effect, riding the diffusion of smartphones and mobile internet, Facebook grew to an astronomical scale, making Meta a $1 trillion company. The technology core behind Facebook’s innovation comprises software apps, cloud platforms, databases, smartphones, and mobile internet. This is a generic technology core, developed by numerous firms; hence, the threat of imitation and of innovation built on it is high. Besides the network effect, there appears to be no barrier to imitating Facebook and eroding its customer base.
Furthermore, as this technology core advances, competitors gain growing options to respond with innovation. The rise of TikTok, owing to its ease of editing and uploading videos, is therefore no surprise. This is why Zuckerberg has been after the metaverse to reinvent how we meet, greet, play, and work.
History of metaverse technology and applications
Although the metaverse is high-end digital technology, it began its journey as a mechanical tool. The underlying science of stereoscopic vision was discovered in 1832 by the English physicist Sir Charles Wheatstone, and Sir David Brewster improved on it in 1849. The stereograph became especially popular after Queen Victoria expressed interest in it in 1851. The modern form of metaverse technology, however, has its roots in the 1968 development of head-mounted displays for overlaying graphics objects on images of 2D terrain. During the 1970s-1990s, AR/VR was mainly limited to military and space applications, and researchers also developed augmented reality interfaces for telerobotics. Along the way, complementary technologies such as haptic interfaces and simulated sound were added.
Commercial applications started in 2008 with the development of AR/VR applications for advertising. The decade of the 2010s then began to witness applications in games, training, entertainment, and education.
Metaverse technology is the fusion of refined versions of many component technologies. A few essential members are (i) displays and optics, (ii) graphics, (iii) cameras, (iv) speech synthesis and recognition, (v) image processing and computer vision, (vi) haptic interfaces, (vii) tactile, smell, motion, position, orientation, and other sensors, (viii) wireless connectivity, and (ix) the internet. To grow the metaverse as a force of creative destruction, a growing number of technologies must be fused with existing ones, and each candidate technology needs to be refined much further to meet the demand.
Metaverse technology basics
The metaverse’s basic technology is stereo vision built from real-life electronic or synthetic images. Images captured by our digital cameras are 2D projections of our 3D world; likewise, graphics or synthetic images created and visualized on computer displays are 2D projections. But we perceive the real world in 3D. Each of our eyes sees the surroundings from a slightly different vantage point, and our brain fuses these two images to develop a 3D perspective. Virtual reality takes advantage of this functionality.
With two cameras (left and right), we capture images (or videos) of the same object or environment from two slightly different vantage points. We then project each frame to the corresponding eye, creating the illusion of immersive 3D vision. We can form those images either by capturing them with two cameras or by synthesizing them as graphics; we can also fuse real-life and graphics images, creating an augmented reality effect. Head-mounted displays are needed to project the images to our eyes.
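The geometry behind two-camera stereo capture can be illustrated with a toy calculation: the farther an object is, the smaller the shift (disparity) of its position between the left and right images. A minimal sketch, assuming a simple pinhole-camera model; the focal length, baseline, and disparity values below are illustrative assumptions, not real device specifications:

```python
# Depth from stereo disparity under an assumed pinhole-camera model:
# depth = (focal_length * baseline) / disparity

def depth_from_disparity(focal_length_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Distance (meters) to a point seen by both cameras."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return (focal_length_px * baseline_m) / disparity_px

# Two cameras 6.3 cm apart (roughly the human interpupillary distance),
# an assumed focal length of 1000 pixels: a point that shifts 50 pixels
# between the left and right images is about 1.26 m away.
print(depth_from_disparity(1000, 0.063, 50))  # 1.26
```

This inverse relationship between disparity and distance is what the brain exploits when it fuses the two retinal images into a 3D percept.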
In addition to the illusion of immersing viewers in a 3D world, attached haptic interfaces can give users a sense of touch. Through speech synthesis and recognition, it is also feasible to let real human beings converse with synthetic characters or avatars. Distant communication requires high-speed, low-latency connectivity, and untethered communication among haptic interfaces, head-mounted displays, and real-life objects requires a high-speed, low-latency wireless network. Sensors and actuators that simulate real-life attributes like smell and touch add further realism. And beyond making sense of images, computer vision plays a vital role in detecting and recognizing objects.
Cost and performance challenges of metaverse technology
First of all, head-mounted displays should be thin and light. They should be like a pair of plastic frame goggles for extended and intuitive usage. The accommodation of high computational power, battery, wireless interfaces, retina displays, and high-resolution cameras in such compact physical space is a daunting challenge.
The second challenge concerns the field of view (FOV). State-of-the-art AR/VR devices offer a FOV of up to 90 degrees, far less than the roughly 190 degrees horizontal and 120 degrees vertical of normal human vision. To create the immersive experiences Meta and others aim for, devices must cover as much of that FOV as possible.
The wider the FOV, the more immersive the experience for the human eye and brain; but a larger FOV makes headsets bulkier. At current sizes, prolonged use of these devices is uncomfortable and unlikely, so the vision of intuitive socializing and working in virtual environments is not yet feasible. In addition to FOV and weight, brightness, display quality, and latency all demand advancement to address user-experience issues.
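A back-of-the-envelope calculation shows why widening the FOV strains display technology. Assuming normal (20/20) vision resolves roughly 60 pixels per degree (an approximation, not a figure from the original text), the pixel counts needed per eye grow quickly with FOV:

```python
# Rough display-resolution estimate per eye for a given field of view.
# Assumption: 20/20 vision resolves about 60 pixels per degree.
PIXELS_PER_DEGREE = 60

def pixels_needed(fov_h_deg: float, fov_v_deg: float) -> tuple:
    """(horizontal, vertical) pixel counts to match visual acuity."""
    return (int(fov_h_deg * PIXELS_PER_DEGREE),
            int(fov_v_deg * PIXELS_PER_DEGREE))

# A 90-degree headset vs. the full ~190 x 120-degree human visual field:
print(pixels_needed(90, 90))    # (5400, 5400)
print(pixels_needed(190, 120))  # (11400, 7200)
```

Matching the full human FOV at retinal acuity would demand displays far denser than today's headset panels, which helps explain why FOV, display quality, and device bulk trade off against one another.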
To realize Meta’s vision of the metaverse, in which a billion or more people simultaneously interact with real and virtual objects as individuals and groups, internet speed and latency would be a big issue. Some experts suggest the need for a 1,000-fold improvement over what we have today. Such a challenge raises questions about the physical limits of many key technologies. How far is it feasible to extend the wireless interface by going to frequencies higher than 5G’s? Such a gigantic leap would also demand significant improvement in semiconductor devices, far beyond the 2 nm node. Besides, there are sensory and computer vision issues.
Motion sickness challenge could be a showstopper
Issues pertaining to the human body’s response to projecting stereo images alternately to the eyes are formidable. The result is called virtual reality (VR) sickness, similar to motion sickness. Symptoms include general discomfort, eye strain, headache, stomach awareness, nausea, vomiting, pallor, sweating, fatigue, drowsiness, disorientation, and apathy. Common causes are frame rate, input lag, and the vergence-accommodation conflict. Among them, the vergence-accommodation conflict appears to be the most difficult: vergence is the simultaneous movement of the eyes toward or away from one another during focusing, and in a headset the eyes converge on a virtual object at one apparent distance while they must keep focusing on a display at a fixed distance. Among many other performance issues, VR sickness threatens to be a roadblock; overcoming it may even require bypassing the eyes to project images directly to the brain.
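The vergence side of the conflict can be quantified with simple trigonometry. A sketch assuming an interpupillary distance of 6.3 cm (an illustrative value, not from the original text): the angle between the two eyes' lines of sight varies strongly with the apparent distance of a virtual object, even though the eyes' focus stays locked on the headset display.

```python
import math

# Vergence angle for an object at distance d, given an assumed
# interpupillary distance (IPD) of 6.3 cm.
IPD_M = 0.063

def vergence_angle_deg(distance_m: float) -> float:
    """Angle (degrees) between the two eyes' lines of sight."""
    return math.degrees(2 * math.atan(IPD_M / (2 * distance_m)))

# A virtual object at 0.5 m demands a much larger vergence angle than
# one at 10 m, while accommodation stays fixed on the display:
print(round(vergence_angle_deg(0.5), 2))   # 7.21
print(round(vergence_angle_deg(10.0), 3))  # 0.361
```

The mismatch between this varying vergence cue and the unvarying accommodation cue is what makes the conflict hard to engineer away with conventional fixed-focus displays.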
R&D and end-user device cost issues
Facebook was born in a college dormitory, out of individual laptops and a few thousand dollars for a server; it needed only a few million to roll out and start generating revenue. The R&D cost challenge of the metaverse is far greater. Despite investing more than $35 billion in R&D, Meta sees no light yet on generating revenue from showing advertisements in virtual space. In 2021 alone, Meta invested $10 billion in Reality Labs for metaverse-related R&D (The Verge). What little revenue Meta generates comes from selling AR/VR devices at a loss. Such staggering R&D investment aims to capitalize on the metaverse’s market potential, estimated at somewhere between $5 trillion and $13 trillion. According to some pundits, AR/VR is likely the next big innovation frontier in the information technology space.
On the other hand, customers find high-end AR/VR gear, costing $2,000 or more, hardly affordable. It is worth noting that the more than 2 billion users accessing Facebook need not purchase any special device; consumers socialize over Facebook on smartphones purchased for other purposes. Hence, every additional dollar needed to access the metaverse will act as a diffusion barrier.
Metaverse technology challenges in pursuing reinvention to fuel creative destruction
Some surveys indicate that more than 50 percent of respondents cite the high manufacturing cost and low performance of AR/VR devices. The next concern is the limited ability of consumer technologies like smartphones and watches to interface with AR/VR gear. Almost 40 percent of respondents point to the form factor as a barrier to integrating AR/VR devices into daily activities, and mobile connectivity and application ecosystems appear to be high barriers to almost 30 percent of respondents.
Irrespective of their underlying potential, all great technology possibilities have a history of emerging in primitive form; examples abound, from LED light bulbs to digital cameras. Hence, the challenge has been to keep advancing the underlying technology core until it crosses the threshold. In addition to technology uncertainty, innovators face uncertainty about the R&D investment and time required. Managing such long and uncertain journeys often demands exceptional personal traits and management capability.
Despite the possibility, not all missions of pursuing reinvention succeed as forces of creative destruction. For example, the $80 billion invested in autonomous-vehicle R&D has yet to meet expectations. Sometimes a Nobel Prize-caliber scientific breakthrough may be needed to overcome the technology barrier. In the case of the metaverse, beyond the digital technologies there is a physiological hurdle: motion sickness. Furthermore, each component technology must be refined considerably to exploit the latent potential of immersion. Hence, the metaverse technology barrier appears substantial enough to raise the question of whether Meta will succeed in fueling creative destruction in how we communicate, socialize, play, and work.