Back in 1958, General Motors envisioned a future where autonomous vehicles were the norm. Decades have passed, yet self-driving cars still seem a long way from mainstream utility. Prominent setbacks at Uber and Argo AI lend weight to the idea of a waning market.
Yet the global market for autonomous cars is still expected to reach $1,191.8 billion. According to McKinsey, by 2030 fully autonomous vehicles will be able to operate “anywhere, anytime” with Level 5 technology, the highest defined standard of autonomy.
How do we reconcile these conflicting signals? Which ones should engineers pay the most attention to? And, more importantly, how can engineers avoid previous mistakes and pave the way toward an autonomous future?
This article will walk through the state of autonomous vehicles in 2023. Read on to see how the industry is charting a path forward toward mainstream utility and adoption.
In many respects, fully autonomous vehicles seem futuristic. So how did the stuff of science fiction end up as a viable business model? Here’s a brief summary of the breakthroughs that got us from there to here.
Robert Whitehead’s invention of the self-propelled torpedo in 1866 proved to be a game-changer for naval fleets. While torpedo guidance would evolve greatly from there, it was a major step forward in autonomy.
Extended air travel drove the invention of autopilot systems for long-range aircraft. “Mechanical Mike” was a prototype used during a 13,000-mile round-the-world flight in 1933. The technology was built on gyroscopes, which remain a major component of autonomous cars even today.
Cruise control, now a familiar feature of the modern vehicle, was based on a simple mechanical throttle that could hold the vehicle’s speed. The invention became commercially available in 1958.
The Space Race was in full swing when James Adams built the Stanford Cart, a precursor to a remote-controlled lunar rover that wouldn’t have to wait out the roughly 2.5-second round-trip signal delay between Earth and the Moon for every command. The technology was based on cameras that could detect and autonomously follow a solid white line on the ground. Cameras, of course, play a vital role in modern autonomous technology.
Japan’s Tsukuba Mechanical Engineering Laboratory iterated on the Stanford Cart to build an autonomous car that could recognize street markings while traveling at nearly 20 miles per hour.
German engineer Ernst Dickmanns equipped a sedan with a bank of cameras and 60 microprocessors to detect objects on the road, both in front and behind. The key innovation was “dynamic vision,” which enabled the imaging system to filter out “noise” and focus only on relevant objects.
For over 20 years, the Predator drone has been flying over global hotspots for 14 hours at a time. It’s indicative of one of the most impactful classes of autonomous vehicles: drones.
The U.S. Department of Defense’s research arm, DARPA, sponsored a series of challenges to push the development of autonomous technologies. In 2004, it held a competition challenging vehicles to self-navigate 150 miles of desert roadway; no car succeeded that first year. But at the follow-up Urban Challenge in 2007, four cars were able to complete the course in the allotted time, signaling a major leap forward in autonomy.
We can’t go without mentioning Tesla’s Autopilot, which enabled hands-free control for highway and freeway driving. Most notably, this feature came not through a new model or a hardware installation, but through a single software update pushed to Model S owners overnight.
Reading that brief history, you may have wondered: wait, the lunar rover prototype was technically considered autonomous?
That may be because “autonomy” is a wide-ranging term encompassing different degrees of capability. Here are the six commonly accepted autonomous vehicle levels that engineers use to measure their progress.
The first level, Level 0, is the most obvious: complete manual control. The human performs the entire “dynamic driving task.” Even though there may be tools to help the driver, such as an emergency braking system, these tools don’t technically drive the vehicle. As such, they’re not considered autonomous.
Level 1, the lowest level of automation, features a single automated system for driver assistance. Adaptive cruise control, where the vehicle keeps itself at a safe distance behind the car ahead, is a good example, because the human driver still handles the other aspects of driving, like steering and braking.
Level 2 includes advanced driver assistance systems (ADAS) which control both steering and acceleration. The automation only falls short of self-driving because a human sits in the driver’s seat and can take control of the car at any time. Both the Tesla Autopilot and Cadillac Super Cruise qualify as Level 2.
From a functionality perspective, the jump from Level 2 to Level 3 is substantial, even if it’s difficult for a human to tell the difference. Level 3 vehicles have “environmental detection” capabilities that enable them to make informed decisions for themselves. However, they still require a human fallback - the driver must remain alert and ready to take control if the system cannot complete the task.
An example of this was the 2019 Audi A8L, the world’s first production Level 3 vehicle. The model featured Traffic Jam Pilot, which combined a LiDAR scanner with advanced sensor fusion and processing power.
The main difference between Level 3 and Level 4 is that the latter can intervene if things go wrong or if there’s a system failure. While the human has the option to manually override, they don’t need to.
Although Level 4 capabilities allow for self-driving, current legislation and infrastructure only enable them to operate within a limited area - typically an urban environment where top speeds reach only about 30 mph. For this reason, most Level 4 vehicles are geared toward ridesharing use cases.
The final step in autonomy removes the human from the “dynamic driving task” entirely. Level 5 vehicles won’t have steering wheels or acceleration and brake pedals. They will be free to go anywhere and do anything that an experienced human driver can do. As yet, none are available to the general public.
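For engineers who like to see a taxonomy in code, here is a minimal sketch of the six levels as a simple data structure. The enum names and the helper function are illustrative restatements of the SAE descriptions above, not any official API.

```python
from enum import IntEnum

class DrivingAutomationLevel(IntEnum):
    """The six commonly accepted levels described above (names are illustrative)."""
    NO_AUTOMATION = 0           # human performs the entire dynamic driving task
    DRIVER_ASSISTANCE = 1       # one assist system, e.g. adaptive cruise control
    PARTIAL_AUTOMATION = 2      # ADAS steers and accelerates; driver supervises
    CONDITIONAL_AUTOMATION = 3  # environmental detection; driver must stay ready
    HIGH_AUTOMATION = 4         # self-driving within a limited operational domain
    FULL_AUTOMATION = 5         # no steering wheel or pedals; drives anywhere

def attentive_driver_required(level: DrivingAutomationLevel) -> bool:
    """Levels 0 through 3 all still assume an alert human behind the wheel."""
    return level <= DrivingAutomationLevel.CONDITIONAL_AUTOMATION

print(attentive_driver_required(DrivingAutomationLevel.PARTIAL_AUTOMATION))  # True
print(attentive_driver_required(DrivingAutomationLevel.HIGH_AUTOMATION))     # False
```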
Now let’s take a look at the current state of autonomy in 2023, and some of the major autonomous car companies in the market right now. Note that these companies are a mixed bag in terms of success.
Perhaps the most notable example of an autonomous vehicle startup - particularly one whose time has come and gone - is Argo AI. Founded in 2016, Argo built maps, software, hardware, and cloud infrastructure for the autonomous vehicle sector. $3.6B in investments later, the company closed in 2022.
Founded in Stockholm, Sweden in 2016, Einride has received $652.3M in debt financing and has seen five-year search growth of 26%. In 2021, the company raised a massive $110M Series B round. Its self-driving Einride Pod is geared specifically toward the freight-hauling and trucking sector, serving clients like Michelin, Coca-Cola, and others.
Closing $111M in Series C funding just last year, May Mobility develops and deploys autonomous vehicle services in cities like Arlington, Ann Arbor, Hiroshima, Detroit, and Grand Rapids. Since its first route in 2019, the company has provided over 320,000 public rides and has made major commitments to scale up in the future. In 2021, Bridgestone took a minority stake in the company.
Fresh off an announcement of groundbreaking adaptive lidar technology, AEye works to provide “military-grade performance” for autonomous vehicles, with sensing technology that mimics human perception. In 2021, the company went public, boasting $314.1M in equity.
Boasting nearly $1B in funding, Zoox is a subsidiary of Amazon focused entirely on developing vehicles for the robotaxi market. When Amazon made the acquisition in 2020, the company was valued at $1.2B.
With $5.5B in funding, Alphabet’s subsidiary Waymo began all the way back in 2009 as the Google Self-Driving Car Project. Now the company operates self-driving fleets in Phoenix and San Francisco and shows no sign of slowing down, especially with the powerhouse that is Alphabet/Google behind it.
Although hopes for the autonomous vehicle sector are high, the track record of companies that have reached IPO isn’t encouraging. But why do autonomous vehicle companies struggle? And, more importantly, what can other companies do to avoid the same mistakes?
Autonomous capabilities are complicated to develop. Although early movers expected great complexity in this space, they did not anticipate how complex things would become—and how large the operational design domain (ODD) is.
There are a near-infinite number of situations autonomous vehicles could be exposed to, and physical testing sites can only account for a small portion of them. As such, when engineers limit their testing to what can be staged physically, it’s difficult to train the vehicle to respond the way a human being would.
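To make “near-infinite” concrete, here is a rough back-of-the-envelope sketch. The parameter choices are hypothetical, but even this coarse grid multiplies out to thousands of discrete scenarios before any continuous variable (speed, gap, timing, sensor noise) is considered.

```python
# Hypothetical, deliberately coarse parameterization of a driving scenario.
# Every value here is an illustrative placeholder, not a real test plan.
scenario_space = {
    "weather": ["clear", "rain", "snow", "fog"],
    "lighting": ["day", "dusk", "night"],
    "road_type": ["highway", "urban", "rural", "parking_lot"],
    "surface": ["dry", "wet", "icy"],
    "traffic_density": ["light", "moderate", "heavy"],
    "pedestrian_count": [0, 1, 2, 5, 10],
    "lead_vehicle_behavior": ["steady", "hard_brake", "cut_in", "erratic"],
}

combinations = 1
for options in scenario_space.values():
    combinations *= len(options)

print(combinations)  # 4 * 3 * 4 * 3 * 3 * 5 * 4 = 8,640 discrete scenarios
```

A physical proving ground can realistically stage only a tiny fraction of even this simplified matrix.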
Related to the problem above, most autonomous vehicle manufacturers and engineers focus their testing efforts exclusively on physical environments. Physical testing provides significant data about how the vehicle will behave in that environment and ones very similar to it, but not much else.
The problem is that when it comes to autonomy, neither public opinion nor the legal system tolerates serious failures. The backlash from Uber’s infamous 2018 autonomous vehicle accident - which resulted in a fatality - demonstrates this fact.
Although there’s no such thing as perfection, autonomous vehicles have to get pretty close to perfect in order to be viable. When testing capabilities are limited, engineers’ hands are tied, and they often struggle to develop a product that the public—and the courts—will accept.
One of the major consequences of this complexity: it’s impossible to account for every single potential scenario and reduce the odds of a collision or other accident to zero. Depending on the tools and testing platforms at engineers’ disposal, even getting close to zero can be a serious challenge.
The general consensus is that zero is impossible, so the more common standard is to make the vehicle “better than a human driver.” The problem: how do you define that? Without a clear definition, it’s hard to know where the finish line is.
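One possible way to operationalize the benchmark is to compare incident rates per million miles against a human baseline. The sketch below uses entirely hypothetical figures; the baseline, the margin, and the fleet numbers are placeholders, not published statistics.

```python
# All numbers below are hypothetical placeholders for illustration only.
HUMAN_CRASHES_PER_MILLION_MILES = 4.0  # assumed human baseline, not a real statistic

def crashes_per_million_miles(crashes: int, miles_driven: float) -> float:
    """Normalize an observed crash count to a rate per one million miles."""
    return crashes / (miles_driven / 1_000_000)

def better_than_human(crashes: int, miles_driven: float, margin: float = 0.5) -> bool:
    """One candidate definition of 'better than a human driver': the fleet's
    observed rate must beat the assumed human baseline by a safety margin."""
    return crashes_per_million_miles(crashes, miles_driven) < HUMAN_CRASHES_PER_MILLION_MILES * margin

# Example: a hypothetical fleet with 12 incidents over 8 million autonomous miles.
print(crashes_per_million_miles(12, 8_000_000))  # 1.5 incidents per million miles
print(better_than_human(12, 8_000_000))          # True under this particular definition
```

Even this toy definition exposes the judgment calls involved: which incidents count, which human baseline applies, and how large a margin the public and regulators will demand.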
To date, there hasn’t been clear guidance on what federal, state, and local governments deem a “safe” autonomous vehicle. Some of this is due to the nascency of the technology; regulators are notorious for falling behind major technological leaps.
Another problem is that, currently, most regulation happens at the state level, which means U.S.-based manufacturers face no uniform standards across the country.
Despite these complex challenges, things aren’t all bad for the autonomous vehicle sector. According to a recent article published on Forbes, the death of self-driving cars has been greatly exaggerated, and the article points to several “signs of life” across the industry.
Keep in mind that, for all the time autonomous vehicles have been talked about, significant advances into Levels 4 and 5 are relatively recent. What’s more, virtual modeling and testing has only recently become advanced enough to support the complex iterations that autonomous vehicle engineers require.
So here we are, roughly 500 years after the idea of an autonomous vehicle was first recorded. Although we’ve made much progress and the technology continues to develop, there’s still a long road ahead before we reach mainstream utility.
That said, one of the most powerful tools for moving the industry forward is virtual modeling & testing. By creating models that can be exercised across millions of environments and scenarios in a matter of hours, autonomous vehicle engineers can build smarter, more adaptive systems and significantly increase vehicle safety.
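As a generic illustration of the idea (not Collimator’s actual API), the sketch below sweeps a simple hard-braking scenario across a hundred thousand randomized parameter combinations. The vehicle model, parameter ranges, and pass criterion are all assumptions made for the example.

```python
import random
from dataclasses import dataclass

@dataclass
class Scenario:
    ego_speed_mps: float     # ego vehicle speed (m/s)
    lead_gap_m: float        # initial gap to the lead vehicle (m)
    lead_decel_mps2: float   # how hard the lead vehicle brakes (m/s^2)
    road_friction: float     # 1.0 = dry asphalt, lower = wet or icy

def simulate(s: Scenario) -> bool:
    """Stand-in for a physics-based simulation of a hard-braking lead vehicle.
    Returns True if the ego vehicle stops before closing the gap."""
    reaction_time_s = 0.2                  # assumed perception + control latency
    ego_max_decel = 7.0 * s.road_friction  # assumed braking limit (m/s^2)
    ego_stop = s.ego_speed_mps * reaction_time_s + s.ego_speed_mps**2 / (2 * ego_max_decel)
    lead_stop = s.ego_speed_mps**2 / (2 * s.lead_decel_mps2)  # lead starts at the same speed
    return ego_stop < s.lead_gap_m + lead_stop

def random_scenario() -> Scenario:
    return Scenario(
        ego_speed_mps=random.uniform(5, 35),
        lead_gap_m=random.uniform(10, 80),
        lead_decel_mps2=random.uniform(2, 9),
        road_friction=random.uniform(0.3, 1.0),
    )

# Sweep 100,000 randomized scenarios in seconds -- coverage no physical test track can match.
results = [simulate(random_scenario()) for _ in range(100_000)]
print(f"pass rate: {sum(results) / len(results):.1%}")
```

Swapping the placeholder physics for a high-fidelity vehicle and sensor model is what turns this kind of sweep into a meaningful safety argument.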
Learn more about Collimator’s unique, AI-powered modeling & testing platform here. Curious to see it in action? Schedule a demo with an engineer.