Amid the accelerating wave of global automotive intelligence, Tesla has long been regarded as a benchmark for autonomous driving technology. However, its Robotaxi service, launched in Austin, Texas, in June 2025, has drawn controversy over several operational failures and safety incidents, and the National Highway Traffic Safety Administration (NHTSA) has formally opened an investigation. The outcome matters not only for Tesla's technical route and brand reputation; it is also a wake-up call for the entire autonomous driving industry.
1. The "bet" of a pure vision system: Tesla's path selection
Unlike mainstream autonomous driving companies such as Waymo and Cruise, which use multi-sensor fusion combining lidar, millimeter-wave radar, and cameras, Tesla has committed to a perception path that relies almost entirely on vision. Its FSD (Full Self-Driving) system is built around 8 surround-view cameras, with neural networks running on high-compute chips handling image recognition and path planning, in pursuit of a low-cost, mass-producible autonomous driving solution.
This "de-lidarization" approach stems from Tesla CEO Elon Musk's belief that since human driving relies mainly on the eyes, cameras that mimic human vision should suffice. He has repeatedly called lidar an "expensive and redundant crutch" in public. But this strategy of "replacing hardware with algorithms" is now colliding with reality.
2. Robotaxi Safety Alert: The Vision System in Action
According to a test video recorded by Rob Maurer, a well-known podcast host, Tesla's Robotaxi tried to drive into the wrong lane several times during actual operation and even exceeded the speed limit, driving at 35 mph on a road posted at 30 mph. Although the behavior was corrected promptly, such failures raise questions about the reliability of the vision system in complex road conditions.
Problems of this kind are not isolated cases. Over the past few years, U.S. regulators and consumer organizations have repeatedly warned that Tesla's FSD system is unreliable in scenarios such as night driving, rain and snow, and road-sign recognition. In 2024, Tesla was forced to recall more than 2 million vehicles for software updates after multiple collisions involving vehicles accelerating through intersections.
3. Comparison: the "hardware moat" of Waymo and Cruise
Standing opposite Tesla are Waymo, owned by Google's parent company Alphabet, and Cruise, backed by General Motors (GM). Both companies use multi-sensor fusion, including lidar, millimeter-wave radar, and high-definition cameras, complemented by strong real-time mapping and processing capabilities.
Waymo, for example, fields a fifth-generation lidar system with a detection range of up to 300 meters at centimeter-level accuracy, able to detect traffic cones and infer the intent of cyclists and pedestrians. Waymo currently operates more than 1,500 Robotaxis in four U.S. cities (Phoenix, San Francisco, Los Angeles, and Austin), built on the Jaguar I-Pace platform and running fully driverless. According to data disclosed by Waymo, its Level 4 autonomous driving service has reduced the accident rate by about 40% compared with human driving over the past year.
In San Francisco's urban operations, Cruise's autonomous vehicles have an accident rate roughly 20%-30% lower than Tesla's, and lidar and millimeter-wave radar have improved their perception at night and in rainy weather. This gap clearly reflects the impact of hardware configuration on the stability of an autonomous driving system.
Figure: The "visual illusion" of the camera: Can Tesla Robotaxi use algorithms to combat hardware shortcomings?
4. Technical grading: Tesla's FSD is still L2
Although Tesla markets its Robotaxi as "fully self-driving," the industry consensus is that the system remains at SAE Level 2. Level 2 systems provide only partial automation in limited scenarios and still require an attentive driver. Waymo and Cruise, by contrast, have received Level 4 autonomous driving permits from the California Public Utilities Commission (CPUC) to offer operational services without safety personnel.
Tesla's current risk-control measures include a safety employee in the passenger seat ready to intervene, remote operations centers monitoring trips, and in-app "pull over" and "emergency stop" buttons. These measures mitigate risk, but they are essentially a patch over the vision system's insufficient capability rather than a substitute for hardware-level perception.
5. Physical limitations of camera systems: blind spots in a two-dimensional world
In terms of physical principles, a camera system builds its environmental model from image recognition and cannot directly measure spatial depth with high precision. In adverse conditions such as rain, snow, fog, or darkness, camera images suffer from blur, glare, and other degradations. According to a study by the Massachusetts Institute of Technology (MIT), camera recognition accuracy drops by about 38% on average in heavy rain, and response latency to fast-moving objects increases by more than 20 milliseconds.
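To see why passive cameras struggle with precise depth, consider the stereo-vision case: depth is inferred from pixel disparity as Z = f·B/d, so a small disparity error produces a depth error that grows with the square of distance. The sketch below illustrates this; the focal length and baseline are illustrative assumptions, not any vendor's actual parameters:

```python
# Toy illustration: depth error in stereo vision grows quadratically
# with distance. All parameters below are assumed for illustration only.

FOCAL_PX = 1000.0   # focal length in pixels (assumed)
BASELINE_M = 0.3    # distance between the two cameras in meters (assumed)

def stereo_depth(disparity_px: float) -> float:
    """Depth from stereo disparity: Z = f * B / d."""
    return FOCAL_PX * BASELINE_M / disparity_px

def depth_error(true_depth_m: float, disparity_err_px: float = 1.0) -> float:
    """Approximate depth error from a small disparity error:
    dZ ~= Z**2 / (f * B) * dd  -- quadratic in distance."""
    return true_depth_m ** 2 / (FOCAL_PX * BASELINE_M) * disparity_err_px

for z in (10, 50, 100):
    print(f"at {z:3d} m, a 1-px disparity error causes "
          f"~{depth_error(z):.2f} m of depth error")
```

With these assumed numbers, a single pixel of disparity error is harmless at 10 m but corresponds to tens of meters of depth uncertainty at 100 m, which is one reason active ranging sensors are valued for highway-speed perception.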
By contrast, lidar is an active sensor: it measures distance directly with laser beams and is unaffected by changes in ambient light. Millimeter-wave radar retains good penetration and object-detection capability in low-visibility scenarios. Fusing these sensors has become the mainstream trend in the autonomous driving industry.
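One standard way to combine redundant range measurements from different sensors is inverse-variance weighting, the static special case of a Kalman update: the more precise sensor dominates, and the fused estimate is more certain than either input. The sketch below fuses a noisy camera-derived range with a more precise lidar range; the noise figures are assumptions chosen purely for illustration:

```python
# Minimal sensor-fusion sketch: inverse-variance weighted average of two
# independent range estimates. Noise values are illustrative assumptions.

def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
    """Combine two independent estimates of the same quantity.
    Weights are inverse variances, so the more precise sensor dominates;
    the fused variance is smaller than either input variance."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Assumed example: the camera estimates 52 m with high variance, the lidar
# estimates 50 m with low variance; the result sits near the lidar reading.
dist, var = fuse(est_a=52.0, var_a=4.0, est_b=50.0, var_b=0.04)
print(f"fused distance: {dist:.2f} m (variance {var:.4f})")
```

Production systems fuse far richer state (position, velocity, object class) with Kalman or particle filters, but the principle is the same: redundancy across sensing modalities shrinks uncertainty and covers each sensor's failure modes.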
6. Cost considerations and scale dilemmas
It is undeniable that one motivation for Tesla's "de-lidarization" is cost control. A single lidar unit once cost thousands of dollars, and even solid-state lidar, whose price has fallen to a few hundred dollars with domestic Chinese production, remains a sensitive item in a vehicle's bill-of-materials (BOM) cost at mass-production scale.
However, as the technology evolves, lidar prices continue to fall, and products from Chinese manufacturers such as Hesai Technology and RoboSense (Suteng Juchuang) have reached automotive grade and support large-scale delivery. Meanwhile, the maturation of automotive-grade SoCs (such as NVIDIA DRIVE Orin and Huawei MDC) and of perception-fusion algorithms has cleared the technical barriers to deploying multi-sensor fusion systems at scale.
This means that multi-sensor systems once dismissed as "uneconomical" are becoming the mainstream solution, while the "algorithmic illusion" Tesla has built on a single camera-based perception capability is being questioned by industry reality.
7. Regulatory level: safety and standards need to be implemented urgently
The incidents during Tesla Robotaxi's trial operation in Austin not only raised technical doubts but also exposed gaps in supervision. NHTSA and the CPUC are tightening their review of autonomous driving incidents, yet there is still no unified standard for accident assessment, data reporting, or hardware and software verification.
Going forward, regulators should accelerate legislation on autonomous driving level certification, mandatory perception-capability standards, and road-test qualification review. Especially in the commercialization stage of driverless operation, ensuring passenger safety, data security, and network protection urgently requires better laws and regulations.
8. Conclusion: algorithms are not a master key; safety is the cornerstone of the industry
The problems exposed by Tesla's Robotaxi project are a microcosm of the challenges facing the entire vision-only route. Although the pure vision solution has advantages in cost, integration, and scalability, it still falls short in safety, robustness, and adaptability to complex environments.
As the commercialization of autonomous driving accelerates, technological innovation should not come at the expense of safety. If Tesla truly wants to compete in the L4+ autonomous driving market, it must re-examine its perception strategy, make up for its hardware shortcomings, and introduce a multi-sensor fusion mechanism. At the same time, the upstream and downstream of the industry chain, regulators, and end users should work together toward a more rigorous and credible development path for autonomous driving.
Can Tesla "bet the future on cameras"? For now, the answer remains open. What is certain is that only a technical system grounded in safety can carry autonomous driving to real commercial deployment.