Porting Tier IV Edge.Auto to AVerMedia D135

This blog shares our first hands-on experience with Tier IV’s Edge.Auto perception framework. After validating the perception modules in the CARLA simulator, we took the next step by deploying the same pipeline on the AVerMedia D135 embedded platform. This journey represents a practical attempt to bridge the gap between simulation and real-world deployment, helping us better understand how to run ROS 2-based perception logic on edge hardware.

Introduction

We validated Edge.Auto’s perception pipeline in CARLA simulation and deployed it on the D135 platform.

This effort demonstrates that simulation-validated perception logic can be moved to edge hardware with minimal rework, an important step toward accelerating autonomous system deployment.

System Architecture

The following diagram shows how different modules interact between the CARLA simulator and the real-time stack on the AVerMedia D135 platform:

Edge.Auto Architecture Overview

Explanation:

  • Simulator (CARLA) provides virtual camera and LiDAR sensor input

  • D135 receives sensor data and runs perception modules such as Sensing and Perception

  • Dashed boxes represent optional modules (e.g., Localization, Map, Planning, Control) that are planned for future integration

  • Visualization tools (e.g., RViz2) are used to verify runtime outputs such as images, point clouds, and detections

Dashed blocks indicate unimplemented modules reserved for upcoming phases.
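
To make the data path concrete, the short rclpy sketch below subscribes to one camera topic and one LiDAR topic on the D135 and logs how many messages arrive each second. The topic names follow typical carla_ros_bridge conventions and are assumptions for illustration only; the actual names depend on how the bridge and its sensors are configured.

```python
# carla_topic_check.py -- minimal sketch; topic names are assumed
# carla_ros_bridge defaults and may differ in your configuration.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image, PointCloud2


class CarlaTopicCheck(Node):
    """Counts camera and LiDAR messages arriving from the simulator."""

    def __init__(self):
        super().__init__('carla_topic_check')
        self.image_count = 0
        self.cloud_count = 0
        # Hypothetical topic names, based on common carla_ros_bridge setups.
        self.create_subscription(Image, '/carla/ego_vehicle/rgb_front/image',
                                 self.on_image, 10)
        self.create_subscription(PointCloud2, '/carla/ego_vehicle/lidar',
                                 self.on_cloud, 10)
        self.create_timer(1.0, self.report)

    def on_image(self, _msg):
        self.image_count += 1

    def on_cloud(self, _msg):
        self.cloud_count += 1

    def report(self):
        # Print per-second message counts, then reset the counters.
        self.get_logger().info(
            f'last second: {self.image_count} images, {self.cloud_count} point clouds')
        self.image_count = 0
        self.cloud_count = 0


def main():
    rclpy.init()
    node = CarlaTopicCheck()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()


if __name__ == '__main__':
    main()
```

Running a quick check like this before launching the full pipeline confirms that the simulator-to-device link is alive.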


Project Goal

  • Framework: ROS 2-based Tier IV Edge.Auto perception modules

  • Target Platform: AVerMedia D135 (NVIDIA Jetson Orin NX-based)

  • Sensor Inputs (simulation only): Camera and LiDAR data generated by CARLA simulator

  • Objective: Demonstrate the portability and real-time execution of a modular ROS 2 perception pipeline on the AVerMedia D135, using simulated sensor data as input.

  • Workflow: Develop and test in CARLA → Reuse the same launch and topic structure on D135 → Visualize outputs in RViz2


Validation on D135

After validating the perception modules in simulation, we ported the same ROS 2 pipeline to the AVerMedia D135 platform for testing under real-time conditions.

Although physical sensors were not yet connected, the system successfully received simulated camera and LiDAR data from CARLA and processed it with Edge.Auto's original perception stack.

We reused the launch file from Tier IV’s reference setup, adapting it to work with CARLA’s simulated sensor outputs. No changes were made to the core perception node configurations.
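
As a rough illustration of that adaptation, a wrapper launch file along the lines of the sketch below can include the reference perception launch and point its inputs at CARLA's topics. The package name, launch file path, launch arguments, and topic names here are placeholders rather than the actual Tier IV names.

```python
# edge_auto_carla.launch.py -- hedged sketch; package, file, and argument
# names below are placeholders, not the actual Edge.Auto names.
import os

from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource


def generate_launch_description():
    # Hypothetical reference launch from the Edge.Auto setup.
    perception_launch = os.path.join(
        get_package_share_directory('edge_auto_launch'),  # placeholder package
        'launch', 'perception.launch.py')                 # placeholder file

    return LaunchDescription([
        IncludeLaunchDescription(
            PythonLaunchDescriptionSource(perception_launch),
            # Point the perception inputs at CARLA's simulated sensor topics.
            # Argument and topic names are assumptions for illustration only.
            launch_arguments={
                'input/image': '/carla/ego_vehicle/rgb_front/image',
                'input/pointcloud': '/carla/ego_vehicle/lidar',
            }.items(),
        ),
    ])
```

Started with the usual ros2 launch command on the D135, a wrapper like this leaves the core perception node configurations untouched and only redirects topic names at the boundary.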


Observed Results:

  • Perception modules ran in real time on the D135 with a stable frame rate

  • RViz2 successfully visualized the incoming image and point cloud topics

  • Frame transformations (base_link, camera_frame, etc.) were correctly resolved

  • Minimal effort was needed to adapt simulation data to an embedded runtime

Note: All sensor inputs were simulated; real hardware integration is planned for future phases.
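
For the frame-resolution point above, a small tf2 probe like the one below can be run alongside the pipeline to confirm that the transform tree resolves. The frame names mirror the ones listed above (base_link, camera_frame) and are otherwise illustrative.

```python
# tf_check.py -- minimal sketch for verifying that the TF tree resolves;
# frame names are illustrative and should match your actual setup.
import rclpy
from rclpy.node import Node
from rclpy.time import Time
from tf2_ros import TransformException
from tf2_ros.buffer import Buffer
from tf2_ros.transform_listener import TransformListener


class TfCheck(Node):
    def __init__(self):
        super().__init__('tf_check')
        self.buffer = Buffer()
        self.listener = TransformListener(self.buffer, self)
        self.create_timer(1.0, self.check)

    def check(self):
        try:
            # Latest available transform from base_link to camera_frame.
            t = self.buffer.lookup_transform('base_link', 'camera_frame', Time())
            tr = t.transform.translation
            self.get_logger().info(
                f'base_link -> camera_frame: ({tr.x:.2f}, {tr.y:.2f}, {tr.z:.2f})')
        except TransformException as exc:
            self.get_logger().warn(f'TF not yet resolved: {exc}')


def main():
    rclpy.init()
    rclpy.spin(TfCheck())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```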


Demo Video:

The following video shows the perception pipeline running on the D135, visualizing camera and LiDAR outputs in RViz2.


Lessons Learned

This exercise taught us several key lessons about simulation-driven development and hardware migration:

| Aspect | Takeaway |
| --- | --- |
| Launch reuse | Original Tier IV launch files worked with only minimal topic remapping. |
| Simulator fidelity | CARLA provided sufficiently realistic data for pipeline validation. |
| ROS 2 flexibility | Standardized topic and frame conventions made cross-platform reuse easy. |

The ability to run the same perception logic across simulator and embedded hardware increased our confidence in the modularity and portability of the Edge.Auto stack.


Next Steps

We plan to integrate real hardware sensors into the pipeline, including stereo cameras and LiDAR, and evaluate the system under real-world conditions. Future work will also explore mapping, localization, and planning modules within the Edge.Auto or Autoware stack to support full autonomy.
