

Benchmark SUPER mode of NVIDIA Jetson Orin NX

In 2025, AI is undoubtedly the hottest topic, and many application scenarios call for localized deployment, such as smart surveillance systems, intelligent retail stores, and small-scale robots running LLMs/VLMs.

The NVIDIA Jetson Orin NX is a compact, high-performance AI computing module designed for edge applications such as robotics, smart cameras, and industrial automation. It delivers up to 157 TOPS of AI performance using the NVIDIA Ampere architecture, making it ideal for running complex AI models locally with low latency and high efficiency.

To unlock its full potential, the AVerMedia D133S Carrier Board provides a robust and versatile platform tailored for the Orin NX. It supports SUPER mode for enhanced performance and offers a rich set of I/O.

Here, we are going to introduce and benchmark two powerful standard carrier boards from AVerMedia, the D133 and D133S, which offer rich I/O options such as camera inputs, multiple Ethernet ports, and a GPU, making them especially suitable for AI edge computing applications.

Accelerate VLM Development with AI Fusion Kit

AI Fusion Kit

The first barrier in any multimodal LLM project is often not the model itself but the hardware. Finding a powerful computing platform, a high-quality camera, and a sensitive microphone can take a lot of time and effort. Worse, these components may not work well together, leading to a tangled web of driver issues, compatibility conflicts, and frustrating debugging sessions before your real work even begins.

The AI Fusion Kit is designed to eliminate these challenges entirely. It is a complete, out-of-the-box solution where every component works seamlessly together.

AI Fusion Kit Quick Start Guide

AVerMedia AI Fusion Kit is an all-in-one solution for LLM/VLM developers. It consists of a powerful AI box PC, a 4K camera, and an AI speakerphone, allowing you to easily build your own multimodal AI applications. This guide will walk you through the steps to get started with the AI Fusion Kit.

Porting Tier IV Edge.Auto to AVerMedia D135

This blog shares our first hands-on experience with Tier IV’s Edge.Auto perception framework. After validating the perception modules in the CARLA simulator, we took the next step by deploying the same pipeline on the AVerMedia D135 embedded platform. This journey represents a practical attempt to bridge the gap between simulation and real-world deployment, helping us better understand how to run ROS 2-based perception logic on edge hardware.

Time to First Token

Time to First Token (TTFT) refers to the latency between the moment a user hits the Enter key and the moment the first character appears on the screen. Excessive TTFT can greatly diminish the overall user experience.

TTFT is a crucial response-time indicator for an online interactive application powered by a large language model (LLM), as it reflects how quickly users see the first character from the model on a web page.

Here, we will explore two simple ways to measure the first-token latency of a language model.
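
To make the metric concrete, below is a minimal sketch of one way to approximate TTFT by timing the first streamed chunk from an OpenAI-compatible chat endpoint. It is not necessarily one of the two methods covered in the article, and the endpoint URL, model name, and prompt are placeholders to adjust for your own deployment.

    # Minimal TTFT sketch: time how long it takes for the first visible
    # token to arrive from a streaming, OpenAI-compatible chat endpoint.
    # ENDPOINT and the model name are assumptions, not part of the article.
    import json
    import time

    import requests

    ENDPOINT = "http://localhost:8000/v1/chat/completions"  # hypothetical local server
    PAYLOAD = {
        "model": "llama-3.1-8b-instruct",  # placeholder model name
        "messages": [{"role": "user", "content": "Explain TTFT in one sentence."}],
        "stream": True,  # stream tokens so the first chunk can be observed
    }

    start = time.perf_counter()
    with requests.post(ENDPOINT, json=PAYLOAD, stream=True, timeout=120) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            # Server-sent events arrive as lines of the form "data: {...}"
            if not line or not line.startswith(b"data: "):
                continue
            chunk = line[len(b"data: "):]
            if chunk == b"[DONE]":
                break
            delta = json.loads(chunk)["choices"][0]["delta"]
            if delta.get("content"):  # first chunk that carries visible text
                ttft = time.perf_counter() - start
                print(f"Time to first token: {ttft:.3f} s")
                break

Because the request is streamed, the stopwatch stops at the first content-bearing chunk rather than at the end of the full response, which is exactly the latency a user perceives before text starts appearing.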

Installing Isaac ROS on Jetson

This article explains how to set up and install NVIDIA Isaac ROS on Jetson platforms, including Docker configuration, SSD integration, developer environment setup, and compatibility between different JetPack and Isaac ROS versions.