jetson · opencv · cuda · jetpack · edge ai · computer vision

OpenCV with CUDA on Jetson: build from source for JetPack 5 and 6

Andres Campos

Key Insights

  • apt install python3-opencv gives you a CPU-only build — it was compiled without CUDA flags and will not use the GPU for any operation, even on Jetson
  • CUDA architecture (CUDA_ARCH_BIN) must match your Jetson module — Orin is 8.7, Xavier is 7.2, Nano is 5.3; with the wrong value, CUDA kernels may fail to launch or run far slower than expected
  • Build time ranges from 20 minutes (Orin AGX) to 2 hours (Nano) — add swap space before building on lower-end modules or the compiler will OOM and abort
  • JetPack 5 and 6 have the same cmake flags — the difference is in the pre-installed CUDA version (11.4 vs 12.x) and the available CUDA architectures
  • Uninstall the apt package before building — conflicts will cause import errors even if the build succeeds

Why the apt package doesn’t use CUDA

This is the most common point of confusion we encounter on Jetson computer vision projects. Run this on a fresh Jetson with the apt-installed OpenCV:

import cv2
print(cv2.cuda.getCudaEnabledDeviceCount())  # Returns 0
print(cv2.getBuildInformation())              # CUDA: NO

The NVIDIA apt repository ships OpenCV built without CUDA because the correct CUDA_ARCH_BIN value is hardware-specific. Jetson Orin uses compute capability 8.7; Xavier uses 7.2; Nano uses 5.3. A single generic apt package can’t encode all of these, so NVIDIA ships the CPU fallback and expects you to build from source.

The result is that every Jetson project doing computer vision needs a from-source OpenCV build at some point. Here’s the exact sequence that works.

Step 1: Remove the apt package and add swap

sudo apt remove python3-opencv libopencv-dev libopencv-contrib-dev -y
sudo apt autoremove -y
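
To double-check that nothing OpenCV-related is left over from apt before you build:

dpkg -l | grep -i opencv
# No output (or only "rc" lines, which are leftover config entries) means
# the apt packages are gone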

Add swap space before building. Without it, the compiler will OOM on modules with less than 16GB RAM:

sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

Verify swap is active:

free -h
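
A swap file created this way does not survive a reboot. If you want it to persist (standard Linux, nothing Jetson-specific), register it in /etc/fstab:

echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab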

Step 2: Install build dependencies

sudo apt update
sudo apt install -y \
    build-essential cmake git pkg-config \
    libjpeg-dev libpng-dev libtiff-dev \
    libavcodec-dev libavformat-dev libswscale-dev \
    libgtk2.0-dev libcanberra-gtk* \
    python3-dev python3-numpy \
    libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev \
    libv4l-dev v4l-utils

Step 3: Clone OpenCV and opencv_contrib

cd ~
git clone --depth 1 --branch 4.9.0 https://github.com/opencv/opencv.git
git clone --depth 1 --branch 4.9.0 https://github.com/opencv/opencv_contrib.git

Use matching version tags for both repos. Mismatched versions will fail at compile time.
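
A quick sanity check that both checkouts landed on the same tag (git describe should print 4.9.0 for both):

git -C ~/opencv describe --tags
git -C ~/opencv_contrib describe --tags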

Step 4: Build with CUDA flags

Find your CUDA architecture. Start by checking the L4T release, which maps to your JetPack version:

cat /etc/nv_tegra_release | head -1
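
That prints the L4T release, which maps to a JetPack version rather than a specific module. To confirm which module you actually have, the device-tree model string on L4T images is more direct:

cat /proc/device-tree/model
# Prints something like "NVIDIA Jetson AGX Orin Developer Kit"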

Then use the right value:

Jetson module                   CUDA_ARCH_BIN
AGX Orin, Orin NX, Orin Nano    8.7
AGX Xavier, Xavier NX           7.2
TX2, TX2 NX                     6.2
Nano (original 2019)            5.3

cd ~/opencv
mkdir build && cd build

cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D CMAKE_INSTALL_PREFIX=/usr/local \
      -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
      -D WITH_CUDA=ON \
      -D CUDA_ARCH_BIN=8.7 \
      -D WITH_CUDNN=ON \
      -D OPENCV_DNN_CUDA=ON \
      -D WITH_GSTREAMER=ON \
      -D WITH_LIBV4L=ON \
      -D BUILD_opencv_python3=ON \
      -D BUILD_TESTS=OFF \
      -D BUILD_PERF_TESTS=OFF \
      -D BUILD_EXAMPLES=OFF \
      -D INSTALL_PYTHON_EXAMPLES=OFF \
      ..

Replace 8.7 with your module’s value from the table above.

Review the cmake output before building. Look for:

--   NVIDIA CUDA:                   YES (ver 12.2, CUFFT CUBLAS FAST_MATH)
--     NVIDIA GPU arch:             87
--   cuDNN:                         YES
--   GStreamer:                     YES

If CUDA shows NO, the CUDA toolkit path isn’t set correctly. On Jetson, CUDA is at /usr/local/cuda — verify with ls /usr/local/cuda/bin/nvcc.
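
If nvcc is present but cmake still reports NO, pointing the build at the toolkit explicitly usually resolves it. A sketch of the common workaround, run from the same build directory (cmake keeps your earlier -D flags in CMakeCache.txt):

# Put the CUDA toolchain on PATH for this shell
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH

# Re-run cmake with the toolkit root pinned
cmake -D CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda ..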

Step 5: Compile and install

make -j$(nproc)
sudo make install
sudo ldconfig

On Jetson Nano this will take 90–120 minutes; on Orin AGX, closer to 25. If make is killed even with swap active, lower the parallelism (for example, make -j2) and rerun; make resumes where it stopped.

Step 6: Verify

import cv2

# Check CUDA is available
print("CUDA devices:", cv2.cuda.getCudaEnabledDeviceCount())

# Check build info for CUDA section
info = cv2.getBuildInformation()
cuda_start = info.find("NVIDIA CUDA")
print(info[cuda_start:cuda_start+200])

# Quick functional test
img = cv2.imread("test.jpg")  # any local image works
assert img is not None, "test.jpg not found"
gpu_img = cv2.cuda_GpuMat()
gpu_img.upload(img)
gpu_gray = cv2.cuda.cvtColor(gpu_img, cv2.COLOR_BGR2GRAY)
result = gpu_gray.download()
print("GPU round-trip OK, shape:", result.shape)

getCudaEnabledDeviceCount() returning 1 confirms CUDA is active. If it returns 0 after a successful build, you have a Python path conflict — the old apt package is still being imported. Verify with python3 -c "import cv2; print(cv2.__file__)" to confirm the right .so is loading.
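
A related conflict worth ruling out: a pip-installed opencv-python wheel shadowing your new build. Check for one and remove it if present:

pip3 list 2>/dev/null | grep -i opencv
# If opencv-python or opencv-python-headless shows up, uninstall it:
pip3 uninstall -y opencv-python opencv-python-headless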

JetPack 5 vs JetPack 6 differences

The cmake flags are identical between JetPack 5 and 6. What changes:

                     JetPack 5.x     JetPack 6.x
CUDA version         11.4            12.x
cuDNN version        8.x             9.x
Python default       3.8             3.10
Supported modules    Orin, Xavier    Orin only

For JetPack 5 on Xavier, the CUDA_ARCH_BIN is 7.2. For JetPack 6 on Orin, it’s 8.7. Everything else in the build sequence is the same.

If you’re also working out which JetPack version to target for your project, the JetPack versions and L4T compatibility table has a full breakdown. For deploying models once OpenCV is working, the TensorRT and EdgeAI deployment service covers production-grade inference pipelines.

Frequently Asked Questions

Why does apt install python3-opencv not include CUDA on Jetson?

The apt package is compiled without CUDA support to keep it generic. It runs entirely on the CPU. NVIDIA doesn’t ship a CUDA-enabled OpenCV via apt because the correct CUDA_ARCH_BIN depends on the specific Jetson module. You have to build from source.

How do I verify that OpenCV is using CUDA on Jetson?

Run python3 -c "import cv2; print(cv2.cuda.getCudaEnabledDeviceCount())". If this returns 1 or more, CUDA is available. Also check cv2.getBuildInformation() — look for NVIDIA CUDA: YES in the output.

How long does it take to build OpenCV from source on Jetson?

Jetson Orin AGX (12 cores): 20–30 minutes. Xavier NX (6 cores): 45–60 minutes. Jetson Nano (4 cores): 90–120 minutes. Add swap space before building on lower-end modules.

What CUDA_ARCH_BIN value should I use for my Jetson?

Jetson Orin (AGX, NX, Nano): 8.7. Jetson Xavier (AGX, NX): 7.2. Jetson TX2: 6.2. Jetson Nano (original 2019): 5.3. Using the wrong value means CUDA code may not run or may run slowly.

Do I need to uninstall the apt OpenCV before building from source?

Yes. Remove it first with sudo apt remove python3-opencv libopencv-dev. Otherwise the apt version gets imported instead of your build, even if the build succeeded. Verify which file is being loaded with python3 -c "import cv2; print(cv2.__file__)".


OpenCV with CUDA is usually the first step in a production Jetson AI pipeline. If you’re building something on top of it — model inference, camera pipelines, real-time detection — we’ve done this on Jetson at production scale.