# Integrating a new sensor

This guide provides instructions for adding support for a new sensor to Nebula. We provide a
template implementation (`nebula_sample`) and reusable components to simplify the process.
## Architecture overview

### Overall package structure

Nebula is organized into core packages and vendor-specific packages:
```mermaid
graph TB
  %% --- GRAPH STRUCTURE ---
  subgraph Nebula["<b>Nebula framework</b>"]
    direction TB
    subgraph Common["<b>Common packages</b>"]
      direction LR
      C1["<b>nebula_core_common</b><br/>Base types, status codes"]
      C2["<b>nebula_core_decoders</b><br/>Decoder interfaces"]
      C3["<b>nebula_core_hw_interfaces</b><br/>UDP/TCP sockets"]
      C4["<b>nebula_core_ros</b><br/>ROS 2 utils"]
    end
    subgraph Vendors["<b>Vendor packages</b>"]
      direction LR
      Hesai["<b>nebula_hesai</b><br/>hesai_common<br/>hesai_decoders<br/>hesai_hw_interfaces<br/>ROS wrapper"]
      Velodyne["<b>nebula_velodyne</b><br/>velodyne_common<br/>velodyne_decoders<br/>velodyne_hw_interfaces<br/>ROS wrapper"]
      Sample["<b>nebula_sample</b><br/>sample_common<br/>sample_decoders<br/>sample_hw_interfaces<br/>ROS wrapper"]
    end
  end
```
- Core packages provide reusable functionality (UDP sockets, point types, etc.)
- Vendor packages implement vendor-specific logic (packet parsing, calibration)
- Vendor packages are typically split into `common`, `decoders`, `hw_interfaces` and a ROS wrapper:
    - `common`: Vendor-specific types, internal interfaces and utilities
    - `decoders`: Packet parsing and conversion to point clouds
    - `hw_interfaces`: Network communication
    - ROS wrapper: ROS 2 node, parameters, publishers, orchestration
- Vendor packages depend on core packages but not on each other

Except for the ROS wrappers, no package should depend on ROS 2. This allows users to run parts of Nebula as a library, e.g. in ML pipelines, without ROS 2.
> **Warning**
>
> Existing vendor implementations do not strictly follow this architecture, as the project evolved over time. New implementations should follow the architecture described here.
### Data flow

`nebula_sample` supports two runtime modes, controlled by `launch_hw`:
```mermaid
flowchart LR
  subgraph Online["launch_hw:=true (online)"]
    direction LR
    S[Sensor]
    HW[Hardware Interface]
    DC[Decoder]
    RW[ROS Wrapper]
    PKT[(packets / NebulaPackets)]
    S -->|UDP| HW
    HW -->|data packet| DC
    HW -->|data packet| RW
    DC -->|pointcloud| RW
    RW -->|decoded scan's raw packets| PKT
  end
  subgraph Offline["launch_hw:=false (offline replay)"]
    direction LR
    PKT2[(packets / NebulaPackets)]
    DC2[Decoder]
    RW2[ROS Wrapper]
    PKT2 -->|replayed packets| DC2
    DC2 -->|pointcloud| RW2
  end
  PTS[(points / PointCloud2)]
  RW -->|decoded scan| PTS
  RW2 -->|decoded scan| PTS
```
The ROS wrapper owns the runtime mode resources and wiring:
- Online mode: hardware interface + packet publisher
- Offline mode: packet subscriber
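This ownership split can be sketched as follows. The types below are illustrative stand-ins, not Nebula's actual classes:

```cpp
#include <cassert>
#include <memory>

// Hypothetical stand-ins for Nebula's decoder, HW interface, and packet
// subscriber -- names are illustrative, not the real Nebula API.
struct Decoder {};
struct HwInterface {};
struct PacketSubscriber {};

// The wrapper always owns a decoder, and owns either the HW interface
// (online) or a packet subscriber (offline), never both.
class SensorRosWrapper
{
public:
  explicit SensorRosWrapper(bool launch_hw) : decoder_(std::make_unique<Decoder>())
  {
    if (launch_hw) {
      hw_interface_ = std::make_unique<HwInterface>();  // online: receive from sensor
    } else {
      packet_sub_ = std::make_unique<PacketSubscriber>();  // offline: replay from rosbag
    }
  }

  bool is_online() const { return hw_interface_ != nullptr; }

private:
  std::unique_ptr<Decoder> decoder_;
  std::unique_ptr<HwInterface> hw_interface_;
  std::unique_ptr<PacketSubscriber> packet_sub_;
};
```

Keeping the mode decision in one place (the wrapper's constructor) means decoders and HW interfaces stay unaware of the runtime mode.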
## Component library

Nebula provides reusable components to simplify sensor integration. Use these components to reduce boilerplate code and ensure consistent behavior across sensors and vendors.
### UDP socket handling

```cpp
#include "nebula_core_hw_interfaces/connections/udp.hpp"
```
What it provides:
- UDP socket with easy builder-style configuration
- Threaded packet reception with user-defined callback
- Metadata (socket timestamp, buffer overflow count) for each packet
Usage:

```cpp
using nebula::drivers::connections::UdpSocket;

auto socket = UdpSocket::Builder("192.168.1.10", 9000)
                .set_socket_buffer_size(10'000'000)
                .limit_to_sender("192.168.10.20", 7000)
                .bind();

socket.subscribe([](std::vector<uint8_t> & data, const UdpSocket::RxMetadata & metadata) {
  (void)data;
  (void)metadata;
  // Process received data and metadata here. This callback will be executed in the socket's
  // receive thread.
});

socket.unsubscribe();
```
### Expected

```cpp
#include "nebula_core_common/util/expected.hpp"
```

What it provides:

For operations that can fail but should not crash the program, return values via
`nebula::util::expected<T, E>`. Avoid sentinel return values (e.g., `nullptr`, `-1`) and
exceptions. This keeps APIs explicit about what can fail.
Usage:

```cpp
enum class ValidationError : uint8_t {
  MissingField,
  OutOfRange,
};

nebula::util::expected<std::monostate, ValidationError> validate(const Config & config)
{
  if (config.required_field.empty()) {
    return ValidationError::MissingField;
  }
  if (config.percentage_field < 0 || config.percentage_field > 100) {
    return ValidationError::OutOfRange;
  }
  return std::monostate{};
}
```
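On the caller side, the result should be checked before use. The sketch below is self-contained: it uses a minimal stand-in `expected` (assuming the real type follows the `std::expected`-style `has_value()` / `value()` / `error()` interface) and an invented `parse_port` helper:

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <variant>

// Minimal stand-in for nebula::util::expected, only to illustrate the calling
// pattern; consult the actual header for the real interface.
template <typename T, typename E>
class expected
{
public:
  expected(T value) : data_(std::move(value)) {}
  expected(E error) : data_(std::move(error)) {}
  bool has_value() const { return std::holds_alternative<T>(data_); }
  const T & value() const { return std::get<T>(data_); }
  const E & error() const { return std::get<E>(data_); }

private:
  std::variant<T, E> data_;
};

enum class ParseError : uint8_t { Empty, NotANumber };

// Invented fallible operation: returns the parsed port or a descriptive error,
// instead of a sentinel value like -1.
expected<int, ParseError> parse_port(const std::string & text)
{
  if (text.empty()) return ParseError::Empty;
  for (char c : text) {
    if (c < '0' || c > '9') return ParseError::NotANumber;
  }
  return std::stoi(text);
}
```

The caller branches on `has_value()` and can report the exact error, which is the main advantage over sentinel values.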
> **Tip**
>
> If the happy path has no meaningful return value, use `std::monostate` (a type with no data) as `T`:
>
> ```cpp
> nebula::util::expected<std::monostate, ErrorCode> do_something();
> ```

See [cppreference](https://en.cppreference.com/w/cpp/utility/expected) for more details on the
C++23 `std::expected` type, which `nebula::util::expected` is based on.
### Point cloud types

```cpp
#include "nebula_core_common/point_types.hpp"
```

What it provides:

- `NebulaPoint` - Standard point type with x, y, z, intensity, timestamp, return_type, channel, azimuth, elevation, distance
- `NebulaPointCloud` - Point cloud of `NebulaPoint`s
- Conversion utilities to ROS/Autoware point types
Nebula point types are designed for compatibility with and efficiency of downstream tasks. Since Nebula is the official sensor driver framework for Autoware, using Nebula point types ensures seamless integration with Autoware components.
Usage:

```cpp
auto cloud = std::make_shared<drivers::NebulaPointCloud>();
cloud->reserve(2);

nebula::drivers::NebulaPoint point{};
point.x = 1.0F;
point.y = 2.0F;
point.z = 3.0F;
point.intensity = 10U;
point.return_type = static_cast<uint8_t>(nebula::drivers::ReturnType::STRONGEST);
point.channel = 5U;
point.azimuth = 0.0F;
point.elevation = 0.1F;
point.distance = 3.74F;
point.time_stamp = 42U;
cloud->push_back(point);

const auto cloud_xyzir = nebula::drivers::convert_point_xyzircaedt_to_point_xyzir(*cloud);
auto cloud_ros = nebula::ros::to_ros_msg(cloud_xyzir);
// my_publisher->publish(cloud_ros);
```
### Angle utilities

```cpp
#include "nebula_core_common/nebula_common.hpp"
#include "nebula_core_decoders/angles.hpp"
```

What it provides:

- `deg2rad()`, `rad2deg()` - Angle conversions
- `angle_is_between()` - Angle checks with wrap-around support
- Angle normalization functions
Usage:

```cpp
const float azimuth_deg = 45.0F;
const auto azimuth_rad = deg2rad(azimuth_deg);

const float fov_min_rad = 0.0F;
const auto fov_max_rad = deg2rad(90.0F);
assert(angle_is_between(fov_min_rad, fov_max_rad, azimuth_rad));

// Normalize any angle to [0, 360) or [0, 2pi) (depending on the chosen max_angle):
const float azimuth_deg_norm = normalize_angle(azimuth_deg, 360.0F);
assert(azimuth_deg_norm >= 0.0F && azimuth_deg_norm < 360.0F);

const float azimuth_rad_norm = normalize_angle(azimuth_rad, 2 * M_PI);
assert(azimuth_rad_norm >= 0.0F && azimuth_rad_norm < 2 * M_PI);
```
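To make the wrap-around semantics concrete, here are minimal stand-in implementations of `normalize_angle` and `angle_is_between`. These are sketches consistent with the behavior described above, not Nebula's actual code:

```cpp
#include <cassert>
#include <cmath>

// Stand-in: map any angle into [0, max_angle).
float normalize_angle(float angle, float max_angle)
{
  float normalized = std::fmod(angle, max_angle);
  if (normalized < 0.0F) {
    normalized += max_angle;  // fmod keeps the sign of the input; shift negatives up
  }
  return normalized;
}

// Stand-in: wrap-around-aware range check. With min > max, the range is
// interpreted as crossing the 0/360 seam (e.g. an FoV from 350 deg to 10 deg).
bool angle_is_between(float min, float max, float angle)
{
  if (min <= max) return min <= angle && angle <= max;
  return angle >= min || angle <= max;
}
```

The `min > max` branch is what distinguishes these helpers from a naive range check and is why a sensor FoV straddling 0 deg needs no special-casing in decoder code.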
### Logger integration

```cpp
#include "nebula_core_common/loggers/console_logger.hpp"
#include "nebula_core_common/loggers/logger.hpp"
```

What it provides:

- Unified logging interface via `Logger`
- ROS 2 logger wrapper (`RclcppLogger`)
- Macros: `NEBULA_LOG_STREAM(logger->info, "message")`
This is a dependency-injection pattern: ROS-independent modules log through the generic
`loggers::Logger` interface, while the ROS wrapper can provide an `RclcppLogger` implementation so
that those modules still log into ROS 2.
Usage:

```cpp
class SomeModule
{
private:
  std::shared_ptr<Logger> logger_;

public:
  explicit SomeModule(const std::shared_ptr<Logger> & logger) : logger_(logger) {}

  void do_something() { logger_->info("Doing something important"); }
};

int main()
{
  auto logger = std::make_shared<ConsoleLogger>("MyApp");
  SomeModule module(logger->child("SomeModule"));
  module.do_something();
  return 0;
}

// Output:
// [MyApp.SomeModule][INFO] Doing something important
```
### Diagnostic integration

```cpp
#include "nebula_core_ros/diagnostics/liveness_monitor.hpp"
#include "nebula_core_ros/diagnostics/rate_bound_status.hpp"
```

What it provides:

- Diagnostic task helpers used across Nebula (e.g., `RateBoundStatus`, `LivenessMonitor`)
- Consistent patterns for publish-rate and liveness monitoring
Usage:

```cpp
class MyNode : public rclcpp::Node
{
public:
  MyNode() : Node("diagnostic_integration_usage_example")
  {
    updater_.add(publish_rate_diag_);
    updater_.add(liveness_diag_);
  }

  void on_packet_received(const std::array<uint8_t, 1500> & packet)
  {
    liveness_diag_.tick();
    if (packet[0] == 1 /* some condition indicating we should publish */) {
      // publish something here
      publish_rate_diag_.tick();
    }
  }

private:
  diagnostic_updater::Updater updater_{this, 0.1};
  RateBoundStatus publish_rate_diag_{
    this, RateBoundStatusParam{9, 11}, RateBoundStatusParam{8, 12}};
  LivenessMonitor liveness_diag_{"packet_receive", this, 10ms};
};
```
## Integration workflow

### Step 1: Clone and rename template

Copy the `nebula_sample` directory:

```shell
cd src
cp -r nebula_sample nebula_myvendor
```
### Step 2: Rename components

Rename all occurrences of "sample"/"Sample" to your vendor name:

- Directories: `nebula_sample_*` → `nebula_myvendor_*`
- Files: `sample_*.{cpp,hpp}` → `myvendor_*.{cpp,hpp}`
- Classes: Rename wrapper / decoder / HW interface classes (e.g., `SampleRosWrapper`, `SampleDecoder`, `SampleHwInterface`)
- Namespaces: Update in all files
- CMake/package: Update `CMakeLists.txt` and `package.xml`
Tip: Use find-and-replace tools:

```shell
find nebula_myvendor -type f -exec sed -i 's/sample/myvendor/g' {} +
find nebula_myvendor -type f -exec sed -i 's/Sample/MyVendor/g' {} +
```
### Step 3: Implement components

See Implementation details below.

### Step 4: Verify

See Verification below.
## Implementation details

The `nebula_sample` package provides a template implementation. Its source files contain
comments and concise `Implement:` markers at key extension points to guide implementation.
The sample decoder is intentionally minimal: it tracks packet counts, marks a scan complete every 10 packets, and emits an empty pointcloud. It does not model real sensor geometry or packet layout.

Some parts might not be relevant for your sensor, e.g. calibration handling. Implement only what is necessary for your sensor.

For IP-based sensors, the sample keeps the common `host_ip`, `sensor_ip`, and `data_port`
parameters and uses `sensor_ip` as a UDP sender filter in the sample hardware interface.
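The described scan-completion logic boils down to a few lines. The sketch below mirrors it with invented names (`MinimalDecoder`, a `PointCloud` stand-in); it is not the actual template code:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <optional>
#include <vector>

struct PointCloud
{
  std::size_t size = 0;  // stand-in for a real point container
};

// Count packets and report a completed (empty) scan on every 10th packet,
// matching the sample decoder's described behavior.
class MinimalDecoder
{
public:
  // Returns a finished scan when this packet completes one, otherwise nothing.
  std::optional<PointCloud> unpack(const std::vector<uint8_t> & /* packet */)
  {
    ++packet_count_;
    if (packet_count_ % 10 == 0) {
      return PointCloud{};  // a real decoder would emit the accumulated points
    }
    return std::nullopt;
  }

private:
  std::size_t packet_count_ = 0;
};
```

A real decoder replaces the modulo check with sensor-specific scan boundary detection (e.g., azimuth rollover) and fills the cloud from parsed packet data.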
### About SDK integration

If you are a sensor vendor with your own SDK, you might be able to replace parts of the decoder
and HW interface with calls to the SDK. Integrate your SDK either through Git submodules, or
by adding it to `build_depends.repos`.
> **Warning**
>
> Nebula is licensed under the Apache 2.0 license and has a strict no-binaries policy. Ensure that your SDK source code is public, and ensure that your SDK license allows shipping as part of an Apache 2.0 project.
>
> Please ensure that the SDK is fit for automotive use cases (real-time, safety, reliability).

Nebula interfaces like

- the pointcloud output topic (the sample template uses `/points` with `PointCloud2`)
- the packet publish/replay topic (the sample template uses `/packets` with `NebulaPackets`)
- `/diagnostics`
- launch patterns

must be implemented correctly.
## Required behaviors

Your sensor integration must implement these behaviors correctly.
### Startup sequence

Order of operations:

1. Parameter loading: Declare and read ROS parameters
2. Configuration validation: Validate IP addresses, ports, ranges
3. Publisher creation: Create ROS publishers
4. Decoder initialization: Create decoder and register pointcloud callback
5. Runtime mode setup:
    - `launch_hw:=true`: create HW interface + packet publisher
    - `launch_hw:=false`: create packet subscriber for replay
6. Packet forwarding:
    - online: register HW callback, decode packets, publish accumulated `NebulaPackets` per scan
    - offline: decode packets received from the `NebulaPackets` subscription
7. Stream start (online only): call `sensor_interface_start()`
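To make the required ordering explicit, the sketch below condenses startup into a trace of step names. All names are illustrative, not the template's actual functions:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Startup reduced to an ordered trace of steps; the key invariants are that
// validation happens before any resources are created, and that the sensor
// stream (online only) is started last.
std::vector<std::string> start_up(bool launch_hw)
{
  std::vector<std::string> steps;
  steps.push_back("load_parameters");
  steps.push_back("validate_configuration");
  steps.push_back("create_publishers");
  steps.push_back("init_decoder");  // register pointcloud callback here
  if (launch_hw) {
    steps.push_back("create_hw_interface");  // + packet publisher
    steps.push_back("register_hw_callback");  // packet forwarding
    steps.push_back("sensor_interface_start");  // stream start comes last
  } else {
    steps.push_back("create_packet_subscriber");  // offline replay
  }
  return steps;
}
```

Starting the stream last guarantees that no packet arrives before the decoder and its callbacks are wired up.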
### Reconfiguration (optional)

When parameters change at runtime:

1. Validate new parameters: Check if the new values are valid
2. Update configuration: Apply the new parameter values
3. Reinitialize driver: Create a new driver with the updated config
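A common way to implement this safely is validate-then-swap: reject invalid parameters before touching the running driver. The `Config`, `Driver`, and validation rule below are invented for illustration:

```cpp
#include <cassert>
#include <memory>

struct Config
{
  int data_port = 0;
};

struct Driver
{
  explicit Driver(const Config & config) : config(config) {}
  Config config;
};

class Wrapper
{
public:
  // Returns false (and keeps the old driver running) when the new config is invalid.
  bool reconfigure(const Config & new_config)
  {
    if (new_config.data_port < 1 || new_config.data_port > 65535) {
      return false;  // step 1 failed: reject, keep the current configuration
    }
    driver_ = std::make_unique<Driver>(new_config);  // steps 2+3: apply and reinitialize
    return true;
  }

  int current_port() const { return driver_ ? driver_->config.data_port : -1; }

private:
  std::unique_ptr<Driver> driver_ = std::make_unique<Driver>(Config{9000});
};
```

Because the new driver is only constructed after validation succeeds, a rejected update can never leave the node in a half-configured state.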
### Connection loss handling

Detect and handle sensor disconnection:

- Liveness timeout: Configure `diagnostics.packet_liveness.timeout_ms`
- Diagnostic update: Tick the `LivenessMonitor` on each packet path (online and replay)
- Optional: Add explicit connection-loss logs/recovery if required by your sensor
### Shutdown sequence

Prefer RAII-based shutdown: sockets, threads, and buffers should be owned by objects whose destructors stop/join/close automatically, so the wrapper does not require sensor-specific resource cleanup.

Order of operations:

1. Stop the stream explicitly (e.g., `sensor_interface_stop()`)
2. Use RAII to clean up resources (sockets, threads, buffers)
3. Make sure that dependent components are still valid during shutdown (e.g., the decoder should not be destroyed before the HW interface stops)
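A minimal RAII sketch of this idea: a receiver object owns its receive thread and stops and joins it in its destructor, so its owner needs no explicit cleanup call. The names are illustrative, not Nebula's actual classes:

```cpp
#include <atomic>
#include <cassert>
#include <chrono>
#include <thread>

class PacketReceiver
{
public:
  PacketReceiver() : thread_([this] { run(); }) {}

  ~PacketReceiver()
  {
    running_ = false;  // signal the receive loop to stop ...
    thread_.join();    // ... and wait for it before releasing any resources
  }

  int packets_seen() const { return packets_seen_; }

private:
  void run()
  {
    while (running_) {
      ++packets_seen_;  // a real receiver would block on the socket here
      std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
  }

  std::atomic<bool> running_{true};
  std::atomic<int> packets_seen_{0};
  std::thread thread_;
};
```

Note the member order: the flags are declared (and thus initialized) before `thread_`, so the loop never observes uninitialized state, and destruction runs in reverse order only after the thread has been joined.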
### Diagnostic reporting

Required diagnostic information:

- Publish rate: Use `custom_diagnostic_tasks::RateBoundStatus`
- Liveness: Use `nebula::ros::LivenessMonitor`
- Debug timings: Publish receive/decode/publish durations for profiling
## Verification

This section contains basic commands you can use to verify your implementation.

### Build and test

Both the build and the tests have to succeed without warnings or errors.

```shell
colcon build --packages-up-to nebula_myvendor

# This tests all myvendor packages
colcon test --packages-select-regex nebula_myvendor
colcon test-result --verbose
```
### Live testing

For testing with a real sensor, launch Nebula with your sensor model:

```shell
# From your package
ros2 launch nebula_myvendor nebula_myvendor.launch.xml sensor_model:=mysensor

# From the `nebula` package
ros2 launch nebula nebula_launch.py sensor_model:=mysensor
```

To record a rosbag of Nebula's packet output for later replay:

```shell
ros2 bag record /packets -o mysensor_packets
```
### Replay testing

You can test Nebula offline with the rosbag recorded above:

```shell
# Set launch_hw:=false to use rosbag replay
ros2 launch nebula_myvendor nebula_myvendor.launch.xml launch_hw:=false

# In another terminal, play the recorded ROS 2 bag
ros2 bag play mysensor_packets
```
## Integration checklist

### Basic setup

- [ ] Copied and renamed `nebula_sample` to `nebula_myvendor`
- [ ] Renamed all occurrences of "sample"/"Sample" to your vendor name
- [ ] Updated `package.xml`s with author and package info
- [ ] Updated copyright headers
### Implementation tasks

- [ ] Implemented `SensorConfiguration` struct
- [ ] Mapped ROS parameters in Wrapper
- [ ] Implemented `unpack()` method in Decoder
- [ ] Implemented scan completion detection
- [ ] Implemented communication in HW Interface
- [ ] Implemented startup sequence
- [ ] Implemented shutdown sequence
- [ ] Added diagnostic reporting
- [ ] Added packet liveness diagnostics
### Verification tasks

- [ ] Verified build succeeds without warnings
- [ ] Verified point cloud publishes
- [ ] Verified scan rate matches expected
- [ ] Verified diagnostics report correctly
- [ ] Tested with real sensor
- [ ] Tested rosbag recording of Nebula packet output
- [ ] Tested rosbag replay with `launch_hw:=false`
## Additional resources

- Hesai implementation: `src/nebula_hesai` - Full reference implementation
- Velodyne implementation: `src/nebula_velodyne` - Alternative reference
- Core components: `src/nebula_core` - Reusable building blocks
- Point types: See `docs/point_types.md`
- Parameters: See `docs/parameters.md`
## Getting help

- Check existing sensor implementations for examples
- Consult the API reference
- Ask questions in GitHub Issues