TIER IV API
Tier4
Database class for the T4 dataset that helps query and retrieve information from the database.
Loads the database and creates reverse indexes and shortcuts.
Parameters:
- version (str) – Directory name of database json files.
- data_root (str) – Path to the root directory of the dataset.
- verbose (bool, default: True) – Whether to display status during load.
Examples:
>>> from t4_devkit import Tier4
>>> t4 = Tier4("annotation", "data/tier4")
======
Loading T4 tables in `annotation`...
Reverse indexing...
Done reverse indexing in 0.010 seconds.
======
21 category
8 attribute
4 visibility
31 instance
7 sensor
7 calibrated_sensor
2529 ego_pose
1 log
1 scene
88 sample
2529 sample_data
1919 sample_annotation
0 object_ann
0 surface_ann
0 keypoint
1 map
Done loading in 0.046 seconds.
======
get_table
Return the list of dataclasses corresponding to the schema table.
Parameters:
- schema (str | SchemaName) – Name of schema table.
Returns:
- list[SchemaTable] – List of dataclasses.
get
Return a record identified by the associated token.
Parameters:
- schema (str | SchemaName) – Name of schema.
- token (str) – Token to identify the specific record.
Returns:
- SchemaTable – Table record of the corresponding token.
get_idx
Return the index of the record in a table in constant time.
Parameters:
- schema (str | SchemaName) – Name of schema.
- token (str) – Token of record.
Returns:
- int – The index of the record in the table.
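The constant-time lookup behind get_idx is typically backed by a token-to-index reverse map built once at load time (the "Reverse indexing..." step shown in the constructor example). A minimal sketch of that pattern; the records data and helper functions below are illustrative, not t4_devkit internals:

```python
# Illustrative schema table: a list of records keyed by unique tokens.
records = [
    {"token": "a1b2", "name": "car"},
    {"token": "c3d4", "name": "pedestrian"},
    {"token": "e5f6", "name": "bicycle"},
]

# Build the reverse index once: O(n) build cost, O(1) lookups afterwards.
token2idx = {rec["token"]: i for i, rec in enumerate(records)}

def get_idx(token: str) -> int:
    """Return the index of the record in constant time."""
    return token2idx[token]

def get(token: str) -> dict:
    """Return the record identified by the token."""
    return records[get_idx(token)]

print(get_idx("c3d4"))      # → 1
print(get("e5f6")["name"])  # → bicycle
```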
get_sample_data_path
get_sample_data
get_sample_data(
sample_data_token: str,
*,
selected_ann_tokens: list[str] | None = None,
as_3d: bool = True,
as_sensor_coord: bool = True,
future_seconds: float = 0.0,
visibility: VisibilityLevel = VisibilityLevel.NONE,
) -> tuple[str, list[BoxType], CamIntrinsicType | None]
Return the data path as well as all annotations related to that sample_data record.
Note that output boxes are with respect to the base_link or sensor coordinate system.
Parameters:
- sample_data_token (str) – Token of sample_data.
- selected_ann_tokens (list[str] | None, default: None) – Specify if you want to extract only particular annotations.
- as_3d (bool, default: True) – Whether to return 3D or 2D boxes.
- as_sensor_coord (bool, default: True) – Whether to transform boxes into the sensor origin coordinate system.
- future_seconds (float, default: 0.0) – Future time in [s].
- visibility (VisibilityLevel, default: NONE) – If sample_data is an image, this sets the required visibility for 3D boxes only.
Returns:
- tuple[str, list[BoxType], CamIntrinsicType | None] – Data path, list of boxes, and camera intrinsic matrix (if any).
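The note above says boxes are returned in either the base_link or the sensor frame, depending on as_sensor_coord. The change of frame is a standard rigid-body transform. A minimal numpy sketch with an illustrative sensor calibration (not real t4_devkit data or code):

```python
import numpy as np

# Illustrative calibrated_sensor extrinsics: the sensor pose in the base_link
# frame, here a 90-degree yaw and a 1 m forward offset.
yaw = np.pi / 2
R_base_from_sensor = np.array([
    [np.cos(yaw), -np.sin(yaw), 0.0],
    [np.sin(yaw),  np.cos(yaw), 0.0],
    [0.0,          0.0,         1.0],
])
t_base_from_sensor = np.array([1.0, 0.0, 0.0])

def base_to_sensor(p_base: np.ndarray) -> np.ndarray:
    """Re-express a point given in base_link coordinates in the sensor frame."""
    return R_base_from_sensor.T @ (p_base - t_base_from_sensor)

# A box center 3 m ahead of the vehicle lies 2 m along the sensor's -y axis.
center_base = np.array([3.0, 0.0, 0.0])
print(np.round(base_to_sensor(center_base), 6))  # → [ 0. -2.  0.]
```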
get_semantic_label
get_semantic_label(
category_token: str,
attribute_tokens: list[str] | None = None,
) -> SemanticLabel
Return a SemanticLabel instance from the specified category_token and attribute_tokens.
Parameters:
- category_token (str) – Token of the Category table.
- attribute_tokens (list[str] | None, default: None) – List of attribute tokens.
Returns:
- SemanticLabel – Instantiated SemanticLabel.
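Conceptually, this method resolves the tokens against the Category and Attribute tables and bundles the results into one label object. A minimal sketch of that resolution step; the token tables and the SemanticLabel stand-in below are illustrative, not t4_devkit's actual types:

```python
from dataclasses import dataclass

# Illustrative token tables; in t4_devkit these come from the Category and
# Attribute schema tables loaded from JSON.
CATEGORY = {"cat-001": "vehicle.car"}
ATTRIBUTE = {"attr-001": "vehicle_state.moving", "attr-002": "occlusion_state.none"}

@dataclass(frozen=True)
class SemanticLabel:
    """Minimal stand-in for t4_devkit's SemanticLabel."""
    name: str
    attributes: tuple = ()

def get_semantic_label(category_token, attribute_tokens=None) -> SemanticLabel:
    """Resolve tokens to names and bundle them into a SemanticLabel."""
    attrs = tuple(ATTRIBUTE[t] for t in (attribute_tokens or []))
    return SemanticLabel(CATEGORY[category_token], attrs)

label = get_semantic_label("cat-001", ["attr-001"])
print(label.name)        # → vehicle.car
print(label.attributes)  # → ('vehicle_state.moving',)
```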
get_box3d
get_box2d
get_box3ds
Return a list of Box3D classes for all annotations of a particular sample_data record.
If the sample_data is a keyframe, this returns annotations for the corresponding sample.
Parameters:
- sample_data_token (str) – Token of sample_data.
- future_seconds (float, default: 0.0) – Future time in [s].
Returns:
- list[Box3D] – List of boxes.
get_box2ds
box_velocity
Return the velocity of an annotation. If the corresponding annotation has a true velocity, this returns it. Otherwise, the velocity is estimated by computing the difference between the previous and next frames. If the velocity cannot be estimated, values are set to np.nan.
Parameters:
- sample_annotation_token (str) – Token of sample_annotation.
- max_time_diff (float, default: 1.5) – Max allowed time difference between consecutive samples.
Returns:
- VelocityType – Velocity in the order of (vx, vy, vz) in m/s.
TODO
Currently, velocity coordinates are with respect to the map frame, but they should be with respect to each box.
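The documented fallback, estimating velocity as a finite difference between the previous and next frames and rejecting estimates when frames are too far apart, can be sketched as follows. This is an illustrative numpy sketch, not t4_devkit's actual implementation:

```python
import numpy as np

def estimate_velocity(pos_prev, t_prev, pos_next, t_next, max_time_diff=1.5):
    """Finite-difference velocity between neighboring annotations.

    Mirrors the documented behavior: if the two frames are further apart
    than max_time_diff seconds, the estimate is rejected and NaNs are
    returned. Illustrative sketch only.
    """
    dt = t_next - t_prev
    if dt <= 0.0 or dt > max_time_diff:
        return np.full(3, np.nan)
    return (np.asarray(pos_next) - np.asarray(pos_prev)) / dt

# Two annotations of the same instance 0.5 s apart, 5 m of forward motion.
v = estimate_velocity([0.0, 0.0, 0.0], 10.0, [5.0, 0.0, 0.0], 10.5)
print(v)  # → [10.  0.  0.]

# Frames too far apart: estimation fails, NaNs as documented.
print(estimate_velocity([0.0, 0.0, 0.0], 10.0, [5.0, 0.0, 0.0], 12.0))
```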
project_pointcloud
project_pointcloud(
point_sample_data_token: str,
camera_sample_data_token: str,
min_dist: float = 1.0,
*,
ignore_distortion: bool = False,
) -> tuple[NDArrayF64, NDArrayF64, NDArrayU8]
Project pointcloud on image plane.
Parameters:
- point_sample_data_token (str) – Sample data token of lidar or radar sensor.
- camera_sample_data_token (str) – Sample data token of camera.
- min_dist (float, default: 1.0) – Distance from the camera below which points are discarded.
- ignore_distortion (bool, default: False) – Whether to ignore distortion parameters.
Returns:
- tuple[NDArrayF64, NDArrayF64, NDArrayU8] – Projected points [2, n], their normalized depths [n], and an image.
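The core of such a projection is a pinhole-camera model: points already expressed in the camera frame are perspective-divided by depth, mapped through the intrinsic matrix, and filtered by min_dist. A minimal numpy sketch of the distortion-free case, with an illustrative intrinsic matrix; this is not t4_devkit's actual code:

```python
import numpy as np

def project_points(points_cam, K, min_dist=1.0):
    """Project 3D points (already in the camera frame) onto the image plane.

    points_cam: (3, n) array of points; K: (3, 3) intrinsic matrix.
    Points with depth below min_dist are discarded, matching the documented
    behavior; distortion is ignored. Illustrative sketch only.
    """
    depths = points_cam[2]
    keep = depths > min_dist
    pts = points_cam[:, keep]
    uv = (K @ (pts / pts[2]))[:2]  # perspective divide, then intrinsics
    return uv, depths[keep]

# Illustrative intrinsics: focal length 1000 px, principal point (640, 360).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
# Three points as columns; the third has depth 0.5 m and is discarded.
points = np.array([[1.0, 0.0, -2.0],
                   [0.0, 1.0,  0.0],
                   [10.0, 5.0, 0.5]])
uv, depths = project_points(points, K)
print(uv)      # pixel coordinates of the two surviving points
print(depths)  # → [10.  5.]
```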
render_scene
render_scene(
scene_token: str,
*,
max_time_seconds: float = np.inf,
future_seconds: float = 0.0,
save_dir: str | None = None,
show: bool = True,
) -> None
Render specified scene.
Parameters:
- scene_token (str) – Unique identifier of scene.
- max_time_seconds (float, default: inf) – Max time length to be rendered [s].
- future_seconds (float, default: 0.0) – Future time in [s].
- save_dir (str | None, default: None) – Directory path to save the recording.
- show (bool, default: True) – Whether to spawn rendering viewer.
render_instance
render_instance(
instance_token: str | Sequence[str],
*,
future_seconds: float = 0.0,
save_dir: str | None = None,
show: bool = True,
) -> None
Render particular instance.
Parameters:
- instance_token (str | Sequence[str]) – Token(s) of the instance(s) to render.
- future_seconds (float, default: 0.0) – Future time in [s].
- save_dir (str | None, default: None) – Directory path to save the recording.
- show (bool, default: True) – Whether to spawn rendering viewer.
render_pointcloud
render_pointcloud(
scene_token: str,
*,
max_time_seconds: float = np.inf,
ignore_distortion: bool = True,
save_dir: str | None = None,
show: bool = True,
) -> None
Render pointcloud on 3D and 2D view.
Parameters:
- scene_token (str) – Scene token.
- max_time_seconds (float, default: inf) – Max time length to be rendered [s].
- ignore_distortion (bool, default: True) – Whether to ignore distortion parameters.
- save_dir (str | None, default: None) – Directory path to save the recording.
- show (bool, default: True) – Whether to spawn rendering viewer.
TODO
Add an option to render radar channels.