TIER IV API
Tier4
Database class for the T4 dataset that helps query and retrieve information from the database.
Loads the database and creates reverse indexes and shortcuts.
Parameters:
- version (str) – Directory name of database json files.
- data_root (str) – Path to the root directory of the dataset.
- verbose (bool, default: True) – Whether to display status during load.
Examples:
>>> from t4_devkit import Tier4
>>> t4 = Tier4("annotation", "data/tier4")
======
Loading T4 tables in `annotation`...
Reverse indexing...
Done reverse indexing in 0.010 seconds.
======
21 category
8 attribute
4 visibility
31 instance
7 sensor
7 calibrated_sensor
2529 ego_pose
1 log
1 scene
88 sample
2529 sample_data
1919 sample_annotation
0 object_ann
0 surface_ann
0 keypoint
1 map
Done loading in 0.046 seconds.
======
get_table
Return the list of dataclasses corresponding to the schema table.
Parameters:
- schema (str | SchemaName) – Name of schema table.
Returns:
- list[SchemaTable] – List of dataclasses.
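A minimal usage sketch, assuming the `t4` instance from the example above; the record count shown matches the load summary.
>>> categories = t4.get_table("category")
>>> len(categories)
21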
get
Return a record identified by the associated token.
Parameters:
- schema (str | SchemaName) – Name of schema.
- token (str) – Token to identify the specific record.
Returns:
- SchemaTable – Table record of the corresponding token.
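A minimal usage sketch, assuming the `t4` instance from the example above and that each record dataclass exposes a `token` field.
>>> first_sample = t4.get_table("sample")[0]
>>> record = t4.get("sample", first_sample.token)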
get_idx
Return the index of the record in a table in constant runtime.
Parameters:
- schema (str | SchemaName) – Name of schema.
- token (str) – Token of record.
Returns:
- int – The index of the record in the table.
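A minimal usage sketch, continuing from the snippet above (the `token` field on record dataclasses is an assumption).
>>> idx = t4.get_idx("sample", first_sample.token)
>>> t4.get_table("sample")[idx].token == first_sample.token
True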
get_sample_data_path
Return the file path to the raw data recorded in sample_data.
Parameters:
- sample_data_token (str) – Token of sample_data.
Returns:
- str – File path.
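A minimal usage sketch, assuming the `t4` instance from the example above.
>>> sd_token = t4.get_table("sample_data")[0].token
>>> filepath = t4.get_sample_data_path(sd_token)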
get_sample_data
get_sample_data(sample_data_token: str, selected_ann_tokens: list[str] | None = None, *, as_3d: bool = True, visibility: VisibilityLevel = VisibilityLevel.NONE) -> tuple[str, list[BoxType], CamIntrinsicType | None]
Return the data path as well as all annotations related to that sample_data.
Parameters:
- sample_data_token (str) – Token of sample_data.
- selected_ann_tokens (list[str] | None, default: None) – Specify if you want to extract only particular annotations.
- as_3d (bool, default: True) – Whether to return 3D or 2D boxes.
- visibility (VisibilityLevel, default: NONE) – If sample_data is an image, this sets the required visibility for 3D boxes only.
Returns:
- tuple[str, list[BoxType], CamIntrinsicType | None] – Data path, a list of boxes, and the 3x3 camera intrinsic matrix.
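A minimal usage sketch, reusing `sd_token` from the snippet above; the unpacking follows the documented return tuple.
>>> data_path, boxes, cam_intrinsic = t4.get_sample_data(sd_token, as_3d=True)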
get_semantic_label
get_semantic_label(category_token: str, attribute_tokens: list[str] | None = None, name_mapping: dict[str, str] | None = None, *, update_default_mapping: bool = False) -> SemanticLabel
Return a SemanticLabel instance from the specified category_token and attribute_tokens.
Parameters:
- category_token (str) – Token of Category table.
- attribute_tokens (list[str] | None, default: None) – List of attribute tokens.
- name_mapping (dict[str, str] | None, default: None) – Category name mapping.
- update_default_mapping (bool, default: False) – Whether to update default category name mapping.
Returns:
- SemanticLabel – Instantiated SemanticLabel.
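A minimal usage sketch, assuming the `t4` instance from the example above and that `instance` records expose a `category_token` field.
>>> instance = t4.get_table("instance")[0]
>>> label = t4.get_semantic_label(instance.category_token)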
get_box3d
Return a Box3D instance from a sample_annotation record.
Parameters:
- sample_annotation_token (str) – Token of sample_annotation.
Returns:
- Box3D – Instantiated Box3D.
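A minimal usage sketch, assuming the `t4` instance from the example above.
>>> ann_token = t4.get_table("sample_annotation")[0].token
>>> box3d = t4.get_box3d(ann_token)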
get_box2d
Return a Box2D instance from an object_ann record.
Parameters:
- object_ann_token (str) – Token of object_ann.
Returns:
- Box2D – Instantiated Box2D.
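A minimal usage sketch; note that the example dataset above contains no object_ann records, so the table may be empty.
>>> object_anns = t4.get_table("object_ann")
>>> if object_anns:
...     box2d = t4.get_box2d(object_anns[0].token)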
get_box3ds
Return a list of Box3D instances for all annotations of a particular sample_data record.
If the sample_data is a keyframe, this returns annotations for the corresponding sample.
Parameters:
- sample_data_token (str) – Token of sample_data.
Returns:
- list[Box3D] – List of instantiated Box3D classes.
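A minimal usage sketch, reusing `sd_token` from the earlier snippets.
>>> boxes3d = t4.get_box3ds(sd_token)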
get_box2ds
Return a list of Box2D instances for all annotations of a particular sample_data record.
If the sample_data is a keyframe, this returns annotations for the corresponding sample.
Parameters:
- sample_data_token (str) – Token of sample_data.
Returns:
- list[Box2D] – List of instantiated Box2D classes.
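A minimal usage sketch; pass the sample_data token of a camera channel if the dataset contains 2D annotations.
>>> boxes2d = t4.get_box2ds(sd_token)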
box_velocity
Return the velocity of an annotation. If the corresponding annotation has a true velocity, this returns it. Otherwise, the velocity is estimated by computing the difference between the previous and next frame. If the velocity cannot be estimated, the values are set to np.nan.
Parameters:
- sample_annotation_token (str) – Token of sample_annotation.
- max_time_diff (float, default: 1.5) – Max allowed time difference between consecutive samples.
Returns:
- VelocityType – Velocity in the order of (vx, vy, vz) in m/s.
TODO
Currently, velocity coordinates are with respect to the map frame, but they should be with respect to each box.
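A minimal usage sketch, reusing `ann_token` from the get_box3d snippet; components may be np.nan when estimation fails.
>>> velocity = t4.box_velocity(ann_token)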
project_pointcloud
project_pointcloud(point_sample_data_token: str, camera_sample_data_token: str, min_dist: float = 1.0, *, ignore_distortion: bool = False) -> tuple[NDArrayF64, NDArrayF64, NDArrayU8]
Project pointcloud on image plane.
Parameters:
- point_sample_data_token (str) – Sample data token of lidar or radar sensor.
- camera_sample_data_token (str) – Sample data token of camera.
- min_dist (float, default: 1.0) – Distance from the camera below which points are discarded.
- ignore_distortion (bool, default: False) – Whether to ignore distortion parameters.
Returns:
- tuple[NDArrayF64, NDArrayF64, NDArrayU8] – Projected points [2, n], their normalized depths [n], and an image.
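A minimal usage sketch; `lidar_sd_token` and `camera_sd_token` are placeholders for sample_data tokens of a lidar and a camera recorded at roughly the same time.
>>> points, depths, image = t4.project_pointcloud(
...     lidar_sd_token, camera_sd_token, min_dist=1.0
... )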
render_scene
render_scene(scene_token: str, *, max_time_seconds: float = np.inf, save_dir: str | None = None, show: bool = True) -> None
Render the specified scene.
Parameters:
- scene_token (str) – Unique identifier of scene.
- max_time_seconds (float, default: inf) – Max time length to be rendered [s].
- save_dir (str | None, default: None) – Directory path to save the recording.
- show (bool, default: True) – Whether to spawn rendering viewer.
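A minimal usage sketch, assuming the `t4` instance from the example above.
>>> scene_token = t4.get_table("scene")[0].token
>>> t4.render_scene(scene_token, max_time_seconds=10.0)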
render_instance
Render a particular instance.
Parameters:
- instance_token (str) – Instance token.
- save_dir (str | None, default: None) – Directory path to save the recording.
- show (bool, default: True) – Whether to spawn rendering viewer.
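A minimal usage sketch, assuming the `t4` instance from the example above.
>>> instance_token = t4.get_table("instance")[0].token
>>> t4.render_instance(instance_token)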
render_pointcloud
render_pointcloud(scene_token: str, *, max_time_seconds: float = np.inf, ignore_distortion: bool = True, save_dir: str | None = None, show: bool = True) -> None
Render the pointcloud in 3D and 2D views.
Parameters:
- scene_token (str) – Scene token.
- max_time_seconds (float, default: inf) – Max time length to be rendered [s].
- save_dir (str | None, default: None) – Directory path to save the recording.
- ignore_distortion (bool, default: True) – Whether to ignore distortion parameters.
- show (bool, default: True) – Whether to spawn rendering viewer.
TODO
Add an option to render radar channels.
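A minimal usage sketch, reusing `scene_token` from the render_scene snippet.
>>> t4.render_pointcloud(scene_token, max_time_seconds=10.0)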