Record

Frequency #

__init__(records, target_column=None, row_filter=None) #

Construct an instance.

Parameters:

Name Type Description Default
records RecordsInterface

records to calculate frequency.

required
target_column str | None

Column name of timestamps used in the calculation, by default None. If None, the first column of records is selected.

None
row_filter Callable | None

Filter function to select rows used to calculate frequency, by default None. Example:

def row_filter(record: RecordInterface) -> bool:
    start_column = 'timestamp1'
    end_column = 'timestamp2'
    return (record.data.get(start_column) is not None
            and record.data.get(end_column) is not None)

None

to_records(interval_ns=1000000000, base_timestamp=None, until_timestamp=None, converter=None) #

Calculate frequency records.

Parameters:

Name Type Description Default
interval_ns int

Interval used for frequency calculation, by default 1000000000 [ns]. The number of timestamps that exist in this time interval is counted.

1000000000
base_timestamp int | None

Initial timestamp used for frequency calculation, by default None. If None, earliest timestamp is used.

None
until_timestamp int | None

End time of measurement, by default None. If None, the latest timestamp is used.

None
converter ClockConverter | None

Converter to simulation time.

None

Returns:

Type Description
RecordsInterface

frequency records. Columns - {timestamp_column} - {frequency_column}
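
The interval counting that this method performs can be sketched in plain Python. This is an illustrative approximation of the documented behavior, not the library's implementation; the function name and return shape are hypothetical:

```python
from collections import Counter

def frequency_records(timestamps, interval_ns=1_000_000_000, base_timestamp=None):
    """Count how many timestamps fall into each fixed-width interval."""
    if not timestamps:
        return []
    # If no base is given, start counting from the earliest timestamp.
    base = min(timestamps) if base_timestamp is None else base_timestamp
    counts = Counter(
        base + (ts - base) // interval_ns * interval_ns  # interval start
        for ts in timestamps
        if ts >= base
    )
    return sorted(counts.items())

print(frequency_records([0, 100, 900_000_000, 1_500_000_000]))
# → [(0, 3), (1000000000, 1)]
```

Three timestamps fall into the first one-second interval and one into the second, matching the "number of timestamps that exist in this time interval" description above.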

Latency #

__init__(records, start_column=None, end_column=None) #

Construct an instance.

Parameters:

Name Type Description Default
records RecordsInterface

records to calculate latency.

required
start_column str | None

Column name of start timestamps used in the calculation, by default None. If None, the first column of records is selected.

None
end_column str | None

Column name of end timestamps used in the calculation, by default None. If None, the last column of records is selected.

None

to_records(converter=None) #

Calculate latency records.

Parameters:

Name Type Description Default
converter ClockConverter | None

Converter to simulation time.

None

Returns:

Type Description
RecordsInterface

latency records. Columns - {start_timestamp_column} - {latency_column}
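
The calculation reduces to an end-minus-start difference per row. A minimal plain-Python sketch of that semantics (hypothetical names, not the library code):

```python
def latency_records(rows, start_column, end_column):
    """Compute end - start per row, keyed by the start timestamp.
    Rows missing either timestamp are skipped."""
    return [
        {start_column: row[start_column],
         'latency': row[end_column] - row[start_column]}
        for row in rows
        if row.get(start_column) is not None and row.get(end_column) is not None
    ]

rows = [{'start': 0, 'end': 5}, {'start': 10, 'end': 12}]
print(latency_records(rows, 'start', 'end'))
# → [{'start': 0, 'latency': 5}, {'start': 10, 'latency': 2}]
```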

Period #

__init__(records, target_column=None, row_filter=None) #

Construct an instance.

Parameters:

Name Type Description Default
records RecordsInterface

records to calculate period.

required
target_column str | None

Column name of timestamps used in the calculation, by default None. If None, the first column of records is selected.

None
row_filter Callable | None

Filter function to select rows used to calculate period, by default None. Example:

def row_filter(record: RecordInterface) -> bool:
    start_column = 'timestamp1'
    end_column = 'timestamp2'
    return (record.data.get(start_column) is not None
            and record.data.get(end_column) is not None)

None

to_records(converter=None) #

Calculate period records.

Parameters:

Name Type Description Default
converter ClockConverter | None

Converter to simulation time.

None

Returns:

Type Description
RecordsInterface

period records. Columns - {timestamp_column} - {period_column}
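
Period is the gap between consecutive timestamps, so each record pairs a timestamp with the distance to the next one. A simplified sketch of that idea (illustrative names, not the library implementation):

```python
def period_records(timestamps):
    """Pair each timestamp with the gap to the following timestamp.
    The last timestamp has no successor, so it produces no record."""
    return [
        {'timestamp': prev, 'period': curr - prev}
        for prev, curr in zip(timestamps, timestamps[1:])
    ]

print(period_records([0, 3, 7]))
# → [{'timestamp': 0, 'period': 3}, {'timestamp': 3, 'period': 4}]
```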

Range #

Class that calculates minimum/maximum timestamps from a list of records.

get_range() #

Get minimum and maximum timestamps.

Returns:

Type Description
tuple[int, int]

Minimum and Maximum timestamps.

Notes

The first column is system time for now. The other columns could be other than system time. Only the system time is picked out here.
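
Since only the first-column (system time) values are considered, the computation is a min/max over those values. A plain-Python sketch, assuming each record is represented here as a tuple whose first element is the system time (a hypothetical simplification):

```python
def get_range(records):
    """Return (minimum, maximum) over the first-column system-time values."""
    stamps = [row[0] for row in records]
    return min(stamps), max(stamps)

print(get_range([(3, 'a'), (1, 'b'), (7, 'c')]))
# → (1, 7)
```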

RecordInterface #

Interface for Record class.

This behavior is similar to the dictionary type. To avoid conflicts with the pybind metaclass, ABC is not used.

columns: set[str] abstractmethod property #

Get column names.

Returns:

Type Description
set[str]

Column names.

data: dict[str, int] abstractmethod property #

Convert to dictionary.

Returns:

Name Type Description
data dict[str, int]

dictionary data.

add(key, stamp) abstractmethod #

Add or update a column value.

Parameters:

Name Type Description Default
key str

key name to set.

required
stamp int

key value to set.

required

change_dict_key(old_key, new_key) abstractmethod #

Change column name.

Parameters:

Name Type Description Default
old_key str

column name to be changed.

required
new_key str

new column name.

required

drop_columns(columns) abstractmethod #

Drop columns.

Parameters:

Name Type Description Default
columns list[str]

columns to be dropped.

required

equals(other) abstractmethod #

Compare record.

Parameters:

Name Type Description Default
other RecordInterface

comparison target.

required

Returns:

Type Description
bool

True if record data is the same, otherwise False.

get(key) abstractmethod #

Get value for specific column.

Parameters:

Name Type Description Default
key str

key name to get.

required

Returns:

Type Description
int

Value for selected key.

get_with_default(key, v) abstractmethod #

Get value for specific column.

Parameters:

Name Type Description Default
key str

key name to get.

required
v int

default value.

required

Returns:

Type Description
int

Value for selected key.

merge(other) abstractmethod #

Merge record.

Parameters:

Name Type Description Default
other RecordInterface

merge target.

required

Returns:

Type Description
Record

Merged record if inplace=False, otherwise None.
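
Since the interface behaves like a dictionary, a minimal in-memory class satisfying its contract can be sketched as follows. This is a hypothetical illustration of the methods listed above, not the pybind-backed implementation:

```python
class SimpleRecord:
    """A dict-backed sketch of the RecordInterface contract."""

    def __init__(self, init=None):
        self._data = dict(init or {})

    @property
    def columns(self):
        return set(self._data)

    @property
    def data(self):
        return dict(self._data)

    def add(self, key, stamp):
        self._data[key] = stamp  # add, or update if the key exists

    def change_dict_key(self, old_key, new_key):
        self._data[new_key] = self._data.pop(old_key)

    def drop_columns(self, columns):
        for column in columns:
            self._data.pop(column, None)

    def equals(self, other):
        return self.data == other.data

    def get(self, key):
        return self._data[key]

    def get_with_default(self, key, v):
        return self._data.get(key, v)

    def merge(self, other):
        self._data.update(other.data)

record = SimpleRecord({'stamp': 1})
record.add('latency', 5)
record.change_dict_key('stamp', 'start')
```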

RecordsInterface #

Interface for Records class.

To avoid conflicts with the pybind metaclass, ABC is not used.

columns: list[str] abstractmethod property #

Get column names.

Returns:

Type Description
list[str]

Columns.

data: Sequence[RecordInterface] abstractmethod property #

Get records list.

Returns:

Type Description
Sequence[RecordInterface]

Records list.

append_column(column, values) abstractmethod #

Append column to records.

Parameters:

Name Type Description Default
column ColumnValue

column to append.

required
values list[int]

values of the appended column.

required

bind_drop_as_delay() abstractmethod #

Convert dropped points into records treated as delay.

clone() abstractmethod #

Get duplicated records.

Returns:

Type Description
RecordsInterface

deep-copied records.

concat(other) abstractmethod #

Concatenate records.

Parameters:

Name Type Description Default
other RecordsInterface

records to be concatenated.

required

drop_columns(columns) abstractmethod #

Drop columns.

Parameters:

Name Type Description Default
columns list[str]

columns to be dropped.

required

equals(other) abstractmethod #

Compare records.

Parameters:

Name Type Description Default
other RecordsInterface

comparison target.

required

Returns:

Type Description
bool

True if record data is the same, otherwise False.

filter_if(f) abstractmethod #

Get filtered records.

Parameters:

Name Type Description Default
f Callable[[RecordInterface], bool]

condition function.

required

groupby(columns) abstractmethod #

Split records based on the values of the given columns.

Parameters:

Name Type Description Default
columns list[str]

column name list.

required

Returns:

Type Description
dict[tuple[int, ...], RecordsInterface]

Deep-copied records grouped by the values of the given columns.
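
The grouping keys each group by the tuple of the selected columns' values, as the return type above suggests. A plain-Python sketch of that semantics over dict rows (hypothetical names, not the library code):

```python
def groupby_records(rows, columns):
    """Group rows by a tuple of the given columns' values."""
    groups = {}
    for row in rows:
        key = tuple(row[c] for c in columns)
        groups.setdefault(key, []).append(dict(row))  # copy each row
    return groups

rows = [{'pid': 1, 't': 0}, {'pid': 2, 't': 1}, {'pid': 1, 't': 2}]
print(groupby_records(rows, ['pid']))
# → {(1,): [{'pid': 1, 't': 0}, {'pid': 1, 't': 2}], (2,): [{'pid': 2, 't': 1}]}
```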

merge(right_records, join_left_key, join_right_key, columns, how) abstractmethod #

Merge records by key match.

Parameters:

Name Type Description Default
right_records RecordsInterface

merge target.

required
join_left_key str

Key to use for matching.

required
join_right_key str

Key to use for matching.

required
columns list[str]

columns

required
how str

merge type: 'inner', 'left', 'right', or 'outer'.

required

Returns:

Type Description
RecordsInterface

Examples:

>>> left_records = Records([
    Record({'join_key': 1, 'left_other': 1}),
    Record({'join_key': 2, 'left_other': 2}),
])
>>> right_records = Records([
    Record({'join_key': 2, 'right_other': 3}),
    Record({'join_key': 1, 'right_other': 4}),
])
>>> expected = Records([
    Record({'join_key': 1, 'left_other': 1, 'right_other': 4}),
    Record({'join_key': 2, 'left_other': 2, 'right_other': 3}),
])
>>> left_records.merge(
    right_records, 'join_key', 'join_key',
    ['join_key', 'left_other', 'right_other'], 'inner'
).equals(expected)
True

merge_sequential(right_records, left_stamp_key, right_stamp_key, join_left_key, join_right_key, columns, how) abstractmethod #

Merge chronologically contiguous records.

Merge each record in left_records with the record in right_records whose right_stamp_key timestamp occurred immediately after its left_stamp_key timestamp. If join keys are set, left_records[join_left_key] == right_records[join_right_key] is added as a condition.

Parameters:

Name Type Description Default
right_records RecordsInterface

merge target.

required
left_stamp_key str

left records key name to use for comparison in time series merge.

required
right_stamp_key str

right records key name to use for comparison in time series merge.

required
join_left_key str | None

join key name to use equal condition.

required
join_right_key str | None

join key name to use equal condition.

required
columns list[str]

columns

required
how str

merge type: 'inner', 'left', 'right', or 'outer'.

required

Returns:

Type Description
RecordsInterface

Merged records.

Examples:

>>> left_records = Records([
    Record({'join_key': 1, 'left_stamp_key': 0}),
    Record({'join_key': 2, 'left_stamp_key': 3})
])
>>> right_records = Records([
    Record({'join_key': 2, 'right_stamp_key': 5}),
    Record({'join_key': 1, 'right_stamp_key': 6})
])
>>> expected = Records([
    Record({'join_key': 1, 'left_stamp_key': 0, 'right_stamp_key': 6}),
    Record({'join_key': 2, 'left_stamp_key': 3, 'right_stamp_key': 5}),
])
>>> left_records.merge_sequential(
    right_records, 'left_stamp_key', 'right_stamp_key', 'join_key', 'join_key',
    ['join_key', 'left_stamp_key', 'right_stamp_key'], 'inner'
).equals(expected)
True
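
The "immediately after" pairing can be sketched in plain Python over dict rows. This is a simplified inner-merge illustration of the rule described above, not the library's algorithm:

```python
def merge_sequential(left, right, left_stamp_key, right_stamp_key, join_key):
    """For each left row, attach the first right row at or after its timestamp
    that shares the same join-key value (simplified inner-merge sketch)."""
    merged = []
    for lrow in left:
        candidates = [
            rrow for rrow in right
            if rrow[join_key] == lrow[join_key]
            and rrow[right_stamp_key] >= lrow[left_stamp_key]
        ]
        if candidates:
            nearest = min(candidates, key=lambda rrow: rrow[right_stamp_key])
            merged.append({**lrow, **nearest})
    return merged

left = [{'join_key': 1, 'ls': 0}, {'join_key': 2, 'ls': 3}]
right = [{'join_key': 2, 'rs': 5}, {'join_key': 1, 'rs': 6}]
print(merge_sequential(left, right, 'ls', 'rs', 'join_key'))
# → [{'join_key': 1, 'ls': 0, 'rs': 6}, {'join_key': 2, 'ls': 3, 'rs': 5}]
```

This reproduces the pairing in the doctest example above: each left record picks the nearest later right record with a matching join key.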

merge_sequential_for_addr_track(source_stamp_key, source_key, copy_records, copy_stamp_key, copy_from_key, copy_to_key, sink_records, sink_stamp_key, sink_from_key, columns) abstractmethod #

Merge for tracking addresses when copying occurs.

Parameters:

Name Type Description Default
source_stamp_key str

key name indicating time stamp for source records

required
source_key str

Key name indicating the address of the copy source for source records.

required
copy_records RecordsInterface

copy records

required
copy_stamp_key str

key name indicating time stamp for copy records

required
copy_from_key str

Key name indicating the address of the copy source for copy records.

required
copy_to_key str

Key name indicating the address of the copy destination

required
sink_records RecordsInterface

sink-side records

required
sink_stamp_key str

key name indicating time stamp for sink records

required
sink_from_key str

Key name indicating the address of the copy destination for sink records

required
columns list[str]

columns

required

Returns:

Type Description
RecordsInterface

Merged records.

Examples:

>>> source_records = Records([
    Record({'source_key': 1, 'source_stamp': 0}),
])
>>> copy_records = Records([
    Record({'copy_from_key': 1, 'copy_to_key': 11, 'copy_stamp_key': 1})
])
>>> sink_records = Records([
    Record({'sink_from_key': 11, 'sink_stamp': 2}),
    Record({'sink_from_key': 1, 'sink_stamp': 3}),
])
>>> expected = Records([
    Record({'source_stamp':0, 'sink_stamp':3, 'source_key':1}),
    Record({'source_stamp':0, 'sink_stamp':2, 'source_key':1}),
])
>>> source_records.merge_sequential_for_addr_track(
    'source_stamp', 'source_key', copy_records, 'copy_stamp_key', 'copy_from_key',
    'copy_to_key', sink_records, 'sink_stamp', 'sink_from_key',
    ['source_key', 'source_stamp', 'sink_stamp']
).equals(expected)
True

reindex(columns) #

Reindex columns.

Parameters:

Name Type Description Default
columns list[str]

column names in the desired order.

required

rename_columns(columns) abstractmethod #

Rename columns.

Parameters:

Name Type Description Default
columns dict[str, str]

Rename mapping from old to new column names, same as DataFrame.rename.

required

sort(key, sub_key=None, ascending=True) abstractmethod #

Sort records.

Parameters:

Name Type Description Default
key str

key name used for sorting.

required
sub_key str | None

secondary key name used for sorting, by default None.

None
ascending bool

Ascending if True, descending if False.

True

sort_column_order(ascending=True, put_none_at_top=True) abstractmethod #

Sort records by ordered columns.

Parameters:

Name Type Description Default
ascending bool

Ascending if True, descending if False.

True
put_none_at_top bool

If True, records containing None are placed at the top.

True

to_dataframe() abstractmethod #

Convert to pandas DataFrame.

Returns:

Type Description
DataFrame

Records data.

ResponseTime #

Class that calculates response time.

Examples:

>>> from caret_analyze import Application, Architecture, Lttng
>>> from caret_analyze.record import ResponseTime
>>> # Load results
>>> arch = Architecture('yaml', '/path/to/yaml')
>>> lttng = Lttng('/path/to/ctf')
>>> app = Application(arch, lttng)
>>> # Select target instance
>>> node = app.get_node('node_name')
>>> callback = node.callbacks[0]
>>> callback.summary.pprint()
>>> # Calculate response time
>>> records = callback.to_records()
>>> response = ResponseTime(records)
>>> response_time_records = response.to_best_case_records()
>>> response_df = response_time_records.to_dataframe()
>>> path = app.get_path('path_name')
>>> records = path.to_records()
>>> response = ResponseTime(records)
>>> response_time_records = response.to_best_case_records()
>>> response_df = response_time_records.to_dataframe()

__init__(records, *, columns=None) #

Construct an instance.

Parameters:

Name Type Description Default
records RecordsInterface

records to calculate response time.

required
columns list[str] | None

List of column names to be used in the return value. If None, only the first and last columns are used.

None

to_all_records(converter=None) #

Calculate the data of all records for response time.

This represents the response time for all cases of message flows with the same output.

Parameters:

Name Type Description Default
converter ClockConverter | None

Converter to simulation time.

None

Returns:

Type Description
RecordsInterface

Records of all the response times. Columns - {columns[0]} - {'response_time'}

to_all_stacked_bar() #

Calculate records for stacked bar.

Returns:

Type Description
RecordsInterface

Records of all the response times.

Columns - {columns[0]} - {columns[1]} - {...} - {columns[n-1]}

to_best_case_records(converter=None) #

Calculate data of the best case records for response time.

This represents the response time for the newest case in the message flow with the same output.

Parameters:

Name Type Description Default
converter ClockConverter | None

Converter to simulation time.

None

Returns:

Type Description
RecordsInterface

Records of the best-case response time.

Columns - {columns[0]} - {'response_time'}
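
The best/worst distinction above reduces to which input is kept when several message flows share one output: the newest input gives the best case, the oldest the worst. A plain-Python sketch of that selection (hypothetical names and return shape, not the library implementation):

```python
def response_time_records(flows, case='best'):
    """flows: (input_ts, output_ts) pairs; several inputs may share one output.
    Best case keeps the newest input per output, worst case the oldest."""
    by_output = {}
    for start, end in flows:
        by_output.setdefault(end, []).append(start)
    pick = max if case == 'best' else min  # newest vs. oldest input
    return [
        {'start': pick(starts), 'response_time': end - pick(starts)}
        for end, starts in sorted(by_output.items())
    ]

flows = [(0, 10), (4, 10), (6, 20)]
print(response_time_records(flows, 'best'))
# → [{'start': 4, 'response_time': 6}, {'start': 6, 'response_time': 14}]
```

With the same flows, the worst case instead keeps the input at time 0 for the output at time 10, yielding a response time of 10.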

to_best_case_stacked_bar() #

Calculate records for stacked bar.

Returns:

Type Description
RecordsInterface

Records of the best-case response time.

Columns - {columns[0]} - {columns[1]} - {...} - {columns[n-1]}

to_worst_case_records(converter=None) #

Calculate data of the worst case records for response time.

This represents the response time for the oldest case in the message flow with the same output.

Parameters:

Name Type Description Default
converter ClockConverter | None

Converter to simulation time.

None

Returns:

Type Description
RecordsInterface

Records of the worst-case response time.

Columns - {columns[0]} - {'response_time'}

to_worst_case_stacked_bar() #

Calculate records for stacked bar.

Returns:

Type Description
RecordsInterface

Records of the worst-case response time.

Columns - {columns[0]} - {columns[1]} - {...} - {columns[n-1]}

to_worst_with_external_latency_case_records(converter=None) #

Calculate data of the worst-with-external-latency case records for response time.

This represents the response time for the oldest case in the message flow with the same output as well as delays caused by various factors such as lost messages.

Parameters:

Name Type Description Default
converter ClockConverter | None

Converter to simulation time.

None

Returns:

Type Description
RecordsInterface

Records of the worst-with-external-latency case response time.

Columns - {columns[0]} - {'response_time'}

to_worst_with_external_latency_case_stacked_bar() #

Calculate records for stacked bar.

Returns:

Type Description
RecordsInterface

The best and worst cases are split into separate columns.

Columns - {columns[0]}_min - {columns[0]}_max - {columns[1]} - {...} - {columns[n-1]}

StackedBar #

__init__(records, converter=None) #

Generate records for stacked bar.

Parameters:

Name Type Description Default
records RecordsInterface

Records of response time.

required
converter ClockConverter | None

Converter to simulation time.

None

Raises:

Type Description
ValueError

Raised if the records are empty.

to_dict() #

Get stacked bar dict data.

Returns:

Type Description
dict[str, list[int]]

Stacked bar dict data.
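
The returned shape pairs each column name with the list of its values across records, which can be sketched in plain Python (an illustrative simplification, not the library code):

```python
def to_stacked_bar_dict(rows, columns):
    """One value list per column, in the given column order."""
    return {c: [row[c] for row in rows] for c in columns}

rows = [{'a': 1, 'b': 2}, {'a': 3, 'b': 4}]
print(to_stacked_bar_dict(rows, ['a', 'b']))
# → {'a': [1, 3], 'b': [2, 4]}
```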