smarts.core.agent_interface module

class smarts.core.agent_interface.Accelerometer[source]

Requires detection of motion changes within the agent’s vehicle.

class smarts.core.agent_interface.AgentInterface(debug: bool = False, event_configuration: ~smarts.core.agent_interface.EventConfiguration = <factory>, done_criteria: ~smarts.core.agent_interface.DoneCriteria = <factory>, max_episode_steps: int | None = None, neighborhood_vehicle_states: ~smarts.core.agent_interface.NeighborhoodVehicles | bool = False, waypoint_paths: ~smarts.core.agent_interface.Waypoints | bool = False, road_waypoints: ~smarts.core.agent_interface.RoadWaypoints | bool = False, drivable_area_grid_map: ~smarts.core.agent_interface.DrivableAreaGridMap | bool = False, occupancy_grid_map: ~smarts.core.agent_interface.OGM | bool = False, top_down_rgb: ~smarts.core.agent_interface.RGB | bool = False, lidar_point_cloud: ~smarts.core.agent_interface.Lidar | bool = False, action: ~smarts.core.controllers.action_space_type.ActionSpaceType | None = None, vehicle_type: str = '', vehicle_class: str = 'generic_sedan', accelerometer: ~smarts.core.agent_interface.Accelerometer | bool = True, lane_positions: ~smarts.core.agent_interface.LanePositions | bool = True, signals: ~smarts.core.agent_interface.Signals | bool = False, occlusion_map: ~smarts.core.agent_interface.OcclusionMap | bool = False, custom_renders: ~typing.Tuple[~smarts.core.agent_interface.CustomRender, ...] = ())[source]

Configure the interface between an Agent and the Environment. Choose the action space and sensors to enable.

accelerometer: Accelerometer | bool = True

Enable acceleration and jerk observations.

action: ActionSpaceType | None = None

The choice of action space; this also decides the controller that will be enabled and the chassis type that will be used.

custom_renders: Tuple[CustomRender, ...] = ()

Add custom renderer outputs.

debug: bool = False

Enable debug information for the various sensors and action spaces.

done_criteria: DoneCriteria

Configurable criteria of when to mark this actor as done. Done actors will be removed from the environment and may trigger the episode to be done.

drivable_area_grid_map: DrivableAreaGridMap | bool = False

Enable the DrivableAreaGridMap sensor. A grid is provided where each cell signals whether the corresponding area of the map is a drivable surface.

event_configuration: EventConfiguration

Configurable criteria of when to trigger events.

static from_type(requested_type: AgentType, **kwargs) AgentInterface[source]

Instantiates from a selection of agent_interface presets.

Parameters:
  • requested_type – Select a prefabricated AgentInterface from an AgentType

  • max_episode_steps – The total number of steps this interface will observe before expiring
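The preset mechanism can be pictured as a lookup from agent type to a bundle of interface settings, with keyword arguments applied as overrides. The sketch below is purely conceptual: the `PRESETS` table, its contents, and the `from_type` helper are illustrative stand-ins, not SMARTS’s actual presets.

```python
# Conceptual sketch of preset selection. The settings shown here are
# illustrative only; the real AgentInterface.from_type builds full
# AgentInterface instances from AgentType members.
PRESETS = {
    "Buddha": {},  # sees nothing and does nothing
    "Laner": {"waypoint_paths": True, "action": "Lane"},
    "Full": {
        "waypoint_paths": True,
        "neighborhood_vehicle_states": True,
        "top_down_rgb": True,
        "action": "Continuous",
    },
}

def from_type(requested_type: str, **kwargs) -> dict:
    """Start from the preset for `requested_type`, then apply overrides."""
    if requested_type not in PRESETS:
        raise ValueError(f"Unknown agent type: {requested_type}")
    config = dict(PRESETS[requested_type])
    config.update(kwargs)  # e.g. max_episode_steps caps the episode length
    return config

config = from_type("Laner", max_episode_steps=500)
```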

lane_positions: LanePositions | bool = True

Enable lane-relative position reporting.

property lidar

Deprecated. Use lidar_point_cloud instead.

lidar_point_cloud: Lidar | bool = False

Enable the LIDAR point cloud sensor.

max_episode_steps: int | None = None

If set, agents will become “done” after this many steps. Set to None to disable.

neighborhood_vehicle_states: NeighborhoodVehicles | bool = False

Enable the Neighborhood Vehicle States sensor. Vehicles around the ego vehicle will be provided.

property neighborhood_vehicles

Deprecated. Use neighborhood_vehicle_states instead.

occlusion_map: OcclusionMap | bool = False

Enable the OcclusionMap for the current vehicle. This image represents what the current vehicle can see.

occupancy_grid_map: OGM | bool = False

Enable the OGM (Occupancy Grid Map) sensor. A grid is provided where each cell signals whether that area in space is occupied.

property ogm

Deprecated. Use occupancy_grid_map instead.

replace(**kwargs) AgentInterface[source]

Clone this AgentInterface with the given fields updated.

>>> interface = AgentInterface(action=ActionSpaceType.Continuous).replace(waypoint_paths=True)
>>> interface.waypoint_paths
Waypoints(...)
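Since AgentInterface is a dataclass, replace behaves like dataclasses.replace: it returns a copy with the given fields updated and everything else unchanged. A minimal illustration using a hypothetical stand-in dataclass (the `Interface` class below is not the real AgentInterface):

```python
from dataclasses import dataclass, replace as dc_replace

@dataclass(frozen=True)
class Interface:
    # Stand-in for AgentInterface; field names mirror two of its fields.
    action: str = ""
    waypoint_paths: bool = False

    def replace(self, **kwargs) -> "Interface":
        # Return a copy with the given fields updated, others unchanged.
        return dc_replace(self, **kwargs)

base = Interface(action="Continuous")
updated = base.replace(waypoint_paths=True)
```

Note that the original instance is untouched; cloning keeps interfaces immutable and shareable across agents.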

property requires_rendering

If this agent interface requires a renderer.

property rgb

Deprecated. Use top_down_rgb instead.

road_waypoints: RoadWaypoints | bool = False

Enable the Road Waypoints sensor. Waypoints along all lanes (oncoming included) of the road the vehicle is currently on will be provided, even if these waypoints do not match the current mission.

signals: Signals | bool = False

Enable the signals sensor.

top_down_rgb: RGB | bool = False

Enable the RGB camera sensor. A top-down color image is provided.

vehicle_class: str = 'generic_sedan'

The choice of vehicle class from the vehicle definition list.

vehicle_type: str = ''

The choice of vehicle type.

waypoint_paths: Waypoints | bool = False

Enable the Waypoint Paths sensor. A list of valid waypoint paths along the current mission is provided.

property waypoints

Deprecated. Use waypoint_paths instead.

class smarts.core.agent_interface.AgentType(value)[source]

Used to select a standard configured agent interface.

Boid = 9

Controls multiple vehicles.

Buddha = 0

Agent sees nothing and does nothing.

Direct = 12

Agent sees neighbor vehicles and performs actions based on direct updates to (acceleration, angular_velocity).

Full = 1

All observations and continuous action space.

Laner = 3

Agent sees waypoints and performs lane actions.

LanerWithSpeed = 7

Agent sees waypoints and performs speed and lane actions.

Loner = 4

Agent sees waypoints and performs continuous actions.

MPCTracker = 10

Agent performs trajectory tracking using model predictive control.

Standard = 2

Minimal observations for dealing with waypoints and other vehicles, and a continuous action space.

StandardWithAbsoluteSteering = 6

Agent sees waypoints, neighbor vehicles and performs continuous actions.

Tagger = 5

Agent sees waypoints, other vehicles, and performs continuous actions.

Tracker = 8

Agent sees waypoints and performs target position action.

TrajectoryInterpolator = 11

Agent performs linear trajectory interpolation.

class smarts.core.agent_interface.AgentsAliveDoneCriteria(minimum_ego_agents_alive: int | None = None, minimum_total_agents_alive: int | None = None, agent_lists_alive: List[AgentsListAlive] | None = None)[source]

Multi-agent requirements used to determine if an agent should be removed from an episode.

agent_lists_alive: List[AgentsListAlive] | None = None

A termination criterion based on the ids of agents. If set, triggers the agent to be done if any list of agents fails to meet its specified minimum number of alive agents. Example:

[
    AgentsListAlive(
        agents_list=['agent1', 'agent2'], minimum_agents_alive_in_list=1
    ),
    AgentsListAlive(
        agents_list=['agent3'], minimum_agents_alive_in_list=1
    ),
]

This agent’s done event would be triggered if both ‘agent1’ and ‘agent2’ are done, or if ‘agent3’ is done.
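The evaluation of these criteria can be sketched with plain sets. This is a simplified illustration of the logic, not SMARTS’s internal implementation; the `done_triggered` helper is hypothetical.

```python
from dataclasses import dataclass
from typing import List, Set

@dataclass
class AgentsListAlive:
    agents_list: List[str]
    minimum_agents_alive_in_list: int

def done_triggered(criteria: List[AgentsListAlive], alive: Set[str]) -> bool:
    """Done if ANY list fails to meet its minimum number of alive agents."""
    return any(
        len(alive.intersection(c.agents_list)) < c.minimum_agents_alive_in_list
        for c in criteria
    )

criteria = [
    AgentsListAlive(agents_list=["agent1", "agent2"], minimum_agents_alive_in_list=1),
    AgentsListAlive(agents_list=["agent3"], minimum_agents_alive_in_list=1),
]

# All three agents alive: no list fails its minimum, so not done.
assert not done_triggered(criteria, {"agent1", "agent2", "agent3"})
# agent3 gone: its list falls below 1 alive, which triggers done.
assert done_triggered(criteria, {"agent1", "agent2"})
```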

minimum_ego_agents_alive: int | None = None

If set, triggers the agent to be done if the total number of alive ego agents falls below the given value.

minimum_total_agents_alive: int | None = None

If set, triggers the agent to be done if total number of alive agents falls below the given value.

class smarts.core.agent_interface.AgentsListAlive(agents_list: List[str], minimum_agents_alive_in_list: int)[source]

Describes agents that are active in the simulation.

agents_list: List[str]

The list of agents to check whether they are alive.

minimum_agents_alive_in_list: int

Triggers the agent to be done if the number of alive agents in agents_list falls below the given value.

class smarts.core.agent_interface.CustomRender(name: str, fragment_shader_path: str | Path, dependencies: Tuple[RenderDependencyBase, ...], width: int = 256, height: int = 256, resolution: float = 0.1953125)[source]

Utility render option that allows for a custom render output.

The width and height are in “pixels” and the resolution is the “size of a pixel”. E.g. if you wanted 100m x 100m map but a 64x64 image representation you would do CustomRender(width=64, height=64, resolution=100/64)
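The pixel/metre arithmetic above is easy to sanity-check: the covered area in meters is width × resolution by height × resolution. A small illustration (the `world_coverage` helper is hypothetical, for demonstration only):

```python
def world_coverage(width: int, height: int, resolution: float):
    """Meters covered by a render of `width` x `height` pixels."""
    return width * resolution, height * resolution

# A 100m x 100m area at 64x64 pixels needs resolution = 100/64.
assert world_coverage(64, 64, 100 / 64) == (100.0, 100.0)

# The defaults (256 x 256 at 0.1953125 m/pixel) cover a 50m x 50m area.
assert world_coverage(256, 256, 0.1953125) == (50.0, 50.0)
```

The same arithmetic applies to DrivableAreaGridMap, OGM, OcclusionMap, and RGB, which share the same width/height/resolution defaults.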

dependencies: Tuple[RenderDependencyBase, ...]

Inputs used by the fragment program.

fragment_shader_path: str | Path

The path string to the fragment shader.

height: int = 256

The number of pixels for the range of v.

name: str

The name used to generate the camera.

resolution: float = 0.1953125

The size of a pixel in meters. At a resolution of 1, the area covered in meters matches the width and height in pixels.

width: int = 256

The number of pixels for the range of u.

class smarts.core.agent_interface.CustomRenderBufferDependency(buffer_dependency_name: str | BufferID, variable_name: str, target_actor: str = '_SELF')[source]

Base for referencing an observation buffer (other than a camera).

buffer_dependency_name: str | BufferID

The identification of buffer to reference.

property name: str

The name of this dependency.

target_actor: str = '_SELF'

The actor to target. Defaults to this agent’s vehicle.

variable_name: str

The variable name inside the shader.

class smarts.core.agent_interface.CustomRenderCameraDependency(camera_dependency_name: str | CameraSensorID, variable_name: str, target_actor: str = '_SELF')[source]

Provides a uniform texture access to an existing camera.

camera_dependency_name: str | CameraSensorID

The name of the camera (type) to target.

is_self_targetted()[source]

If the dependency is drawing from one of this agent’s cameras.

property name: str

The name of this dependency.

target_actor: str = '_SELF'

The actor to target. Defaults to this agent’s vehicle.

variable_name: str

The name of the camera texture variable.

class smarts.core.agent_interface.CustomRenderVariableDependency(value: int | float | bool | ndarray | list | tuple, variable_name: str)[source]

Base for renderer value to pass directly to the shader.

property name: str

The name of this dependency.

value: int | float | bool | ndarray | list | tuple

The value to pass to the shader.

variable_name: str

The uniform name inside the shader.

class smarts.core.agent_interface.DoneCriteria(collision: bool = True, off_road: bool = True, off_route: bool = True, on_shoulder: bool = False, wrong_way: bool = False, not_moving: bool = False, agents_alive: AgentsAliveDoneCriteria | None = None, interest: InterestDoneCriteria | None = None)[source]

Toggle conditions which cause removal of an agent from the current episode.

property actors_alive

Deprecated. Use interest.

agents_alive: AgentsAliveDoneCriteria | None = None

If set, triggers the ego agent to be done based on the number of active agents for multi-agent purposes.

collision: bool = True

End the episode when the agent collides with another vehicle.

interest: InterestDoneCriteria | None = None

If set, triggers when there are no interest vehicles left existing in the simulation.

not_moving: bool = False

End the episode when the agent is not moving for n seconds or more. To account for controller noise, “not moving” means less than or equal to 1 meter of displacement within n seconds.

off_road: bool = True

End the episode when the agent drives off the road.

off_route: bool = True

End the episode when the agent drives off the specified mission route.

on_shoulder: bool = False

End the episode when the agent drives on the road shoulder.

wrong_way: bool = False

End the episode when the agent drives in the wrong direction, even though it may be driving on the mission route.

class smarts.core.agent_interface.DrivableAreaGridMap(width: int = 256, height: int = 256, resolution: float = 0.1953125)[source]

The width and height are in “pixels” and the resolution is the “size of a pixel”. E.g. if you wanted 100m x 100m DrivableAreaGridMap but a 64x64 image representation you would do DrivableAreaGridMap(width=64, height=64, resolution=100/64)

height: int = 256

resolution: float = 0.1953125

width: int = 256

class smarts.core.agent_interface.EventConfiguration(not_moving_time: float = 60, not_moving_distance: float = 1)[source]

Configure the conditions in which an Event is triggered.

not_moving_distance: float = 1

The distance in meters below which the agent’s actor is considered not moving.

not_moving_time: float = 60

The duration in seconds over which the agent’s movement is checked before it is considered not moving.
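Together, not_moving_time and not_moving_distance define a displacement test over a sliding time window. A simplified sketch of that check (the `is_not_moving` helper is hypothetical, not SMARTS’s internal implementation):

```python
import math
from typing import List, Tuple

def is_not_moving(
    positions: List[Tuple[float, float, float]],  # (time_s, x_m, y_m) samples
    not_moving_time: float = 60.0,
    not_moving_distance: float = 1.0,
) -> bool:
    """True when displacement over the last `not_moving_time` seconds
    is at most `not_moving_distance` meters."""
    latest_t, x1, y1 = positions[-1]
    if latest_t - positions[0][0] < not_moving_time:
        return False  # not enough history to judge yet
    # Earliest sample that still falls inside the time window.
    _, x0, y0 = next(p for p in positions if latest_t - p[0] <= not_moving_time)
    return math.hypot(x1 - x0, y1 - y0) <= not_moving_distance

# Stationary for 70s: ~0m displacement over the last 60s.
still = [(float(t), 0.0, 0.0) for t in range(0, 71, 10)]
# Crawling at 0.1 m/s: 6m displacement over the last 60s.
crawl = [(float(t), 0.1 * t, 0.0) for t in range(0, 71, 10)]
```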

class smarts.core.agent_interface.InterestDoneCriteria(actors_filter: Tuple[str, ...] = (), include_scenario_marked: bool = True, strict: bool = False)[source]

Require scenario marked interest actors to exist.

actors_filter: Tuple[str, ...] = ()

Interface-defined interest actors that must exist for this agent to continue.

property actors_pattern: Pattern

The expression match pattern for actors covered by this interface specifically.

include_scenario_marked: bool = True

If scenario marked interest actors should be considered as interest vehicles.

strict: bool = False

If strict, the agent will be done instantly if an actor of interest is not available immediately.

class smarts.core.agent_interface.LanePositions[source]

Computation and reporting of lane-relative RefLine (Frenet) coordinates for all vehicles.

class smarts.core.agent_interface.Lidar(sensor_params: SensorParams = SensorParams(start_angle=0, end_angle=6.283185307179586, laser_angles=array([-0.06981317, -0.06482652, -0.05983986, -0.05485321, -0.04986655, -0.0448799, -0.03989324, -0.03490659, -0.02991993, -0.02493328, -0.01994662, -0.01495997, -0.00997331, -0.00498666, 0., 0.00498666, 0.00997331, 0.01495997, 0.01994662, 0.02493328, 0.02991993, 0.03490659, 0.03989324, 0.0448799, 0.04986655, 0.05485321, 0.05983986, 0.06482652, 0.06981317, 0.07479983, 0.07978648, 0.08477314, 0.08975979, 0.09474645, 0.0997331, 0.10471976, 0.10970641, 0.11469307, 0.11967972, 0.12466638, 0.12965303, 0.13463969, 0.13962634, 0.144613, 0.14959965, 0.15458631, 0.15957296, 0.16455962, 0.16954627, 0.17453293]), angle_resolution=1, max_distance=20, noise_mu=0, noise_sigma=0.078))[source]

Lidar point cloud observations.

sensor_params: SensorParams = SensorParams(start_angle=0, end_angle=6.283185307179586, laser_angles=array([-0.06981317, -0.06482652, -0.05983986, -0.05485321, -0.04986655, -0.0448799, -0.03989324, -0.03490659, -0.02991993, -0.02493328, -0.01994662, -0.01495997, -0.00997331, -0.00498666, 0., 0.00498666, 0.00997331, 0.01495997, 0.01994662, 0.02493328, 0.02991993, 0.03490659, 0.03989324, 0.0448799, 0.04986655, 0.05485321, 0.05983986, 0.06482652, 0.06981317, 0.07479983, 0.07978648, 0.08477314, 0.08975979, 0.09474645, 0.0997331, 0.10471976, 0.10970641, 0.11469307, 0.11967972, 0.12466638, 0.12965303, 0.13463969, 0.13962634, 0.144613, 0.14959965, 0.15458631, 0.15957296, 0.16455962, 0.16954627, 0.17453293]), angle_resolution=1, max_distance=20, noise_mu=0, noise_sigma=0.078)

class smarts.core.agent_interface.NeighborhoodVehicles(radius: float | None = None)[source]

Detection of nearby vehicles and configuration for filtering of the vehicles.

radius: float | None = None

The distance within which neighborhood vehicles are detected. None means vehicles will be detected within an unlimited distance.
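The radius acts as a plain distance cutoff around the ego position, with None disabling the cutoff. A minimal sketch of that filter (the `nearby_vehicles` helper is hypothetical, not SMARTS’s internal implementation):

```python
import math
from typing import Dict, Optional, Tuple

def nearby_vehicles(
    ego: Tuple[float, float],
    vehicles: Dict[str, Tuple[float, float]],
    radius: Optional[float] = None,
) -> Dict[str, Tuple[float, float]]:
    """Vehicles within `radius` meters of the ego; all of them if radius is None."""
    if radius is None:
        return dict(vehicles)
    return {
        vid: pos
        for vid, pos in vehicles.items()
        if math.dist(ego, pos) <= radius
    }

others = {"a": (5.0, 0.0), "b": (0.0, 30.0)}
# Only "a" is within 10m of the origin; radius=None returns everything.
near = nearby_vehicles((0.0, 0.0), others, radius=10.0)
```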

class smarts.core.agent_interface.OGM(width: int = 256, height: int = 256, resolution: float = 0.1953125)[source]

The width and height are in “pixels” and the resolution is the “size of a pixel”. E.g. if you wanted 100m x 100m OGM but a 64x64 image representation you would do OGM(width=64, height=64, resolution=100/64)

height: int = 256

resolution: float = 0.1953125

width: int = 256

class smarts.core.agent_interface.OcclusionMap(width: int = 256, height: int = 256, resolution: float = 0.1953125, surface_noise: bool = True)[source]

The width and height are in “pixels” and the resolution is the “size of a pixel”. E.g. if you wanted 100m x 100m OcclusionMap but a 64x64 image representation you would do OcclusionMap(width=64, height=64, resolution=100/64)

height: int = 256

resolution: float = 0.1953125

surface_noise: bool = True

width: int = 256

class smarts.core.agent_interface.RGB(width: int = 256, height: int = 256, resolution: float = 0.1953125)[source]

The width and height are in “pixels” and the resolution is the “size of a pixel”. E.g. if you wanted 100m x 100m RGB but a 256x256 image representation you would do RGB(width=256, height=256, resolution=100/256)

height: int = 256

resolution: float = 0.1953125

width: int = 256

class smarts.core.agent_interface.RenderDependencyBase[source]

Base class for render dependencies

abstract property name: str

The name of this dependency.

class smarts.core.agent_interface.RoadWaypoints(horizon: int = 20)[source]

RoadWaypoints give you waypoints along all lanes of the road an agent is currently on (including oncoming lanes).

RoadWaypoint observations will be returned in a mapping from lane id to the waypoints along that lane.

horizon: int = 20

The distance in meters to include waypoints for (both behind and in front of the agent).

class smarts.core.agent_interface.Signals(lookahead: float = 100.0)[source]

Reporting of traffic signals (lights) state in the lanes ahead.

lookahead: float = 100.0

The distance in meters to look ahead of the vehicle’s current position.

class smarts.core.agent_interface.Waypoints(lookahead: int = 32)[source]

The number of waypoints we want to look ahead by. The distance this corresponds to depends on the configured waypoint spacing; the project default for that is one meter.

lookahead: int = 32

The number of waypoints to look ahead by.
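With waypoint spacing in meters, the lookahead count translates directly into distance ahead. An illustration of that relationship (the `lookahead_meters` helper is hypothetical; one meter is only the project default spacing, which is configurable):

```python
def lookahead_meters(lookahead: int, waypoint_spacing_m: float = 1.0) -> float:
    """Approximate distance covered by `lookahead` waypoints."""
    return lookahead * waypoint_spacing_m

# The default of 32 waypoints at 1m spacing looks roughly 32m ahead.
assert lookahead_meters(32) == 32.0
# Halving the spacing halves the distance for the same waypoint count.
assert lookahead_meters(32, waypoint_spacing_m=0.5) == 16.0
```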