smarts.core.agent_interface module

class smarts.core.agent_interface.Accelerometer[source]

Configuration for acceleration and jerk observations.
class smarts.core.agent_interface.AgentBehavior(aggressiveness: int = 0)[source]

Agent behavior configuration.

aggressiveness: int = 0

The aggressiveness affects the trigger timing of a mission task's waypoints.

class smarts.core.agent_interface.AgentInterface(debug: bool = False, done_criteria: smarts.core.agent_interface.DoneCriteria = <factory>, max_episode_steps: Optional[int] = None, neighborhood_vehicles: Union[smarts.core.agent_interface.NeighborhoodVehicles, bool] = False, waypoints: Union[smarts.core.agent_interface.Waypoints, bool] = False, road_waypoints: Union[smarts.core.agent_interface.RoadWaypoints, bool] = False, drivable_area_grid_map: Union[smarts.core.agent_interface.DrivableAreaGridMap, bool] = False, ogm: Union[smarts.core.agent_interface.OGM, bool] = False, rgb: Union[smarts.core.agent_interface.RGB, bool] = False, lidar: Union[smarts.core.agent_interface.Lidar, bool] = False, action: Optional[smarts.core.controllers.ActionSpaceType] = None, vehicle_type: str = 'sedan', accelerometer: Union[smarts.core.agent_interface.Accelerometer, bool] = True, agent_behavior: Optional[smarts.core.agent_interface.AgentBehavior] = None)[source]

Configure the interface between an Agent and the Environment. Choose the action space and sensors to enable.

accelerometer: Union[smarts.core.agent_interface.Accelerometer, bool] = True

Enable acceleration and jerk observations.

action: Optional[smarts.core.controllers.ActionSpaceType] = None

The choice of action space; this choice also determines which controller will be enabled.

property action_space
agent_behavior: Optional[smarts.core.agent_interface.AgentBehavior] = None

The behavior configuration of the agent.

debug: bool = False

Enable debug information for the various sensors and action spaces.

done_criteria: smarts.core.agent_interface.DoneCriteria

Configurable criteria of when to mark this actor as done. Done actors will be removed from the environment and may trigger the episode to be done.

drivable_area_grid_map: Union[smarts.core.agent_interface.DrivableAreaGridMap, bool] = False

Enable the DrivableAreaGridMap sensor: a grid is provided where each cell signals whether the corresponding area of the map is a drivable surface.

static from_type(requested_type: smarts.core.agent_interface.AgentType, **kwargs)[source]

Instantiates an AgentInterface from a selection of premade presets.

Parameters
  • requested_type – Select a premade AgentInterface from an AgentType

  • max_episode_steps – The total number of steps this interface will observe before expiring
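A minimal usage sketch (assuming the SMARTS package is installed; the step count here is arbitrary):

```python
from smarts.core.agent_interface import AgentInterface, AgentType

# Build a premade interface from the Laner preset, expiring after 1000 steps.
interface = AgentInterface.from_type(AgentType.Laner, max_episode_steps=1000)
```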

lidar: Union[smarts.core.agent_interface.Lidar, bool] = False

Enable the LIDAR point cloud sensor.

max_episode_steps: Optional[int] = None

If set, agents will become “done” after this many steps. Set to None to disable.

neighborhood_vehicles: Union[smarts.core.agent_interface.NeighborhoodVehicles, bool] = False

Enable the Neighborhood Vehicle States sensor: the states of vehicles around the ego vehicle will be provided.

ogm: Union[smarts.core.agent_interface.OGM, bool] = False

Enable the OGM (Occupancy Grid Map) sensor: a grid is provided where each cell signals whether that area in space is occupied.

replace(**kwargs)[source]

Clone this AgentInterface with the given fields updated.

>>> interface = AgentInterface(action=ActionSpaceType.Continuous).replace(waypoints=True)
>>> interface.waypoints
Waypoints(...)

rgb: Union[smarts.core.agent_interface.RGB, bool] = False

Enable the RGB camera sensor: a top-down color image is provided.

road_waypoints: Union[smarts.core.agent_interface.RoadWaypoints, bool] = False

Enable the Road Waypoints sensor: waypoints along all lanes (oncoming included) of the road the vehicle is currently on will be provided, even if these waypoints do not match the current mission.

vehicle_type: str = 'sedan'

The choice of vehicle type.

waypoints: Union[smarts.core.agent_interface.Waypoints, bool] = False

Enable the Waypoint Paths sensor: a list of valid waypoint paths along the current mission is provided.
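Putting the fields above together, a custom interface might look like the following sketch (assuming SMARTS is installed; the specific values are illustrative):

```python
from smarts.core.agent_interface import (
    AgentInterface,
    NeighborhoodVehicles,
    Waypoints,
)
from smarts.core.controllers import ActionSpaceType

interface = AgentInterface(
    max_episode_steps=1000,
    # Pass a config object for fine control, or True for sensor defaults.
    waypoints=Waypoints(lookahead=50),
    neighborhood_vehicles=NeighborhoodVehicles(radius=30.0),
    action=ActionSpaceType.Continuous,
)
```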

class smarts.core.agent_interface.AgentType(value)[source]

An enumeration of premade AgentInterface presets.

Boid = 9

Controls multiple vehicles.

Buddha = 0

Agent sees nothing and does nothing.

Full = 1

All observations and continuous action space.

Laner = 3

Agent sees waypoints and performs lane actions.

LanerWithSpeed = 7

Agent sees waypoints and performs speed and lane actions.

Loner = 4

Agent sees waypoints and performs continuous actions.

MPCTracker = 10

Agent performs trajectory tracking using model predictive control.

Standard = 2

Minimal observations for dealing with waypoints and other vehicles and continuous action space.

StandardWithAbsoluteSteering = 6

Agent sees waypoints and neighbor vehicles, and performs continuous actions.

Tagger = 5

Agent sees waypoints and other vehicles, and performs continuous actions.

Tracker = 8

Agent sees waypoints and performs a target-position action.

class smarts.core.agent_interface.DoneCriteria(collision: bool = True, off_road: bool = True, off_route: bool = True, on_shoulder: bool = False, wrong_way: bool = False, not_moving: bool = False)[source]

Toggleable conditions on which to trigger episode end.

collision: bool = True

End the episode when the agent collides with another vehicle.

not_moving: bool = False

End the episode when the agent is not moving for 60 seconds or more. To account for controller noise, “not moving” means <= 1 meter of displacement within 60 seconds.

off_road: bool = True

End the episode when the agent drives off the road.

off_route: bool = True

End the episode when the agent drives off the specified mission route.

on_shoulder: bool = False

End the episode when the agent drives on the road shoulder.

wrong_way: bool = False

End the episode when the agent drives in the wrong direction, even though it may be driving on the mission route.
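The criteria above can be tightened or relaxed per agent, for example (a sketch assuming SMARTS is installed):

```python
from smarts.core.agent_interface import AgentInterface, DoneCriteria

interface = AgentInterface(
    done_criteria=DoneCriteria(
        collision=True,
        off_road=True,
        on_shoulder=True,  # also end the episode on the road shoulder
        not_moving=True,   # end after <= 1 m of displacement within 60 s
    ),
)
```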

class smarts.core.agent_interface.DrivableAreaGridMap(width: int = 256, height: int = 256, resolution: float = 0.1953125)[source]

The width and height are in “pixels” and the resolution is the “size of a pixel” in meters. E.g., if you wanted a 100m x 100m DrivableAreaGridMap but a 64x64 image representation, you would use DrivableAreaGridMap(width=64, height=64, resolution=100/64).

height: int = 256
resolution: float = 0.1953125
width: int = 256
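Since the resolution is the size of a pixel in meters, a grid or image sensor’s real-world coverage is width x resolution by height x resolution. The helpers below are illustrative, not part of SMARTS:

```python
def grid_coverage_m(width: int, height: int, resolution: float) -> tuple:
    """Real-world extent, in meters, covered by a grid/image sensor."""
    return (width * resolution, height * resolution)

def resolution_for(coverage_m: float, pixels: int) -> float:
    """Pixel size (meters) so that `pixels` cells span `coverage_m` meters."""
    return coverage_m / pixels
```

With the defaults, 256 pixels at 0.1953125 m/pixel cover exactly 50 m on a side; a 64x64 map spanning 100 m needs a resolution of 100/64 = 1.5625.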
class smarts.core.agent_interface.Lidar(sensor_params: smarts.core.lidar_sensor_params.SensorParams = SensorParams(start_angle=0, end_angle=6.283185307179586, laser_angles=array([-0.06981317, -0.06482652, -0.05983986, -0.05485321, -0.04986655, -0.0448799, -0.03989324, -0.03490659, -0.02991993, -0.02493328, -0.01994662, -0.01495997, -0.00997331, -0.00498666, 0.0, 0.00498666, 0.00997331, 0.01495997, 0.01994662, 0.02493328, 0.02991993, 0.03490659, 0.03989324, 0.0448799, 0.04986655, 0.05485321, 0.05983986, 0.06482652, 0.06981317, 0.07479983, 0.07978648, 0.08477314, 0.08975979, 0.09474645, 0.0997331, 0.10471976, 0.10970641, 0.11469307, 0.11967972, 0.12466638, 0.12965303, 0.13463969, 0.13962634, 0.144613, 0.14959965, 0.15458631, 0.15957296, 0.16455962, 0.16954627, 0.17453293]), angle_resolution=1, max_distance=20, noise_mu=0, noise_sigma=0.078))[source]

Lidar point cloud observations.

sensor_params: smarts.core.lidar_sensor_params.SensorParams = SensorParams(start_angle=0, end_angle=6.283185307179586, laser_angles=array([-0.06981317, -0.06482652, -0.05983986, -0.05485321, -0.04986655, -0.0448799, -0.03989324, -0.03490659, -0.02991993, -0.02493328, -0.01994662, -0.01495997, -0.00997331, -0.00498666, 0.0, 0.00498666, 0.00997331, 0.01495997, 0.01994662, 0.02493328, 0.02991993, 0.03490659, 0.03989324, 0.0448799, 0.04986655, 0.05485321, 0.05983986, 0.06482652, 0.06981317, 0.07479983, 0.07978648, 0.08477314, 0.08975979, 0.09474645, 0.0997331, 0.10471976, 0.10970641, 0.11469307, 0.11967972, 0.12466638, 0.12965303, 0.13463969, 0.13962634, 0.144613, 0.14959965, 0.15458631, 0.15957296, 0.16455962, 0.16954627, 0.17453293]), angle_resolution=1, max_distance=20, noise_mu=0, noise_sigma=0.078)
class smarts.core.agent_interface.NeighborhoodVehicles(radius: float = None)[source]

Detection of nearby vehicles.

radius: float = None

The radius, in meters, around the ego vehicle within which neighborhood vehicles are detected; None means no distance limit.
class smarts.core.agent_interface.OGM(width: int = 256, height: int = 256, resolution: float = 0.1953125)[source]

The width and height are in “pixels” and the resolution is the “size of a pixel” in meters. E.g., if you wanted a 100m x 100m OGM but a 64x64 image representation, you would use OGM(width=64, height=64, resolution=100/64).

height: int = 256
resolution: float = 0.1953125
width: int = 256
class smarts.core.agent_interface.RGB(width: int = 256, height: int = 256, resolution: float = 0.1953125)[source]

The width and height are in “pixels” and the resolution is the “size of a pixel” in meters. E.g., if you wanted a 100m x 100m RGB image with a 256x256 representation, you would use RGB(width=256, height=256, resolution=100/256).

height: int = 256
resolution: float = 0.1953125
width: int = 256
class smarts.core.agent_interface.RoadWaypoints(horizon: int = 20)[source]

RoadWaypoints give you waypoints along all lanes of the road an agent is currently on (including oncoming lanes).

RoadWaypoint observations will be returned in a mapping from lane id to the waypoints along that lane.

horizon: int = 20
class smarts.core.agent_interface.Waypoints(lookahead: int = 32)[source]

The number of waypoints to look ahead. The distance this covers depends on the configured waypoint spacing; the project default spacing is one meter.

lookahead: int = 32
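With the default one-meter waypoint spacing, the lookahead count maps directly to distance. A small illustrative helper (not part of SMARTS):

```python
def lookahead_distance_m(lookahead: int, spacing_m: float = 1.0) -> float:
    """Approximate distance covered by `lookahead` waypoints at `spacing_m` spacing."""
    return lookahead * spacing_m
```

For example, the default lookahead of 32 covers roughly 32 m ahead of the vehicle.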