Driving manually in CARLA, and adding a stereo camera and a lidar

The general workflow of CARLA:

First, create a client.

Set a timeout so that the connection is aborted if the server returns nothing within 2 seconds.

```python
client = carla.Client("127.0.0.1", 2000)
client.set_timeout(2.0)
```

Get the world from the client:

```python
world = client.get_world()
```

Get the world settings, modify them, and apply them.

Here we use asynchronous mode with a fixed time step.

```python
settings = world.get_settings()
settings.synchronous_mode = False    # asynchronous mode
delta = 0.05
settings.fixed_delta_seconds = delta  # fixed simulation time step, in seconds
world.apply_settings(settings)
```

Set up the traffic manager:

```python
traffic_manager = client.get_trafficmanager(8000)
traffic_manager.set_synchronous_mode(False)
traffic_manager.set_random_device_seed(0)
random.seed(0)
```

Create a spectator.

The spectator is the viewpoint we observe from. In the main loop below we make it follow the vehicle, showing a top-down view of the car.

```python
spectator = world.get_spectator()
```

Get all valid spawn points from the map.

spawn_points contains every valid spawn location; when spawning the vehicle later, one of them must be chosen.

```python
spawn_points = world.get_map().get_spawn_points()
```

The logic for adding anything (vehicles, cameras, lidars, and so on) in CARLA is always the same. Taking a vehicle as the example: first get the blueprint library from the world, filter the vehicle's blueprint out of the library, then combine that blueprint with a spawn point to spawn the vehicle.

```python
blueprint_library = world.get_blueprint_library()
vehicle_bp = blueprint_library.filter("model3")[0]
vehicle = world.try_spawn_actor(vehicle_bp, spawn_points[0])
```

blueprint_library is the library of all blueprints, vehicle_bp is the vehicle's blueprint, and combining it with a spawn point produces the vehicle instance vehicle.

A camera follows the same logic:

```python
camera_init_trans = carla.Transform(carla.Location(x=1, y=-1, z=2), carla.Rotation(pitch=0))
camera_bp = blueprint_library.find('sensor.camera.rgb')
camera_bp.set_attribute("image_size_x", str(1280))
camera_bp.set_attribute("image_size_y", str(720))
camera_bp.set_attribute("fov", str(105))
camera = world.spawn_actor(camera_bp, camera_init_trans, attach_to=vehicle)
```

The camera is attached to the vehicle, and carla.Location is expressed in the vehicle-fixed frame: the origin sits at the height of the plane where the wheels touch the ground, at the center of the vehicle when seen from above. The x axis points toward the front of the car, the y axis toward the starboard (right) side, and the z axis toward the sky; units are meters.

Attach a listen callback to the camera:

```python
# Get camera dimensions
image_w = camera_bp.get_attribute("image_size_x").as_int()
image_h = camera_bp.get_attribute("image_size_y").as_int()

# Instantiate objects for rendering and vehicle control
renderObject = RenderObject(image_w, image_h)
controlObject = ControlObject(vehicle)

camera.listen(lambda image: pygame_callback1(image, renderObject))
```
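The callback pygame_callback1 (defined in the full code below) reshapes the camera's raw BGRA bytes into an RGB image. Stripped of CARLA and PyGame, the conversion can be sketched standalone (bgra_bytes_to_rgb is a name made up for this illustration):

```python
import numpy as np

def bgra_bytes_to_rgb(raw, height, width):
    """Convert a raw BGRA byte buffer (CARLA's camera format) into an RGB array."""
    img = np.frombuffer(raw, dtype=np.uint8).reshape((height, width, 4))
    img = img[:, :, :3]        # drop the alpha channel -> BGR
    return img[:, :, ::-1]     # reverse the channel order -> RGB

# A single BGRA pixel (B=255, G=0, R=0, A=255) is pure blue,
# so in RGB order it should come out as (0, 0, 255)
raw = bytes([255, 0, 0, 255])
print(bgra_bytes_to_rgb(raw, 1, 1)[0, 0])
```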

CARLA has no built-in stereo camera; to get the effect of one, you add two RGB cameras yourself.

How should the focal length of a camera in CARLA be understood?

When defining a camera there is no focal-length attribute. You first define the window size, i.e. how many pixels the sensor has horizontally and vertically, and then the field of view fov, in degrees. The focal length f is derived from these two numbers, and if you want to recover depth from the stereo pair, f has to be computed this way. Imagine a horizontal bar on the camera's normalized plane, i.e. at z = 1 m in front of the camera (this z is the SLAM camera frame, not the CARLA frame), and ask how long the bar must be to exactly fill the sensor horizontally. Solving the similar triangles given by the fov yields the intrinsic f as defined in SLAM: f = (W / 2) / tan(fov / 2), where W is the image width in pixels. Half the window size gives the SLAM intrinsic c, the principal point.
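These relations can be written out in plain Python. The numbers below (1280×720, fov = 105°, and a 2 m baseline from mounting the two cameras at y = -1 and y = +1) match the script in this post; the disparity value is only a made-up illustration:

```python
import math

def camera_intrinsics(width, height, fov_deg):
    """Pinhole intrinsics implied by the image size and horizontal fov."""
    f = (width / 2.0) / math.tan(math.radians(fov_deg) / 2.0)  # focal length, pixels
    cx, cy = width / 2.0, height / 2.0                         # principal point
    return f, cx, cy

f, cx, cy = camera_intrinsics(1280, 720, 105)
print(f, cx, cy)          # f is roughly 491 pixels; cx = 640, cy = 360

# Stereo depth: with baseline B (meters) and disparity d (pixels), Z = f * B / d
B = 2.0                   # cameras at y = -1 and y = +1 are 2 m apart
d = 25.0                  # hypothetical disparity of a matched pixel pair
Z = f * B / d             # depth of that point, in meters
print(Z)
```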

Keyboard controller:

See the ControlObject class in the full code.

In each iteration of the loop:

```python
world.tick()  # fetch the server result (the CARLA docs recommend world.wait_for_tick() in asynchronous mode)
# Make the spectator follow the vehicle from above
transform = vehicle.get_transform()  # we get the transform of vehicle
spectator.set_transform(carla.Transform(transform.location + carla.Location(z=50),
                                        carla.Rotation(pitch=-90)))

for event in pygame.event.get():  # read keyboard events
    controlObject.parse_control(event)
```

Full code:

```python
import carla
import random
import time
import sys
import pygame
import datetime
import numpy as np
import open3d as o3d
from matplotlib import cm


VIDIDIS = np.array(cm.get_cmap("plasma").colors)
VID_RANGE = np.linspace(0.0, 1.0, VIDIDIS.shape[0])

# Render object to keep and pass the PyGame surface
class RenderObject(object):
    def __init__(self, width, height):
        init_image = np.random.randint(0,255,(height, width, 3),dtype='uint8')
        self.surface = pygame.surfarray.make_surface(init_image.swapaxes(0,1))


# Control object to manage vehicle controls
class ControlObject(object):
    def __init__(self, veh):

        # Control parameters to store the control state
        self._vehicle = veh
        self._throttle = False
        self._brake = False
        self._steer = None
        self._steer_cache = 0
        # A carla.VehicleControl object is needed to alter the
        # vehicle's control state
        self._control = carla.VehicleControl()

    # Check for key press events in the PyGame window
    # and define the control state
    def parse_control(self, event):
        if event.type == pygame.KEYDOWN:
            if event.key == pygame.K_RETURN:
                self._vehicle.set_autopilot(True)
            if event.key == pygame.K_UP:
                self._throttle = True
            if event.key == pygame.K_DOWN:
                self._brake = True
            if event.key == pygame.K_RIGHT:
                self._steer = 1
            if event.key == pygame.K_LEFT:
                self._steer = -1
        if event.type == pygame.KEYUP:
            if event.key == pygame.K_UP:
                self._throttle = False
            if event.key == pygame.K_DOWN:
                self._brake = False
                self._control.reverse = False
            if event.key == pygame.K_RIGHT:
                self._steer = None
            if event.key == pygame.K_LEFT:
                self._steer = None

    # Process the current control state, change the control parameter
    # if the key remains pressed
    def process_control(self):
        if self._throttle:
            self._control.throttle = min(self._control.throttle + 0.01, 1)
            self._control.gear = 1
            self._control.brake = False
        elif not self._brake:
            self._control.throttle = 0.0

        if self._brake:
            # If the down arrow is held down when the car is stationary, switch to reverse
            if self._vehicle.get_velocity().length() < 0.01 and not self._control.reverse:
                self._control.brake = 0.0
                self._control.gear = 1
                self._control.reverse = True
                self._control.throttle = min(self._control.throttle + 0.1, 1)
            elif self._control.reverse:
                self._control.throttle = min(self._control.throttle + 0.1, 1)
            else:
                self._control.throttle = 0.0
                self._control.brake = min(self._control.brake + 0.3, 1)
        else:
            self._control.brake = 0.0

        if self._steer is not None:
            if self._steer == 1:
                self._steer_cache += 0.03
            if self._steer == -1:
                self._steer_cache -= 0.03
            self._steer_cache = min(0.7, max(-0.7, self._steer_cache))  # clamp steering to [-0.7, 0.7]
            self._control.steer = round(self._steer_cache,1)
        else:
            if self._steer_cache > 0.0:
                self._steer_cache *= 0.2
            if self._steer_cache < 0.0:
                self._steer_cache *= 0.2
            if 0.01 > self._steer_cache > -0.01:
                self._steer_cache = 0.0
            self._control.steer = round(self._steer_cache,1)

        # Apply the control parameters to the ego vehicle
        self._vehicle.apply_control(self._control)


def generate_lidar_bp(blueprint_library, delta):
    """
    To get lidar bp
    :param blueprint_library: the world blueprint_library
    :param delta: update rate(s)
    :return: lidar bp
    """
    lidar_bp = blueprint_library.find("sensor.lidar.ray_cast")
    lidar_bp.set_attribute("dropoff_general_rate", "0.0")
    lidar_bp.set_attribute("dropoff_intensity_limit", "1.0")
    lidar_bp.set_attribute("dropoff_zero_intensity", "0.0")

    lidar_bp.set_attribute("upper_fov", str(15.0))
    lidar_bp.set_attribute("lower_fov", str(-25.0))
    lidar_bp.set_attribute("channels", str(64))
    lidar_bp.set_attribute("range", str(100.0))
    lidar_bp.set_attribute("rotation_frequency", str(1.0 / delta))
    lidar_bp.set_attribute("points_per_second", str(500000))

    return lidar_bp


def lidar_callback(point_cloud, point_list):
    # We need to convert point cloud(carla-format) into numpy.ndarray
    data = np.copy(np.frombuffer(point_cloud.raw_data, dtype = np.dtype("f4")))
    data = np.reshape(data, (int(data.shape[0] / 4), 4))

    intensity = data[:, -1]
    # Map intensity to [0, 1] for coloring; note log(exp(-0.004 * 100)) = -0.4
    intensity_col = 1.0 - np.log(intensity) / np.log(np.exp(-0.004 * 100))
    int_color = np.c_[
        np.interp(intensity_col, VID_RANGE, VIDIDIS[:, 0]),
        np.interp(intensity_col, VID_RANGE, VIDIDIS[:, 1]),
        np.interp(intensity_col, VID_RANGE, VIDIDIS[:, 2])]

    points = data[:, :-1] # we only use x, y, z coordinates
    points[:, 1] = -points[:, 1] # This is different from official script
    point_list.points = o3d.utility.Vector3dVector(points)
    point_list.colors = o3d.utility.Vector3dVector(int_color)


# Camera sensor callback, reshapes raw data from camera into 2D RGB and applies to PyGame surface
def pygame_callback1(data, obj):
    img = np.reshape(np.copy(data.raw_data), (data.height, data.width, 4))
    img = img[:, :, :3]
    img = img[:, :, ::-1]
    # msg = bridge.cv2_to_imgmsg(img, encoding="bgr8")

    obj.surface = pygame.surfarray.make_surface(img.swapaxes(0, 1))
    # pub.publish(msg)

    print("taking photos1")


def pygame_callback2(data, obj):
    img = np.reshape(np.copy(data.raw_data), (data.height, data.width, 4))
    img = img[:, :, :3]
    img = img[:, :, ::-1]
    # msg = bridge.cv2_to_imgmsg(img, encoding="bgr8")

    # pub2.publish(msg)

    print("taking photos2")


client = carla.Client("127.0.0.1", 2000)
client.set_timeout(2.0)
world = client.get_world()

settings = world.get_settings()
settings.synchronous_mode = False    # asynchronous mode
delta = 0.05
settings.fixed_delta_seconds = delta  # fixed simulation time step, in seconds
world.apply_settings(settings)

traffic_manager = client.get_trafficmanager(8000)
traffic_manager.set_synchronous_mode(False)
traffic_manager.set_random_device_seed(0)
random.seed(0)

# We will set up the spectator so we can see what we do
spectator = world.get_spectator()
# Retrieve the map's spawn points
spawn_points = world.get_map().get_spawn_points()

blueprint_library = world.get_blueprint_library()
vehicle_bp = blueprint_library.filter("model3")[0]
vehicle = world.try_spawn_actor(vehicle_bp, spawn_points[0])
vehicle.set_autopilot(False)

# Initialise the camera floating behind the vehicle
camera_init_trans = carla.Transform(carla.Location(x=1, y=-1, z=2), carla.Rotation(pitch=0))
camera_bp = blueprint_library.find('sensor.camera.rgb')
camera_bp.set_attribute("image_size_x", str(1280))
camera_bp.set_attribute("image_size_y", str(720))
camera_bp.set_attribute("fov", str(105))
camera = world.spawn_actor(camera_bp, camera_init_trans, attach_to=vehicle)
# Start camera with PyGame callback

camera2_init_trans = carla.Transform(carla.Location(x=1, y=1, z=2), carla.Rotation(pitch=0))
camera2_bp = blueprint_library.find('sensor.camera.rgb')
camera2_bp.set_attribute("image_size_x", str(1280))
camera2_bp.set_attribute("image_size_y", str(720))
camera2_bp.set_attribute("fov", str(105))
camera2 = world.spawn_actor(camera2_bp, camera2_init_trans, attach_to=vehicle)

# Get camera dimensions
image_w = camera_bp.get_attribute("image_size_x").as_int()
image_h = camera_bp.get_attribute("image_size_y").as_int()

# Instantiate objects for rendering and vehicle control
renderObject = RenderObject(image_w, image_h)
controlObject = ControlObject(vehicle)

camera.listen(lambda image: pygame_callback1(image, renderObject))
camera2.listen(lambda image: pygame_callback2(image, renderObject))

lidar_bp = generate_lidar_bp(blueprint_library, delta)
lidar_transform = carla.Transform(carla.Location(x=-0.5, z=1.8))
lidar = world.spawn_actor(lidar_bp, lidar_transform, attach_to=vehicle)

point_list = o3d.geometry.PointCloud()

lidar.listen(lambda data: lidar_callback(data, point_list))

vis = o3d.visualization.Visualizer()
vis.create_window(
    window_name= "Display Point Cloud",
    width= 960,
    height= 540,
    left= 480,
    top= 270)

vis.get_render_option().background_color = [0.05, 0.05, 0.05]
vis.get_render_option().point_size = 1
vis.get_render_option().show_coordinate_frame = True

frame = 0
dt0 = datetime.datetime.now()

# Initialise the display
pygame.init()
gameDisplay = pygame.display.set_mode((image_w,image_h), pygame.HWSURFACE | pygame.DOUBLEBUF)
# Draw black to the display
gameDisplay.fill((0,0,0))
gameDisplay.blit(renderObject.surface, (0,0))
pygame.display.flip()

crashed = False
while not crashed:
    # if frame == 2:
    #     vis.add_geometry(point_list)
    #
    # vis.update_geometry(point_list)
    # vis.poll_events()
    # vis.update_renderer()
    # time.sleep(0.005)

    world.tick()  # the CARLA docs recommend world.wait_for_tick() in asynchronous mode

    transform = vehicle.get_transform()  # we get the transform of vehicle
    spectator.set_transform(carla.Transform(transform.location + carla.Location(z=50),
                                                    carla.Rotation(pitch=-90)))

    # Update the display
    gameDisplay.blit(renderObject.surface, (0, 0))
    pygame.display.flip()

    controlObject.process_control()
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            crashed = True

        controlObject.parse_control(event)

    process_time = datetime.datetime.now() - dt0
    sys.stdout.write("\r" + "FPS: " + str(1.0 / process_time.total_seconds()) + " Current frame: " + str(frame))
    sys.stdout.flush()
    dt0 = datetime.datetime.now()

    frame += 1

# Stop the sensors and quit PyGame after exiting the game loop
camera.stop()
camera2.stop()
lidar.stop()
pygame.quit()
```

There is a bug here: my code contains both the lidar point-cloud visualization and the camera image visualization, but the two cannot coexist. When the lidar point cloud updates, the camera image stops updating, and when the camera updates, the lidar does not. I don't know why; if anyone does, please leave a comment.
