Converting a RealSense D455 recorded bag to a TUM-format dataset

Published: 2025-03-05
This article references:
https://blog.csdn.net/m0_60355964/article/details/129518283
https://blog.csdn.net/neptune4751/article/details/137183817

1. Recording the video

Open Intel RealSense Viewer.

Set the image resolution of both the Depth Stream and the Color Stream to 640 × 480.

Set the capture frame rate to 30 fps.

Click the Record button in the upper-left corner to start recording; once recording has started, click the Stop button in the same place to stop recording and save the result.

If clicking the Record button produces an error, change the save path:

Click the gear icon in the upper-right corner, select Settings, change the storage path, then click Apply and OK.

After recording ends, a .bag file is generated under the chosen storage path.
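If you prefer to skip the Viewer, recording can also be scripted. Below is a minimal pyrealsense2 sketch (assuming pyrealsense2 is installed and a D455 is connected; record.bag is just an example file name) that records the same 640 × 480, 30 fps depth and color streams to a bag. Note that the topic names inside an SDK/Viewer-recorded bag may differ from the /camera/... names shown later, so always check them with rosbag info as described in the next section.

# Minimal pyrealsense2 recording sketch (assumption: pyrealsense2 installed, D455 attached).
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)   # depth: 640x480 @ 30 fps
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)  # color: 640x480 @ 30 fps
config.enable_record_to_file("record.bag")  # example output file name

pipeline.start(config)
try:
    # Pull roughly 10 s of frames (300 frames at 30 fps); the recorder writes them to the bag.
    for _ in range(300):
        pipeline.wait_for_frames()
finally:
    pipeline.stop()  # stops streaming and closes the bag file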

2. Extracting the RGB and depth images and their timestamps

Step 1: open a terminal in the catkin_ws/src folder and clone the project:

git clone https://github.com/kinglintianxia/bag2tum.git

Step 2: create an image folder in the location of the bag file, then create depth and rgb subfolders inside the image folder.
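A minimal Python sketch of this layout (run it from the folder that contains the bag file; the paths are just the ones described above):

# Create the folder layout bag2tum expects: image/ containing rgb/ and depth/.
import os

for sub in ("rgb", "depth"):
    os.makedirs(os.path.join("image", sub), exist_ok=True)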

Modify the save_folder, rgb_topic, and depth_topic parameters in the bag2tum.launch file:

 <param name="save_folder" value="/home/nv/zoe/bag2tum/image" />
 <param name="rgb_topic" value="/camera/color/image_raw" />
 <param name="depth_topic" value="/camera/aligned_depth_to_color/image_raw" />

Note: obtain the rgb_topic and depth_topic names by inspecting the topics stored in the bag:

rosbag info xxx.bag
[Figure: rosbag info output listing the recorded color and depth image topics]
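If the ROS Python environment is sourced, the same information can also be read with the rosbag API (a small sketch; xxx.bag stands for your own recording):

# List every topic in the bag with its message type and count, so the correct
# rgb/depth image topics can be copied into bag2tum.launch.
import rosbag

with rosbag.Bag("xxx.bag") as bag:
    for topic, info in bag.get_type_and_topic_info().topics.items():
        print(topic, info.msg_type, info.message_count)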
Step 3: create a build folder under bag2tum and, inside it, run:

  1. cmake ..

  2. make

Step 4: start the launch file:

  1. source devel/setup.bash

  2. roslaunch bag2tum bag2tum.launch

Step 5: play the bag file:

rosbag play XX.bag

The depth images, the RGB images, and the corresponding timestamp files (rgb.txt and depth.txt) are then generated automatically under the image folder.
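As an optional quick check (a sketch to run inside the image folder once the bag has finished playing, assuming the images are saved as .png as in the standard TUM layout), compare the number of extracted images with the number of entries in the timestamp files:

# Count the extracted PNGs and the non-comment lines in rgb.txt / depth.txt.
import os

for name in ("rgb", "depth"):
    pngs = [f for f in os.listdir(name) if f.endswith(".png")]
    with open(name + ".txt") as f:
        entries = [l for l in f if l.strip() and not l.startswith("#")]
    print("%s: %d images, %d entries in %s.txt" % (name, len(pngs), len(entries), name))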

Step 6: align the timestamps

Because the timestamps of the depth and color images are not strictly aligned one-to-one and differ by a small offset, the depth and color images must be paired according to the closest-timestamp principle. Save the associate.py script into the image folder.

The associate.py script:

"""
The RealSense provides the color and depth images in an un-synchronized way. This means that the set of time stamps from the color images do not intersect with those of the depth images. Therefore, we need some way of associating color images to depth images.

For this purpose, you can use the ''associate.py'' script. It reads the time stamps from the rgb.txt file and the depth.txt file, and joins them by finding the best matches.
"""

import argparse
import sys
import os
import numpy


def read_file_list(filename):
    """
    Reads a trajectory from a text file.

    File format:
    The file format is "stamp d1 d2 d3 ...", where stamp denotes the time stamp (to be matched)
    and "d1 d2 d3.." is arbitary data (e.g., a 3D position and 3D orientation) associated to this timestamp.

    Input:
    filename -- File name

    Output:
    dict -- dictionary of (stamp,data) tuples

    """
    file = open(filename)
    data = file.read()
    lines = data.replace(",", " ").replace("\t", " ").split("\n")
    list = [[v.strip() for v in line.split(" ") if v.strip() != ""] for line in lines if
            len(line) > 0 and line[0] != "#"]
    list = [(float(l[0]), l[1:]) for l in list if len(l) > 1]
    return dict(list)


def associate(first_list, second_list, offset, max_difference):
    """
    Associate two dictionaries of (stamp,data). As the time stamps never match exactly, we aim
    to find the closest match for every input tuple.

    Input:
    first_list -- first dictionary of (stamp,data) tuples
    second_list -- second dictionary of (stamp,data) tuples
    offset -- time offset between both dictionaries (e.g., to model the delay between the sensors)
    max_difference -- search radius for candidate generation

    Output:
    matches -- list of matched tuples ((stamp1,data1),(stamp2,data2))

    """
    # Use mutable lists of the stamps so matched entries can be removed below
    # (dict.keys() views have no remove() under Python 3).
    first_keys = list(first_list.keys())
    second_keys = list(second_list.keys())
    potential_matches = [(abs(a - (b + offset)), a, b)
                         for a in first_keys
                         for b in second_keys
                         if abs(a - (b + offset)) < max_difference]
    potential_matches.sort()
    matches = []
    for diff, a, b in potential_matches:
        if a in first_keys and b in second_keys:
            first_keys.remove(a)
            second_keys.remove(b)
            matches.append((a, b))

    matches.sort()
    return matches


if __name__ == '__main__':

    # parse command line
    parser = argparse.ArgumentParser(description='''
    This script takes two data files with timestamps and associates them   
    ''')
    parser.add_argument('first_file', help='first text file (format: timestamp data)')
    parser.add_argument('second_file', help='second text file (format: timestamp data)')
    parser.add_argument('--first_only', help='only output associated lines from first file', action='store_true')
    parser.add_argument('--offset', help='time offset added to the timestamps of the second file (default: 0.0)',
                        default=0.0)
    parser.add_argument('--max_difference',
                        help='maximally allowed time difference for matching entries (default: 0.02)', default=0.02)
    args = parser.parse_args()

    first_list = read_file_list(args.first_file)
    second_list = read_file_list(args.second_file)

    matches = associate(first_list, second_list, float(args.offset), float(args.max_difference))

    if args.first_only:
        for a, b in matches:
            print("%f %s" % (a, " ".join(first_list[a])))
    else:
        for a, b in matches:
            print("%f %s %f %s" % (a, " ".join(first_list[a]), b - float(args.offset), " ".join(second_list[b])))

 

Open a terminal in that folder and run the following command to generate the pairing result, associate.txt:

python associate.py rgb.txt depth.txt > associate.txt
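Optionally, the matching can be sanity-checked with a short sketch (it assumes the default four-column output of associate.py shown above: rgb_stamp rgb_file depth_stamp depth_file):

# Report how many pairs were matched and the largest rgb/depth timestamp gap;
# every gap should stay below the --max_difference threshold (0.02 s by default).
diffs = []
with open("associate.txt") as f:
    for line in f:
        parts = line.split()
        if len(parts) >= 4:
            diffs.append(abs(float(parts[0]) - float(parts[2])))
print("matched pairs:", len(diffs))
if diffs:
    print("max time difference: %.6f s" % max(diffs))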

With that, the dataset is ready.