
Exploring an equivalent API in Python for rs2::software_device::create_matcher(RS2_MATCHER_DLR_C), defined in C++

Comments

19 comments

  • Lalatendu Das

    In fact, upon further debugging I found that the following APIs are not implemented either. Am I missing something very obvious here? I see that many APIs are missing. Of course, I am mapping from the C++ implementation and assuming an equivalent exists in the Python wrappers. In case you need to look at the complete code I can paste that too; please let me know in your reply. I am just trying to keep the post short and precise.

    ----snip------ 
    cprof = color_sensor.add_video_stream(color_vs)
    dev.create_matcher(RS2_MATCHER_DLR_C) <<<< This is not present too.
    camera_syncer = rs.syncer()

    depth_sensor.open(depth_vs) <<< This is not present
    color_sensor.open(color_vs) <<< This is not present
    depth_sensor.start(camera_syncer) <<< This is not present
    color_sensor.start(camera_syncer) <<< This is not present

    depth_vs.register_extrinsics_to(color_vs, { { 1,0,0,0,1,0,0,0,1 },{ 0,0,0 } }) <<< This is not present too...
    frame_number = 0

    while(frame_number <= 3):
        depth_sensor.on_video_frame({depth_frame.get_data(), # Frame pixels from capture API
                                     depth_frame.x * depth_frame.bpp,
                                     depth_frame.bpp, # Stride and Bytes-per-pixel
                                     frame_number * 16,
                                     RS2_TIMESTAMP_DOMAIN_HARDWARE_CLOCK, # Timestamp
                                     frame_number, # Frame# for potential camera_syncer services
                                     depth_vs})

        m_color_sensor.on_video_frame({color_frame.get_data(), # Frame pixels from capture API
                                       color_frame.x * color_frame.bpp,
                                       color_frame.bpp, # Stride and Bytes-per-pixel
                                       frame_number * 16,
                                       RS2_TIMESTAMP_DOMAIN_HARDWARE_CLOCK, # Timestamp
                                       frame_number, # Frame# for potential camera_syncer services
                                       color_vs})

        fset = camera_syncer.wait_for_frames()
        rs2_depth = fset.first_or_default(RS2_STREAM_DEPTH)
        rs2_color = fset.first_or_default(RS2_STREAM_COLOR)
    ----snip-----
  • Lalatendu Das

    Added the complete Python code for reference. I commented out many of the APIs that should be needed; for the sake of finding everything that is missing from the Python binding, I kept commenting them out to collect all the failures. For example, towards the end even these APIs are not present for the color and depth frames:

      cframe.set_profile(cprof)
      cframe.set_pixels(color_frame.get_data())

    #!/usr/bin/python3
    import numpy as np                        # fundamental package for scientific computing
    import matplotlib.pyplot as plt           # 2D plotting library, publication quality figures
    import pyrealsense2 as rs                 # Intel RealSense cross-platform open-source API
    print("Environment Ready")

    pipe = rs.pipeline()
    cfg = rs.config()
    profile = pipe.start()
    # Skip 5 first frames to give the Auto-Exposure time to adjust
    for x in range(5):
      pipe.wait_for_frames()
    frame_number = 0
    while(frame_number <= 3):
      # Store next frameset for later processing:
      frameset = pipe.wait_for_frames()
      depth_frame = frameset.get_depth_frame()
      color_frame = frameset.get_color_frame()
      dp = depth_frame.get_profile()
      di = dp.as_video_stream_profile().get_intrinsics()
      cp = color_frame.get_profile()
      ci = cp.as_video_stream_profile().get_intrinsics()

      # Start the creation of software Device
      dev = rs.software_device()
      depth_sensor = dev.add_sensor("depth")
      color_sensor = dev.add_sensor("color")

      # Form the Depth video stream that the sw device will transmit
      depth_vs = rs.video_stream()
      depth_vs.intrinsics = rs.intrinsics()
      depth_vs.type = rs.stream.depth
      depth_vs.fmt = rs.format.z16
      depth_vs.index = 0
      depth_vs.uid = 0
      depth_vs.width = di.width
      depth_vs.height = di.height
      depth_vs.fps = dp.fps()
      depth_vs.bpp = depth_frame.bytes_per_pixel

      # intrinsic of camera
      depth_vs.intrinsics.fx = di.fx
      depth_vs.intrinsics.fy = di.fy
      depth_vs.intrinsics.height = di.height
      depth_vs.intrinsics.width = di.width
      depth_vs.intrinsics.ppx = di.ppx
      depth_vs.intrinsics.ppy = di.ppy
      depth_vs.intrinsics.coeffs = di.coeffs
      depth_vs.intrinsics.model = di.model

      #create a stream profile
      dprof = depth_sensor.add_video_stream(depth_vs)
      depth_sensor.add_read_only_option(rs.option.depth_units, 0.001)

      # Form the Color video stream that the sw device will transmit
      color_vs = rs.video_stream()
      color_vs.intrinsics = rs.intrinsics()
      color_vs.type = rs.stream.color
      color_vs.fmt = rs.format.rgb8
      color_vs.index = 0
      color_vs.uid = 1
      color_vs.width = ci.width
      color_vs.height = ci.height
      color_vs.fps = cp.fps()
      color_vs.bpp = color_frame.bytes_per_pixel

      # Fill camera intrinsics
      color_vs.intrinsics.fx = ci.fx
      color_vs.intrinsics.fy = ci.fy
      color_vs.intrinsics.height = ci.height
      color_vs.intrinsics.width = ci.width
      color_vs.intrinsics.ppx = ci.ppx
      color_vs.intrinsics.ppy = ci.ppy
      color_vs.intrinsics.coeffs = ci.coeffs
      color_vs.intrinsics.model = ci.model
      cprof = color_sensor.add_video_stream(color_vs)

      #dev.create_matcher(RS2_MATCHER_DLR_C)
      camera_syncer = rs.syncer()
      # #depth_sensor.open(depth_vs)
      # color_sensor.open(color_vs)
      # depth_sensor.start(camera_syncer)
      # color_sensor.start(camera_syncer)
      #depth_vs.register_extrinsics_to(color_vs, { { 1,0,0,0,1,0,0,0,1 },{ 0,0,0 } })

      # create rs2 depth frame from byte-pixels
      dframe = rs.software_video_frame()
      dframe.bpp = depth_frame.bytes_per_pixel
      dframe.stride = dframe.bpp * depth_frame.width
      dframe.timestamp = 0.0
      dframe.domain = rs.timestamp_domain.hardware_clock
      dframe.frame_number = frame_number
      #dframe.set_profile(dprof)
      #dframe.set_pixels(depth_frame.get_data())
      depth_sensor.on_video_frame(dframe)

      # create rs2 color frame from byte-pixels
      cframe = rs.software_video_frame()
      cframe.bpp = color_frame.bytes_per_pixel
      cframe.stride = cframe.bpp * color_frame.width
      cframe.timestamp = 0.0
      cframe.domain = rs.timestamp_domain.hardware_clock
      cframe.frame_number = frame_number
      #cframe.set_profile(cprof)
      #cframe.set_pixels(color_frame.get_data())
      color_sensor.on_video_frame(cframe)

      fset = camera_syncer.wait_for_frames()
      rs2_depth = fset.first_or_default(rs.stream.depth)
      rs2_color = fset.first_or_default(rs.stream.color)

      # if not rs2_depth or not rs2_color:
      #     print("EII: Frame-set is NULL")
      #     continue

      pc = rs.pointcloud()
      pc.map_to(rs2_color)
      points = pc.calculate(rs2_depth)

      frame_number = frame_number + 1
      
      print("Showing color frame that came from software device")
      showable_col_frame = np.asanyarray(rs2_color.get_data())
      plt.rcParams["axes.grid"] = False
      plt.rcParams['figure.figsize'] = [8, 4]
      plt.imshow(showable_col_frame)
      plt.show()
      
      print("Colorizer filter for depth frame obtained from software device")
      colorizer = rs.colorizer()
      colorized_depth = np.asanyarray(colorizer.colorize(depth_frame).get_data())
      plt.rcParams["axes.grid"] = False
      plt.rcParams['figure.figsize'] = [8, 4]
      plt.imshow(colorized_depth)
      plt.show()
      print("DONE--Colorizer filter for depth frame")

    # Cleanup:
    pipe.stop()
    print("Done with software Device")

     

  • MartyG

    Hi Lalatendu Das. Another RealSense user who was attempting to use software_device in Python encountered the same problem with create_matcher in the case linked below. They were not using the syncer.

    https://github.com/IntelRealSense/librealsense/issues/7057

    Whilst software_device is supported in Python, there are few references available for programming it other than the above link.

    Likewise, the only reference that I could locate regarding syncer in Python is its official documentation page:

    https://intelrealsense.github.io/librealsense/python_docs/_generated/pyrealsense2.syncer.html#pyrealsense2.syncer

     

    Could you provide further information about your project, please, so that I can recommend suitable alternative approaches? For example, what is the source that you are trying to convert to rs2::frame: is it image files such as PNG, or a video stream from the camera?

     

  • Lalatendu Das

    >>Is it image files such as PNG or a video stream from the camera?

    We are using a video stream from the camera, but due to certain requirements (e.g. the camera ingestor and the analytics run in separate containers, or at the very least the rs2 frame object cannot be accessed by the analytics code block even when they are part of the same physical machine/container), we are transmitting the raw pixel bytes (frame.get_data()) from one process and reconstructing the rs2::frameset from those raw pixel bytes in another process. We have been referring to the C++ examples under "software device" and "wrappers/pcl/".
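    The transfer described above can be sketched minimally: prefix the raw pixel buffer from frame.get_data() with the metadata the receiving process needs to rebuild a video_stream and a software_video_frame. This is a stand-alone sketch, not code from this thread; the header layout and names are assumptions for illustration.

    ```python
    import struct

    # width, height, bytes-per-pixel, frame number (layout is an assumption)
    HEADER = struct.Struct("<IIIQ")

    def pack_frame(pixels: bytes, width: int, height: int, bpp: int, number: int) -> bytes:
        """Serialize one frame: metadata header followed by the raw pixel bytes."""
        assert len(pixels) == width * height * bpp
        return HEADER.pack(width, height, bpp, number) + pixels

    def unpack_frame(blob: bytes):
        """Recover the metadata and pixel buffer on the receiving side."""
        width, height, bpp, number = HEADER.unpack_from(blob)
        return blob[HEADER.size:], width, height, bpp, number

    # Round-trip with stand-in pixel data (a 4x2, 1-byte-per-pixel "image"):
    raw = bytes(range(8))
    pixels, w, h, bpp, n = unpack_frame(pack_frame(raw, 4, 2, 1, 7))
    ```

    On the receiving side, the recovered width/height/bpp would feed the rs.video_stream fields and the pixel bytes would go to software_video_frame, as in the scripts in this thread.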

    This works well in C++, but replicating it in Python is where we are stumbling.

    I will go through the links you provided and get back to you with any further hiccups.

     

  • MartyG

    Thanks very much for the clarification.   I look forward to your next report after studying the links.  Good luck!

  • Lalatendu Das

    Hi Marty

    I found two probable approaches suggested in the thread describing issue 7057.

    1. Using the RS client-server (rs-net) Python example. It converts the rs2 frames to numpy arrays, then serializes the numpy array via pickle before unicasting/multicasting it to RealSense Python clients. Unfortunately, we had not planned on providing numpy arrays to our analytics program, as a lot of useful boilerplate code and wrapper functionality is provided on top of the RS2 objects and we don't want to re-invent the wheel. But in the worst case we will rethink this if there is no solution in the Python bindings.

    2. The other approach is slightly different: it doesn't use the syncer class at all. It uses a context variable, adds the software device to it, then enables the streams and extracts the rs2 frame from the synthetic frame. But somehow this code fails for me; it throws the following error:

    [<pyrealsense2.device: Intel RealSense D435 (S/N: 018322071263 FW: 05.12.07.100 LOCKED: YES)>, <pyrealsense2.device: Software-Device
    Intel RealSense D435 (Emulated) (S/N: 1313 FW: 05.10.13.00
    255.255.255.255)>]
    Traceback (most recent call last):
    File "./rs2_software_device.py", line 117, in <module>
    prof = pipe.start(config)
    RuntimeError: No device connected
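    For reference, the pickle-based serialization described in option 1 above can be sketched like this (a hypothetical stand-in dict replaces the numpy array the rs-net example uses; names are illustrative only):

    ```python
    import pickle

    # Stand-in for np.asanyarray(frame.get_data()) serialized on the server side.
    frame_payload = {
        "stream": "depth",
        "frame_number": 42,
        "pixels": bytes(16),
    }

    wire = pickle.dumps(frame_payload)   # what the server would unicast/multicast
    received = pickle.loads(wire)        # what each Python client reconstructs
    ```

    The trade-off the comment notes still applies: the client ends up with plain arrays/bytes, not rs2 frame objects with their wrapper functionality.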
     

     

  • MartyG

    There are not enough Python references for software_device for me to do a detailed analysis of your technique in method 2. I note, though, that you are using pipe.start(config). In your earlier script you defined the custom configuration as cfg, so if you are still using that naming convention then the pipe instruction in the new script would be pipe.start(cfg).

     

  • Lalatendu Das

    Hi Marty, 

    Looks like I forgot to paste the whole code, hence the confusion. The cfg used in the earlier example is for extracting the real frames from the camera. Additionally, in that example I didn't have to use any config, as the syncer class is used to extract the synthetic frame.

    The example you pointed out in issue no. 7057 does not use the syncer class at all, so I have modified my program and followed its way of creating an RS2 synthetic frame using software_device. Please find the complete code pasted below. The earlier example used software_device + syncer; this one uses only software_device and a context variable (as per issue no. 7057).

    Code does the following:

    1. Get real frames (color & depth) from the camera
    2. Create the software_device & sensors and attach them
    3. Using the on_video_frame() call, feed a synthetic frame (byte pixels and intrinsics are taken from the actual frames) to the sensor defined on the software_device
    4. Then extract the new frames and colorize them to visualize the depth frame.
    5. Then use those frames for PCL transformation. ----> I am stuck between steps 2 & 3, so I am not even hitting this path yet.

    The current error occurs at line no. 117, while starting the pipe itself. I haven't tried to prepare the synthetic frame yet.

    Please find the entire code pasted below. Just to remind you again, this is another approach, per the method described in issue 7057 that you pointed to. The earlier code was written following the C++ software_device example.

    #!/usr/bin/python3

    import numpy as np              # fundamental package for scientific computing
    import matplotlib.pyplot as plt # 2D plotting library, publication quality figures
    import cv2                      # OpenCV; used below for cv2.imshow
    import pyrealsense2 as rs       # Intel RealSense cross-platform open-source API
    print("Environment Ready")

    SIMULATED_SN = "1717"

    # Reading the usual frame camera to use the raw bytepixel from them.
    pipe = rs.pipeline()
    cfg = rs.config()
    profile = pipe.start()

    ctx = rs.context()

    # Skip 5 first frames to give the Auto-Exposure time to adjust
    for x in range(5):
      pipe.wait_for_frames()

    frame_number = 0
    while(frame_number <= 3):
      # Store next frameset for later processing:
      frameset = pipe.wait_for_frames()
      depth_frame = frameset.get_depth_frame()
      color_frame = frameset.get_color_frame()

      dp = depth_frame.get_profile()
      di = dp.as_video_stream_profile().get_intrinsics()

      cp = color_frame.get_profile()
      ci = cp.as_video_stream_profile().get_intrinsics()

      # Creation of Simulation Software Device
      dev = rs.software_device()
      # Copied these settings as described in RealSense issue no. 7057
      dev.register_info(rs.camera_info.serial_number, SIMULATED_SN)
      dev.register_info(rs.camera_info.advanced_mode, "YES")
      dev.register_info(rs.camera_info.debug_op_code, "15")
      dev.register_info(
        rs.camera_info.firmware_version, "05.10.13.00\n255.255.255.255"
      )
      dev.register_info(rs.camera_info.name, "Intel RealSense D435 (Emulated)")
      dev.register_info(rs.camera_info.physical_port, "/no/path")
      dev.register_info(rs.camera_info.product_id, "0B3A")
      dev.register_info(
        rs.camera_info.recommended_firmware_version, "05.10.03.00"
      )
      dev.register_info(rs.camera_info.usb_type_descriptor, "3.2")

      depth_sensor = dev.add_sensor("depth")
      color_sensor = dev.add_sensor("color")

      # Form the Depth video stream to be used by software_device.
      depth_vs = rs.video_stream()
      depth_vs.intrinsics = rs.intrinsics()
      # Here di & ci are the intrinsics extracted from the actual frames.
      depth_vs.type = rs.stream.depth
      depth_vs.fmt = rs.format.z16
      depth_vs.index = 0
      depth_vs.uid = 0
      depth_vs.width = di.width
      depth_vs.height = di.height
      depth_vs.fps = dp.fps()
      depth_vs.bpp = depth_frame.bytes_per_pixel

      # Intrinsics of the camera
      depth_vs.intrinsics.fx = di.fx
      depth_vs.intrinsics.fy = di.fy
      depth_vs.intrinsics.height = di.height
      depth_vs.intrinsics.width = di.width
      depth_vs.intrinsics.ppx = di.ppx
      depth_vs.intrinsics.ppy = di.ppy
      depth_vs.intrinsics.coeffs = di.coeffs
      depth_vs.intrinsics.model = di.model

      # Create a stream profile
      dprof = depth_sensor.add_video_stream(depth_vs)
      depth_sensor.add_read_only_option(rs.option.depth_units, 0.001)

      # Form the Color video stream that the sw device will transmit
      color_vs = rs.video_stream()
      color_vs.intrinsics = rs.intrinsics()

      color_vs.type = rs.stream.color
      color_vs.fmt = rs.format.rgb8
      color_vs.index = 0
      color_vs.uid = 1
      color_vs.width = ci.width
      color_vs.height = ci.height
      color_vs.fps = cp.fps()
      color_vs.bpp = color_frame.bytes_per_pixel

      color_vs.intrinsics.fx = ci.fx
      color_vs.intrinsics.fy = ci.fy
      color_vs.intrinsics.height = ci.height
      color_vs.intrinsics.width = ci.width
      color_vs.intrinsics.ppx = ci.ppx
      color_vs.intrinsics.ppy = ci.ppy
      color_vs.intrinsics.coeffs = ci.coeffs
      color_vs.intrinsics.model = ci.model

      cprof = color_sensor.add_video_stream(color_vs)

      # Added as per RealSense issue no. 7057
      dev.add_to(ctx)
      print(list(ctx.query_devices()))

      config = rs.config()
      # config.disable_all_streams()
      config.enable_device(SIMULATED_SN)
      config.enable_stream(rs.stream.depth, 0, di.width, di.height, rs.format.z16, 30)
      config.enable_stream(rs.stream.color, 0, ci.width, ci.height, rs.format.rgb8, 30)

      synthetic_pipe = rs.pipeline(ctx)
      prof = synthetic_pipe.start(config)

      dframe = rs.software_video_frame()
      dframe.bpp = depth_frame.bytes_per_pixel
      dframe.stride = dframe.bpp * depth_frame.width
      dframe.timestamp = 0.0
      dframe.domain = rs.timestamp_domain.hardware_clock
      dframe.frame_number = 1
      dframe.profile = dprof.as_video_stream_profile()
      # Raw bytes extracted from the actual depth frame.
      dframe.pixels = depth_frame.get_data()
      # Done preparing the synthetic frame; feed it to the software_device sensor.
      depth_sensor.on_video_frame(dframe)

      cframe = rs.software_video_frame()
      cframe.bpp = color_frame.bytes_per_pixel
      cframe.stride = cframe.bpp * color_frame.width
      cframe.timestamp = 0.0
      cframe.domain = rs.timestamp_domain.hardware_clock
      cframe.frame_number = 1
      cframe.profile = cprof.as_video_stream_profile()
      # Raw bytes extracted from the actual color frame.
      cframe.pixels = color_frame.get_data()
      # Feed the synthetic frame to the sensor.
      color_sensor.on_video_frame(cframe)

      frame_set = synthetic_pipe.wait_for_frames()
      new_depth_frame = frame_set.get_depth_frame()
      new_color_frame = frame_set.get_color_frame()

      print("Colorizer filter for depth frame obtained from software device")
      colorizer = rs.colorizer()
      new_depth_frame_colorize = colorizer.colorize(new_depth_frame)
      npy_frame = np.asanyarray(new_depth_frame_colorize.get_data())
      cv2.imshow("colorized", npy_frame)
      cv2.waitKey(1000//60)

      # Let's see the colorized depth frame of the actual frame from the camera.
      colorizer = rs.colorizer()
      colorized_depth = np.asanyarray(colorizer.colorize(depth_frame).get_data())
      plt.rcParams["axes.grid"] = False
      plt.rcParams['figure.figsize'] = [8, 4]
      plt.imshow(colorized_depth)
      plt.show()
      print("DONE--Colorizer filter for depth frame")

      pc = rs.pointcloud()
      pc.map_to(new_color_frame)
      points = pc.calculate(new_depth_frame)
      frame_number = frame_number + 1

    # Cleanup:
    pipe.stop()
    print("Done with software Device")
  • Lalatendu Das

    The exact error while running the above example is pasted below. I have altered some variable names for better understanding, hence this error might look a little different from the one pasted before.


    [<pyrealsense2.device: Intel RealSense D435 (S/N: 018322071263 FW: 05.12.07.100 LOCKED: YES)>, <pyrealsense2.device: Software-Device
    Intel RealSense D435 (Emulated) (S/N: 1717 FW: 05.10.13.00
    255.255.255.255)>]
    Traceback (most recent call last):
    File "./rs2_software_device_context.py", line 117, in <module>
    prof = synthetic_pipe.start(config)
    RuntimeError: No device connected

  • MartyG

    I have re-researched your case extensively, but the lack of Python-related software_device references is the main block, as the research just goes round in circles through the same small number of reference sources.

    Have you seen the Python software_device example in the link below yet, please? I don't believe I had seen it myself until now.

    https://gist.github.com/callendorph/4b7c61808e11967dcd002cfd0bcee824

  • Lalatendu Das

    Hey Marty,

    I had referred to this example earlier; unfortunately, many APIs mentioned in it are obsolete and not present in the current Python binding. I have altered them to refer to the equivalent ones present in current versions. So I believe it would be of great help if the RealSense Python binding developers could provide a reference or example showing how to use software_device, syncer, and other best practices for software devices; it would greatly help the community and save RS2 application writers a lot of time.

  • MartyG

    Thank you very much.  Requests for future addition of SDK documentation about a particular subject can be made by visiting the RealSense GitHub forum at the link below and clicking on the green New Issue button to post the request.

    https://github.com/IntelRealSense/librealsense/issues

    I went back to your list of API commands at the start of the case that you believed to be missing, such as sensor open.  They are present in the official documentation.

    Sensor open

    https://intelrealsense.github.io/librealsense/python_docs/_generated/pyrealsense2.sensor.html#pyrealsense2.sensor.open

    Sensor start

    https://intelrealsense.github.io/librealsense/python_docs/_generated/pyrealsense2.sensor.html#pyrealsense2.sensor.start

    I have not been able to find an equivalent for the matcher command though.

  • Lalatendu Das

    Looks like you are referring to the object representing a real sensor of the camera; the APIs you are pointing to in those links are exported. But the corresponding APIs are not defined for software sensors. See the error below to get an idea of what I am referring to as not implemented. Please let me know if my API usage is wrong; all the sample examples create a software sensor in a similar way. Is there anything obvious missing here?

    depth_sensor = dev.add_sensor("depth")     <<< Obtained software_sensor through this API. 
    color_sensor = dev.add_sensor("color")
    ---snip-----
    depth_sensor.open(depth_vs)    <<< depth_sensor is not of type pyrealsense2.sensor; rather it is pyrealsense2.software_sensor
    color_sensor.open(color_vs)
    depth_sensor.start(camera_syncer)
    color_sensor.start(camera_syncer)

    The exact error looks as below:
    File "./rs2_software_device.py", line 124, in <module>
    depth_sensor.open(depth_vs)
    AttributeError: 'pyrealsense2.pyrealsense2.software_sensor' object has no attribute 'open'
  • MartyG

    I conducted further research but could not find information or a solution for this particular situation. I do apologize. I cannot find a way forward without further information references.

  • Arvid2 Nilsson

    Hi there! The problem is (as of writing this, v2.47) that the Python wrappers contain several bugs and omissions related to the software_device functionality. Please find below a patch that fills in the "holes". With this patch applied, my build of librealsense and the python wrappers can be used to get a software device up and running. Only one stream can make it through the syncer for some reason - either depth or color, not both. I have not investigated that fully.

    BR, Arvid

    From 7a3e7615444e940ff9b4c3bcdfb4b8b21c9c8b43 Mon Sep 17 00:00:00 2001
    From: Arvid Nilsson <arvid2.nilsson@gmail.com>
    Date: Wed, 30 Jun 2021 13:55:29 +0200
    Subject: [PATCH] Support sw device in Python

    ---
    include/librealsense2/h/rs_types.h | 1 +
    src/realsense.def | 1 +
    src/rs.cpp | 1 +
    src/types.cpp | 2 ++
    wrappers/python/c_files.cpp | 2 +-
    wrappers/python/pyrs_internal.cpp | 10 +++++-----
    6 files changed, 11 insertions(+), 6 deletions(-)

    diff --git a/include/librealsense2/h/rs_types.h b/include/librealsense2/h/rs_types.h
    index b1a30de54..4bc526105 100644
    --- a/include/librealsense2/h/rs_types.h
    +++ b/include/librealsense2/h/rs_types.h
    @@ -252,6 +252,7 @@ typedef enum rs2_matchers

    RS2_MATCHER_COUNT
    }rs2_matchers;
    +const char* rs2_matchers_to_string(rs2_matchers matchers);

    typedef struct rs2_device_info rs2_device_info;
    typedef struct rs2_device rs2_device;
    diff --git a/src/realsense.def b/src/realsense.def
    index 4bd7c8b61..c7a9be1be 100644
    --- a/src/realsense.def
    +++ b/src/realsense.def
    @@ -128,6 +128,7 @@ EXPORTS
    rs2_exception_type_to_string
    rs2_extension_type_to_string
    rs2_extension_to_string
    + rs2_matchers_to_string
    rs2_playback_status_to_string
    rs2_log_severity_to_string
    rs2_log
    diff --git a/src/rs.cpp b/src/rs.cpp
    index 0e114aaef..a4561dfeb 100644
    --- a/src/rs.cpp
    +++ b/src/rs.cpp
    @@ -1283,6 +1283,7 @@ const char* rs2_cah_trigger_to_string( int mode )
    const char* rs2_calibration_type_to_string(rs2_calibration_type type) { return get_string(type); }
    const char* rs2_calibration_status_to_string(rs2_calibration_status status) { return get_string(status); }
    const char* rs2_host_perf_mode_to_string(rs2_host_perf_mode mode) { return get_string(mode); }
    +const char* rs2_matchers_to_string(rs2_matchers matchers) { return get_string(matchers); }

    void rs2_log_to_console(rs2_log_severity min_severity, rs2_error** error) BEGIN_API_CALL
    {
    diff --git a/src/types.cpp b/src/types.cpp
    index bb1672769..23ffd3974 100644
    --- a/src/types.cpp
    +++ b/src/types.cpp
    @@ -624,6 +624,8 @@ namespace librealsense
    CASE(DI_C)
    CASE(DLR_C)
    CASE(DLR)
    + CASE(DIC)
    + CASE(DIC_C)
    CASE(DEFAULT)
    default: assert(!is_valid(value)); return UNKNOWN_VALUE;
    }
    diff --git a/wrappers/python/c_files.cpp b/wrappers/python/c_files.cpp
    index f210f5624..bc13180cd 100644
    --- a/wrappers/python/c_files.cpp
    +++ b/wrappers/python/c_files.cpp
    @@ -25,7 +25,7 @@ void init_c_files(py::module &m) {
    BIND_ENUM(m, rs2_distortion, RS2_DISTORTION_COUNT, "Distortion model: defines how pixel coordinates should be mapped to sensor coordinates.")
    BIND_ENUM(m, rs2_log_severity, RS2_LOG_SEVERITY_COUNT, "Severity of the librealsense logger.")
    BIND_ENUM(m, rs2_extension, RS2_EXTENSION_COUNT, "Specifies advanced interfaces (capabilities) objects may implement.")
    -// BIND_ENUM(m, rs2_matchers, RS2_MATCHER_COUNT, "Specifies types of different matchers.") // TODO: implement rs2_matchers_to_string()
    + BIND_ENUM(m, rs2_matchers, RS2_MATCHER_COUNT, "Specifies types of different matchers.") // TODO: implement rs2_matchers_to_string()
    BIND_ENUM(m, rs2_camera_info, RS2_CAMERA_INFO_COUNT, "This information is mainly available for camera debug and troubleshooting and should not be used in applications.")
    BIND_ENUM(m, rs2_stream, RS2_STREAM_COUNT, "Streams are different types of data provided by RealSense devices.")
    BIND_ENUM(m, rs2_format, RS2_FORMAT_COUNT, "A stream's format identifies how binary data is encoded within a frame.")
    diff --git a/wrappers/python/pyrs_internal.cpp b/wrappers/python/pyrs_internal.cpp
    index 7b584e209..f1653268e 100644
    --- a/wrappers/python/pyrs_internal.cpp
    +++ b/wrappers/python/pyrs_internal.cpp
    @@ -117,7 +117,7 @@ void init_internal(py::module &m) {

    /** rs_internal.hpp **/
    // rs2::software_sensor
    - py::class_<rs2::software_sensor> software_sensor(m, "software_sensor");
    + py::class_<rs2::software_sensor, rs2::sensor> software_sensor(m, "software_sensor");
    software_sensor.def("add_video_stream", &rs2::software_sensor::add_video_stream, "Add video stream to software sensor",
    "video_stream"_a, "is_default"_a=false)
    .def("add_motion_stream", &rs2::software_sensor::add_motion_stream, "Add motion stream to software sensor",
    @@ -137,7 +137,7 @@ void init_internal(py::module &m) {
    .def("on_notification", &rs2::software_sensor::on_notification, "notif"_a);

    // rs2::software_device
    - py::class_<rs2::software_device> software_device(m, "software_device");
    + py::class_<rs2::software_device, rs2::device> software_device(m, "software_device");
    software_device.def(py::init<>())
    .def("add_sensor", &rs2::software_device::add_sensor, "Add software sensor with given name "
    "to the software device.", "name"_a)
    @@ -149,9 +149,9 @@ void init_internal(py::module &m) {
    .def("register_info", &rs2::software_device::register_info, "Add a new camera info value, like serial number",
    "info"_a, "val"_a)
    .def("update_info", &rs2::software_device::update_info, "Update an existing camera info value, like serial number",
    - "info"_a, "val"_a);
    - //.def("create_matcher", &rs2::software_device::create_matcher, "Set the wanted matcher type that will "
    - // "be used by the syncer", "matcher"_a) // TODO: bind rs2_matchers enum.
    + "info"_a, "val"_a)
    + .def("create_matcher", &rs2::software_device::create_matcher, "Set the wanted matcher type that will "
    + "be used by the syncer", "matcher"_a); // TODO: bind rs2_matchers enum.

    // rs2::firmware_log_message
    py::class_<rs2::firmware_log_message> firmware_log_message(m, "firmware_log_message");
    --
    2.32.0.windows.1
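    The crux of the patch is the two py::class_ changes: software_sensor and software_device were registered without naming rs2::sensor / rs2::device as their base classes, so the inherited open()/start() methods never surfaced in Python. A pure-Python analogy of that failure mode (hypothetical classes, for illustration only, not pyrealsense2 code):

    ```python
    class Sensor:
        """Stand-in for the rs2::sensor base class."""
        def open(self, profile):
            return f"opened {profile}"

    class SoftwareSensorBroken:        # bound without declaring the base class
        pass

    class SoftwareSensorFixed(Sensor): # bound with the base class declared
        pass

    missing = hasattr(SoftwareSensorBroken(), "open")  # the AttributeError case above
    present = hasattr(SoftwareSensorFixed(), "open")   # what the patch restores
    ```

    This mirrors why the earlier error read `'pyrealsense2.pyrealsense2.software_sensor' object has no attribute 'open'` even though sensor.open exists in the documentation.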

  • MartyG

    Thanks very much Arvid2 Nilsson for the contribution of your patch code!

    If you would like your patch to be considered for merging officially into the RealSense SDK, you can contribute a Pull Request at the link below.  Doing so is totally optional though. 

    https://github.com/IntelRealSense/librealsense/pulls

  • Sandesh Kumar S

    MartyG Are there any updates on this? Is the full functionality of the Python software device APIs supported now?

  • MartyG

    Sandesh Kumar S  There has been no further information regarding using software_device in Python since the patch kindly provided by Arvid2 Nilsson above.

    https://support.intelrealsense.com/hc/en-us/community/posts/1500000934242/comments/4403012747923

  • MartyG

    In a past case about defining custom framesets, the suggested alternative to software_device was to define a custom processing block.

    https://github.com/IntelRealSense/librealsense/issues/5847#issuecomment-586718261

    The references in that case were for the C++ language.  I believe that the pyrealsense2 equivalent is here:

    https://intelrealsense.github.io/librealsense/python_docs/_generated/pyrealsense2.processing_block.html#pyrealsense2.processing_block
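    A processing block is essentially a user callback that receives an input frame plus a "source" object used to emit the output frame. As a rough analogy in plain Python (hypothetical classes for illustration; the real pyrealsense2 processing_block wraps a native callback, so this only sketches the shape of the pattern):

    ```python
    class FrameSource:
        """Stand-in for the object a processing block uses to emit results."""
        def __init__(self):
            self.emitted = []
        def frame_ready(self, frame):
            self.emitted.append(frame)

    class ProcessingBlock:
        """Toy analogy: run a user callback over each input frame."""
        def __init__(self, callback):
            self._callback = callback
            self._source = FrameSource()
        def invoke(self, frame):
            self._callback(frame, self._source)
            return self._source.emitted[-1]

    # A block that tags each frame, standing in for a real filter:
    block = ProcessingBlock(lambda f, src: src.frame_ready({**f, "filtered": True}))
    out = block.invoke({"stream": "depth", "number": 1})
    ```

    The appeal for the use case in this thread is that a processing block lets you synthesize output frames from arbitrary input data without standing up a whole emulated device.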

