YARP ROS Interoperation Past and Future But Not Now

From Wiki for iCub and Friends

This page gives some details on the status of YARP/ROS interoperation.

ROS vs roscore

"ROS" is a framework, aggregating a lot of stuff. The ROS middleware is called "roscore", and is just one part of ROS. It is the component of ROS most comparable to YARP. It will be referred to in the rest of this document as ROSCORE.

Using YARP and ROSCORE in the same program

There is no obstacle to using YARP and ROSCORE in the same program. Programs that use ROSCORE are in practice likely to be built using ROS's own build system, which is layered on CMake. Since YARP can be linked to like any regular library, and has good support for doing so from CMake, there is no problem integrating at this level. A frequent source of conflict between middleware can be in automatic code-generation steps (e.g. for generating functions/classes encoding message types). YARP has historically refrained from having such code-generation, specifically so that it remains "just a library" from the perspective of integration. YARP has been packaged as a library within the ROS package management system:


When using two types of middleware in one program, an important concern is whether they can work together efficiently. For example, translation between data types could be complicated. This matters most for large objects such as images. YARP has deliberately designed its large data types with an eye to this problem. For images, the YARP image type is structured to match the format of an "IplImage", a venerable format that has long since been propagated to the popular OpenCV library. OpenCV is the image processing library adopted by ROS, and so YARP and ROS images share a common underlying format. This fact is used in this package (in fact the conversion here is sub-optimal, containing an avoidable copy):


Note that the use of YARP and ROSCORE in the same program strongly implies use of the ROS build system. This has consequences. That program will be a lot less portable than if it used YARP alone (ROS developers concentrate on Ubuntu Linux) and will be less usable from frameworks that are incompatible with ROS.

Using YARP and ROSCORE on the same network

YARP and ROSCORE provide for inter-process communication. If they had compatible protocols, then programs using YARP and programs using ROSCORE could communicate with each other. For example, one use case for this would be to make the iCub robot (whose software is written with YARP) directly accessible from programs written using the ROS framework. More generally, a non-ROS-using program (e.g. on an unsupported OS or using an incompatible framework) could be made to communicate with ROS programs via YARP.

YARP has been deliberately designed with interoperability in mind, with a plugin system of "carriers" (network protocols) that allows for great variety in how logical data flow translates into network traffic. Some preliminary work has been done to add "carriers" for ROSCORE traffic into YARP. This is done *without* linking to ROS libraries, since doing so would reduce the portability of YARP.

The XML/RPC carrier

Administrative messages in ROSCORE are transmitted using XML/RPC. Specifically, this protocol is used by ROSCORE when communicating with the roscore "master" entity, and with "slave" entities associated with each node. These messages do not carry user data, but are essential to finding out how to reach named entities on the network, making and breaking connections, and other administrative tasks.

An "xmlrpc_carrier" plugin has been added to YARP. This means that all YARP "Ports" can now act as XML/RPC servers and clients.

YARP user data, when sent to or received from a carrier, is expressed in a logical format called the "Bottle" format, which can be thought of to a first approximation as an s-expression in Lisp -- nested lists containing sequences of primitives. This format is easy to map onto the XML model used by the "XmlRpc++" library, which is the XML parser used by ROS and by the xmlrpc_carrier plugin. Here is the mapping:

* XML/RPC integer <-> Bottle integer
* XML/RPC double (floating point number) <-> Bottle double (floating point number)
* XML/RPC string <-> Bottle string
* XML/RPC blobs <-> Bottle blobs
* XML/RPC array <-> Bottle list
* XML/RPC structure <-> Bottle list

XML/RPC structures are sets of key/value pairs. These are mapped to a list with the tag "dict" as the first element, followed by sublists with key->value mappings:

 (dict (key1 val1) (key2 val2))

With some care, the mapping can be made exact, which means that no new types need to be added to YARP in order to conveniently represent data transfer to/from XML/RPC servers/clients. So it is now easy to communicate from YARP to the roscore "master" and node "slaves"; it is also possible to have YARP ports pose as ROS "slaves" (or theoretically the "master").
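The struct mapping above can be sketched in a few lines. This is an illustrative Python sketch, not YARP code; the function name `to_bottle` is hypothetical, and the nested Python lists merely stand in for YARP's Bottle format:

```python
def to_bottle(value):
    """Map a decoded XML/RPC value onto a Bottle-style nested list.

    Integers, doubles, strings, and blobs map directly; arrays map to
    lists; structures map to a list tagged "dict" followed by
    (key, value) sublists, as described above.
    """
    if isinstance(value, (int, float, str, bytes)):
        return value
    if isinstance(value, list):
        return [to_bottle(v) for v in value]
    if isinstance(value, dict):
        return ["dict"] + [[k, to_bottle(v)] for k, v in value.items()]
    raise TypeError(f"unsupported XML/RPC type: {type(value)}")

print(to_bottle({"key1": "val1", "key2": 2}))
# -> ['dict', ['key1', 'val1'], ['key2', 2]]
```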

For example, once this carrier is available, here is how YARP can be used from the commandline to look up the address of a ROSCORE entity called "/talker". We assume the roscore "master" server is running on the machine called "zimba", on port number 11311:

 # for convenience, give the roscore master a name ("/roscore") in the YARP network
 yarp name register /roscore tcp zimba 11311
 # send a look-up message, following the master API
 echo "lookupNode yarp_contact_id /talker" | yarp rpc xmlrpc://roscore

This result is printed:

 1 "node api" "http://contact:37291/"

So we know that the entity "/talker" can be reached on the machine "contact" on port number 37291, speaking "http" (this means XML/RPC). We can in turn talk to that entity using XML/RPC to get information about it or to prod it into making connections.

The command-line operation above can be trivially converted to code. Such operations are not intended for end-users; they are shown here just to illustrate some detail of this aspect of interoperability.
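As an aside, the reply triple from the master can be picked apart with nothing more than a URL parser. A minimal Python sketch, assuming a reply already decoded from XML/RPC into a (code, message, URI) tuple (the helper name `parse_lookup_reply` is hypothetical):

```python
from urllib.parse import urlparse

def parse_lookup_reply(reply):
    """Extract (hostname, port) from a ROS master lookupNode reply,
    e.g. (1, "node api", "http://contact:37291/")."""
    code, message, uri = reply
    if code != 1:
        raise RuntimeError(f"lookup failed: {message}")
    parsed = urlparse(uri)
    return parsed.hostname, parsed.port

print(parse_lookup_reply((1, "node api", "http://contact:37291/")))
# -> ('contact', 37291)
```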

The TCPROS carrier

User data transmitted between ROS programs is typically carried over the "TCPROS" protocol. This is much more efficient than XML/RPC. A carrier called tcpros_carrier has been added to YARP in order to speak this protocol. We deal here with transmitting user data as an uninterpreted binary blob from YARP to ROSCORE or from ROSCORE to YARP. The issue of how that data can be correctly interpreted on both sides needs to be separately addressed.

There are two very different kinds of data flow that may be happening across a TCPROS connection, from ROSCORE's perspective:

  • "Service" data - essentially an RPC command/reply data flow initiated by a client, directed at a server.
  • "Topic" data - a stream of data from one "publisher" to one "subscriber". Initiation of this stream is a little complicated.

We can use regular YARP ports to act as publishers, subscribers, service-servers and service-clients. The ROS notion of a "Topic" is addressed separately; in summary it just affects the initiation/termination of connections and doesn't materially affect what a carrier needs to do.

There is not much that a tcpros_carrier needs to do. Here's what it does:

  • Produce/consume TCPROS headers (a list of "key=value" strings). The headers may need to include side information, such as the name of the ROS topic; such information is passed to the carrier using "carrier modifiers".
  • We choose to produce/consume a small header on the YARP side of a connection, for compatibility with the Bottle format on that side. This header does not need to be transmitted, it is in effect "virtual". It could be eliminated, but has been convenient for testing.
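The TCPROS header format mentioned in the first point is simple: a 4-byte little-endian total length, then each "key=value" field prefixed by its own 4-byte little-endian length. A minimal Python sketch of an encoder/decoder pair (illustrative only, not the tcpros_carrier code; the function names are hypothetical):

```python
import struct

def encode_tcpros_header(fields):
    """Encode key=value pairs in the TCPROS connection-header layout:
    a 4-byte little-endian total length, then each field as a 4-byte
    little-endian length followed by the "key=value" bytes."""
    body = b""
    for key, value in fields.items():
        field = f"{key}={value}".encode()
        body += struct.pack("<I", len(field)) + field
    return struct.pack("<I", len(body)) + body

def decode_tcpros_header(data):
    """Invert encode_tcpros_header, returning a dict of fields."""
    (total,) = struct.unpack_from("<I", data, 0)
    fields, offset = {}, 4
    while offset < 4 + total:
        (n,) = struct.unpack_from("<I", data, offset)
        key, _, value = data[offset + 4:offset + 4 + n].decode().partition("=")
        fields[key] = value
        offset += 4 + n
    return fields

hdr = encode_tcpros_header({"topic": "/chatter", "callerid": "/yarp"})
print(decode_tcpros_header(hdr))
# -> {'topic': '/chatter', 'callerid': '/yarp'}
```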

As an example: suppose we want to use a ROS service that adds two 64-bit integers and gives back a 64-bit result. Here's how to do it from the command line: (we assume that the "/roscore" master has been registered in YARP as in the example earlier)

 echo "lookupNode dummy_caller_id /add_two_ints_server" | yarp rpc xmlrpc://roscore
 [prints] 1 "node api" "http://contact:37291/"
 yarp name register /add_two_ints_server tcp contact 37291
 echo "requestTopic dummy /add_two_ints ((TCPROS))" | yarp rpc xmlrpc://add_two_ints_server
 [prints] 1 "" (TCPROS contact 38265)
 yarp name register /adder tcp contact 38265
 echo "{ 8 0 0 0   0 0 0 0   2 0 0 0   0 0 0 0}" | yarp rpc tcpros+service./add_two_ints://adder
 # we get back { 10 0 0 0 0 0 0 0 }, which is 8 + 2
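The byte layout in the echo command above can be reproduced with Python's struct module: the request is just two little-endian 64-bit integers back to back, and the reply is one. This is an illustrative sketch of the wire format, not YARP code:

```python
import struct

# Request: two little-endian 64-bit integers, 8 and 2 -- the 16 bytes
# echoed on the command line above.
request = struct.pack("<qq", 8, 2)
print(list(request))  # -> [8, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0]

# Reply: one little-endian 64-bit integer, the sum.
reply = struct.pack("<q", 8 + 2)
print(list(reply))    # -> [10, 0, 0, 0, 0, 0, 0, 0]
```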

As another example: suppose we wanted to receive messages from a ROS node called "/talker" that outputs messages on the "/chatter" topic. Let's start a simple YARP program to consume such messages:

 yarp read /read  # or make a program with an input port

Now to connect it up we would do something like:

 echo "lookupNode dummy_id /talker" | yarp rpc xmlrpc://roscore
 [prints] 1 "node api" "http://contact:37291/"
 yarp name register /talker tcp contact 37291
 echo "requestTopic dummy_id /chatter ((TCPROS))" | yarp rpc xmlrpc://talker
 [prints] 1 "" (TCPROS contact 38265)
 yarp name register /talker/chatter tcp contact 38265
 yarp connect /read /talker/chatter tcpros+topic./chatter

The "yarp read" program should now show a stream of binary blobs rendered in text format, corresponding to the strings generated by "/talker".

How about sending messages to a ROS subscriber? It turns out the subscriber has to be convinced to initiate this connection, and will first need to talk to a "slave" node via XML/RPC. This can easily be achieved in YARP by a temporary port. See the example in the README.TXT bundled with the tcpros_carrier in YARP.

It is clear that connection initiation between YARP and ROS requires some orchestration. For the moment, the necessary logic has been bundled into a simple helper program called "yarpros", which currently has the following functionality:

 yarpros roscore <hostname> <port number>
  -- tell yarp how to reach the ros master
  -- example: yarpros roscore zimba 11311
 yarpros import <name>
  -- import a ROS name into YARP
  -- example: yarpros import /talker
 yarpros read <yarpname> <nodename> <topicname>
  -- read to a YARP port from a ROS node's contribution to a topic
  -- example: yarpros read /read /talker /chatter
 yarpros write <yarpname> <nodename> <topicname>
  -- write from a YARP port to a ROS node's subscription to a topic
  -- example: yarpros write /write /listener /chatter
 yarpros rpc <yarpname> <nodename> <servicename>
  -- write/read from a YARP port to a ROS node's named service
  -- example: yarpros rpc /rpc /add_two_ints_server /add_two_ints

Note that the connection initiation used so far is experimental, and just enough to get going. There are plenty of improvements to be made, such as telling the ROS master about what is going on rather than just working around it :-).

Image carrier

Images in YARP are generally transmitted across regular carriers, with no special treatment (other than using multicast when appropriate). It can be useful to have special carriers that transform images in some way, e.g. by compressing them. The "mjpeg_carrier" in YARP does this, for example. Matching a special purpose image carrier in YARP and ROS remains to be investigated, but there are in principle no obstacles to this. Image communication is treated separately in ROS, see:


Image translation to/from network representations by a custom carrier, if done with a little care, does not require copies of the image data to be made. So in principle this type of traffic should be efficient.


The ROSCORE notion of "topic", at first sight, looks like a big difference in abstraction between YARP and ROS. Luckily it is straightforward to implement topics with YARP ports. A topic can be seen as a virtual port that connects all its source ports (publishers in ROS-speak) to all its destination ports (subscribers in ROS-speak). A small bit of logic has been added to the yarp name server to implement such virtual ports.
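The "virtual port" idea can be sketched in a few lines. This is an illustrative Python sketch, not the YARP name-server code; callables stand in for destination ports:

```python
class Topic:
    """A topic as a virtual port: every published message is fanned
    out to all current subscribers."""

    def __init__(self, name):
        self.name = name
        self.subscribers = []  # callables standing in for destination ports

    def subscribe(self, port):
        self.subscribers.append(port)

    def publish(self, message):
        for port in self.subscribers:
            port(message)

received = []
chatter = Topic("/chatter")
chatter.subscribe(received.append)  # a "subscriber" that records messages
chatter.publish("hello")
print(received)  # -> ['hello']
```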

A larger issue is that the ROS "master" tracks a lot more information about the activity of each network entity than YARP's name server does. It remains to be investigated how much a "foreign" program needs to tell the ROS master about its activities in order to interoperate fully.


The YARP network data format is self-describing. Each message contains sufficient type-tags to interpret that message without any other information. This is less efficient than factoring out type data. But in practice efficiency is mostly a concern when the message is large - and large messages usually contain large lists of homogeneous data, requiring just a few bytes of type data.

The ROSCORE network data format is self-delimiting (it contains its size) but is otherwise a binary blob. A type name is sent at the start of a connection. This means that interpreting messages in practice requires an IDL, with generated code, implying a special build system, implying a framework, implying lower portability of user code. But this approach is more efficient than including type information with each message.

ROSCORE's message types could be sent from or received by a YARP port with ease, if that program could make use of code generated by ROS utilities. With some hacks, it is possible to do this without using ROS's build system; this could potentially be better packaged. An alternative would be to independently implement a parser of ROS message/service definition files.
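The "independent parser" alternative is less daunting than it sounds, since ROS definition files are mostly lines of "type name" pairs. A minimal Python sketch (illustrative only; it handles just fields, skipping comments and constants, and ignores the harder parts such as nested message types):

```python
def parse_msg_definition(text):
    """Extract (type, name) field pairs from a ROS message definition,
    dropping comments and constant declarations."""
    fields = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if not line or "=" in line:           # skip blanks and constants
            continue
        ftype, fname = line.split()[:2]
        fields.append((ftype, fname))
    return fields

# Fields of an add-two-ints style request:
print(parse_msg_definition("int64 a\nint64 b"))
# -> [('int64', 'a'), ('int64', 'b')]
```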

A subset of YARP message types of a fixed structure can be defined with a ROS definition file, and then read/written easily from ROSCORE-using programs.

In general, the type issue remains to be investigated. The challenges in making a systematic solution shouldn't obscure the fact that specific connections that are needed can easily be made to work *now*, with a little bit of manual effort.