Protobuf Python



A basic tutorial introduction to gRPC in Python.

This tutorial provides a basic Python programmer's introduction to working with gRPC.

By walking through this example you'll learn how to:

  • Define a service in a .proto file.
  • Generate server and client code using the protocol buffer compiler.
  • Use the Python gRPC API to write a simple client and server for your service.

It assumes that you have read the Introduction to gRPC and are familiar with protocol buffers. You can find out more in the proto3 language guide and Python generated code guide.


Why use gRPC?


Our example is a simple route mapping application that lets clients get information about features on their route, create a summary of their route, and exchange route information such as traffic updates with the server and other clients.

With gRPC we can define our service once in a .proto file and generate clients and servers in any of gRPC's supported languages, which in turn can be run in environments ranging from servers inside a large data center to your own tablet - all the complexity of communication between different languages and environments is handled for you by gRPC. We also get all the advantages of working with protocol buffers, including efficient serialization, a simple IDL, and easy interface updating.

Example code and setup

The example code for this tutorial is in grpc/grpc/examples/python/route_guide. To download the example, clone the grpc repository by running the following command:
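Based on the gRPC quick start, the clone command looks like the following; replace RELEASE_TAG_HERE with the release tag you want to use:

```shell
git clone -b RELEASE_TAG_HERE https://github.com/grpc/grpc
```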

Then change your current directory to examples/python/route_guide in the repository:
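Assuming the clone landed in a directory named grpc, that is:

```shell
cd grpc/examples/python/route_guide
```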

You also should have the relevant tools installed to generate the server and client interface code - if you don't already, follow the setup instructions in Quick start.

Defining the service

Your first step (as you'll know from the Introduction to gRPC) is to define the gRPC service and the method request and response types using protocol buffers. You can see the complete .proto file in examples/protos/route_guide.proto.

To define a service, you specify a named service in your .proto file:
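In route_guide.proto this looks as follows (the method definitions are filled in below):

```proto
service RouteGuide {
  // (Method definitions not shown.)
}
```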

Then you define rpc methods inside your service definition, specifying their request and response types. gRPC lets you define four kinds of service method, all of which are used in the RouteGuide service:

  • A simple RPC where the client sends a request to the server using the stub and waits for a response to come back, just like a normal function call.

  • A response-streaming RPC where the client sends a request to the server and gets a stream to read a sequence of messages back. The client reads from the returned stream until there are no more messages. As you can see in the example, you specify a response-streaming method by placing the stream keyword before the response type.

  • A request-streaming RPC where the client writes a sequence of messages and sends them to the server, again using a provided stream. Once the client has finished writing the messages, it waits for the server to read them all and return its response. You specify a request-streaming method by placing the stream keyword before the request type.

  • A bidirectionally-streaming RPC where both sides send a sequence of messages using a read-write stream. The two streams operate independently, so clients and servers can read and write in whatever order they like: for example, the server could wait to receive all the client messages before writing its responses, or it could alternately read a message then write a message, or some other combination of reads and writes. The order of messages in each stream is preserved. You specify this type of method by placing the stream keyword before both the request and the response.
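Put together, the four method kinds appear in the RouteGuide service of route_guide.proto roughly as follows:

```proto
service RouteGuide {
  // A simple RPC.
  rpc GetFeature(Point) returns (Feature) {}
  // A response-streaming RPC.
  rpc ListFeatures(Rectangle) returns (stream Feature) {}
  // A request-streaming RPC.
  rpc RecordRoute(stream Point) returns (RouteSummary) {}
  // A bidirectionally-streaming RPC.
  rpc RouteChat(stream RouteNote) returns (stream RouteNote) {}
}
```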

Your .proto file also contains protocol buffer message type definitions for all the request and response types used in our service methods - for example, here's the Point message type:
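In route_guide.proto the Point message is defined like this:

```proto
// Points are represented as latitude-longitude pairs in the E7 representation
// (degrees multiplied by 10**7 and rounded to the nearest integer).
message Point {
  int32 latitude = 1;
  int32 longitude = 2;
}
```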

Generating client and server code

Next you need to generate the gRPC client and server interfaces from your .protoservice definition.

First, install the grpcio-tools package:
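For example with pip:

```shell
python -m pip install grpcio-tools
```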

Use the following command to generate the Python code:
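Run from the examples/python/route_guide directory, the command from the official example looks like this:

```shell
python -m grpc_tools.protoc -I../../protos --python_out=. --grpc_python_out=. ../../protos/route_guide.proto
```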

Note that as we've already provided a version of the generated code in the example directory, running this command regenerates the appropriate file rather than creates a new one. The generated code files are called route_guide_pb2.py and route_guide_pb2_grpc.py and contain:

  • classes for the messages defined in route_guide.proto
  • classes for the service defined in route_guide.proto
    • RouteGuideStub, which can be used by clients to invoke RouteGuide RPCs
    • RouteGuideServicer, which defines the interface for implementations of the RouteGuide service
  • a function for the service defined in route_guide.proto
    • add_RouteGuideServicer_to_server, which adds a RouteGuideServicer to a grpc.Server

Note

The 2 in pb2 indicates that the generated code is following Protocol Buffers Python API version 2. Version 1 is obsolete. It has no relation to the Protocol Buffers Language version, which is the one indicated by syntax = 'proto3' or syntax = 'proto2' in a .proto file.

Creating the server

First let's look at how you create a RouteGuide server. If you're only interested in creating gRPC clients, you can skip this section and go straight to Creating the client (though you might find it interesting anyway!).

Creating and running a RouteGuide server breaks down into two work items:

  • Implementing the servicer interface generated from our service definition with functions that perform the actual 'work' of the service.
  • Running a gRPC server to listen for requests from clients and transmit responses.

You can find the example RouteGuide server in examples/python/route_guide/route_guide_server.py.

Implementing RouteGuide


route_guide_server.py has a RouteGuideServicer class that subclasses the generated class route_guide_pb2_grpc.RouteGuideServicer:
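A sketch of the class, modeled on route_guide_server.py (route_guide_resources is a helper module shipped with the example that loads the feature database):

```python
import route_guide_pb2
import route_guide_pb2_grpc
import route_guide_resources


class RouteGuideServicer(route_guide_pb2_grpc.RouteGuideServicer):
    """Provides methods that implement functionality of the route guide server."""

    def __init__(self):
        # The feature database the service methods read from.
        self.db = route_guide_resources.read_route_guide_database()
```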

RouteGuideServicer implements all the RouteGuide service methods.

Simple RPC

Let's look at the simplest type first, GetFeature, which just gets a Point from the client and returns the corresponding feature information from its database in a Feature.
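Modeled on the example server, GetFeature looks roughly like this (get_feature is a small helper in route_guide_server.py that scans the database for a feature at the given point):

```python
def GetFeature(self, request, context):
    feature = get_feature(self.db, request)
    if feature is None:
        # No feature at this point: return an unnamed Feature at the
        # requested location rather than an error.
        return route_guide_pb2.Feature(name="", location=request)
    return feature
```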

The method is passed a route_guide_pb2.Point request for the RPC, and a grpc.ServicerContext object that provides RPC-specific information such as timeout limits. It returns a route_guide_pb2.Feature response.

Response-streaming RPC

Now let's look at the next method. ListFeatures is a response-streaming RPC that sends multiple Features to the client.
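A sketch of the method, modeled on the example server; note that it yields its responses instead of returning one:

```python
def ListFeatures(self, request, context):
    # Normalize the requested Rectangle into a bounding box.
    left = min(request.lo.longitude, request.hi.longitude)
    right = max(request.lo.longitude, request.hi.longitude)
    top = max(request.lo.latitude, request.hi.latitude)
    bottom = min(request.lo.latitude, request.hi.latitude)
    for feature in self.db:
        # Yield every known feature that falls inside the box.
        if (feature.location.longitude >= left
                and feature.location.longitude <= right
                and feature.location.latitude >= bottom
                and feature.location.latitude <= top):
            yield feature
```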

Here the request message is a route_guide_pb2.Rectangle within which the client wants to find Features. Instead of returning a single response the method yields zero or more responses.

Request-streaming RPC

The request-streaming method RecordRoute uses an iterator of request values and returns a single response value.
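Modeled on the example server (get_feature and get_distance are helpers defined in route_guide_server.py):

```python
import time


def RecordRoute(self, request_iterator, context):
    point_count = 0
    feature_count = 0
    distance = 0.0
    prev_point = None

    start_time = time.time()
    for point in request_iterator:
        point_count += 1
        if get_feature(self.db, point):
            feature_count += 1
        if prev_point:
            distance += get_distance(prev_point, point)
        prev_point = point

    elapsed_time = time.time() - start_time
    # A single RouteSummary response is returned once the input is exhausted.
    return route_guide_pb2.RouteSummary(point_count=point_count,
                                        feature_count=feature_count,
                                        distance=int(distance),
                                        elapsed_time=int(elapsed_time))
```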

Bidirectional streaming RPC

Lastly let's look at the bidirectionally-streaming method RouteChat.
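A sketch of the method, modeled on the example server: it consumes an iterator of RouteNotes and yields RouteNotes back:

```python
def RouteChat(self, request_iterator, context):
    prev_notes = []
    for new_note in request_iterator:
        # Reply with every note previously recorded at the same location.
        for prev_note in prev_notes:
            if prev_note.location == new_note.location:
                yield prev_note
        prev_notes.append(new_note)
```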

This method's semantics are a combination of those of the request-streaming method and the response-streaming method. It is passed an iterator of request values and is itself an iterator of response values.

Starting the server

Once you have implemented all the RouteGuide methods, the next step is to start up a gRPC server so that clients can actually use your service:
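The serve() function in route_guide_server.py does this roughly as follows:

```python
from concurrent import futures

import grpc
import route_guide_pb2_grpc


def serve():
    # A thread pool handles incoming RPCs.
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    route_guide_pb2_grpc.add_RouteGuideServicer_to_server(
        RouteGuideServicer(), server)
    server.add_insecure_port("[::]:50051")
    server.start()
    server.wait_for_termination()
```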

The server start() method is non-blocking. A new thread will be instantiated to handle requests. The thread calling server.start() will often not have any other work to do in the meantime. In this case, you can call server.wait_for_termination() to cleanly block the calling thread until the server terminates.

Creating the client

You can see the complete example client code in examples/python/route_guide/route_guide_client.py.

Creating a stub

To call service methods, we first need to create a stub.
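In the example client this takes a couple of lines (port 50051 matches the server above):

```python
import grpc
import route_guide_pb2_grpc

channel = grpc.insecure_channel("localhost:50051")
stub = route_guide_pb2_grpc.RouteGuideStub(channel)
```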

We instantiate the RouteGuideStub class of the route_guide_pb2_grpc module, generated from our .proto.


Calling service methods

For RPC methods that return a single response ('response-unary' methods), gRPC Python supports both synchronous (blocking) and asynchronous (non-blocking) control flow semantics. For response-streaming RPC methods, calls immediately return an iterator of response values. Calls to that iterator's next() method block until the response to be yielded from the iterator becomes available.

Simple RPC

A synchronous call to the simple RPC GetFeature is nearly as straightforward as calling a local method. The RPC call waits for the server to respond, and will either return a response or raise an exception:
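For example (the coordinates are the E7-encoded values used in the example client):

```python
point = route_guide_pb2.Point(latitude=409146138, longitude=-746188906)
feature = stub.GetFeature(point)
```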

An asynchronous call to GetFeature is similar, but like calling a local method asynchronously in a thread pool:
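Along the lines of:

```python
feature_future = stub.GetFeature.future(point)
# ... do other work here ...
feature = feature_future.result()  # blocks until the response is available
```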

Response-streaming RPC

Calling the response-streaming ListFeatures is similar to working with sequence types:
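A sketch modeled on the example client (the coordinates are illustrative E7 values):

```python
rectangle = route_guide_pb2.Rectangle(
    lo=route_guide_pb2.Point(latitude=400000000, longitude=-750000000),
    hi=route_guide_pb2.Point(latitude=420000000, longitude=-730000000))
# The call returns immediately; iterating blocks until responses arrive.
for feature in stub.ListFeatures(rectangle):
    print("Feature called %s at %s" % (feature.name, feature.location))
```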

Request-streaming RPC

Calling the request-streaming RecordRoute is similar to passing an iterator to a local method. Like the simple RPC above that also returns a single response, it can be called synchronously or asynchronously:
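A sketch modeled on the example client; generate_route is a helper in route_guide_client.py that yields Point messages:

```python
# Synchronous: blocks until the server has read all points and replied.
route_summary = stub.RecordRoute(generate_route(feature_list))
print("Finished trip with %s points" % route_summary.point_count)

# Asynchronous variant:
route_summary_future = stub.RecordRoute.future(generate_route(feature_list))
route_summary = route_summary_future.result()
```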

Bidirectional streaming RPC

Calling the bidirectionally-streaming RouteChat has (as is the case on the service-side) a combination of the request-streaming and response-streaming semantics:
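Modeled on the example client, where generate_messages yields RouteNote messages:

```python
responses = stub.RouteChat(generate_messages())
for response in responses:
    print("Received message %s at %s" % (response.message, response.location))
```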

Try it out!

Run the server:
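From the examples/python/route_guide directory:

```shell
python route_guide_server.py
```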

From a different terminal, run the client:
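Again from the examples/python/route_guide directory:

```shell
python route_guide_client.py
```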

Last modified February 10, 2021: Alert shortcodes: treat all bodies as markdown (#640) (ae46bb1)

The structuring of data plays an important role in the development of programs and websites. If project data is well structured, for example, it can be easily and precisely read by other software. On the Internet, this is especially important for text-based search engines such as Google, Bing or Yahoo, which can capture the content of a website thanks to corresponding, structured distinctions.

The use of structured data in software development is generally worthwhile - whether for Internet or desktop applications - wherever programs or services have to exchange data via interfaces and high data processing speed is desired. In this article you will learn what role the serialization format Protocol Buffers (Protobuf) can play and how this structuring method differs from the well-known alternative JSON.

  1. What are the benefits of Google's Protocol Buffers?
  2. Tutorial: Practical introduction to Protobuf using the example of Java

What is Protobuf (Protocol Buffers)?

Protocol Buffers, or Protobuf for short, is a data interchange format originally developed for internal use that Google has offered to the general public as an open source project (partly under the Apache 2.0 license) since 2008. The binary format enables applications to store and exchange structured data in an uncomplicated way, and the programs involved can even be written in different programming languages. The supported languages include, among others:

  • C#
  • C++
  • Go
  • Objective-C
  • Java
  • Python
  • Ruby

Protobuf is used in combination with HTTP/2 and RPCs (Remote Procedure Calls) for local and remote client-server communication, namely to describe the interfaces required there. This combination of protocols is known as gRPC.


What are the benefits of Google's Protocol Buffers?

When developing Protobuf, Google placed emphasis on two factors: simplicity and performance. At the time of development, the format - as already mentioned, initially used internally at Google - was intended to replace the similar XML format. Today it also competes with other solutions such as JSON(P) or FlatBuffers. Since Protocol Buffers is still the better choice for many projects, a closer look makes the characteristics and strengths of this structuring method clear:

Clear, cross-application schemes

The basis of every successful application is a well-organized database system. A great deal of attention is paid to the organization of this system - including the data it contains - but the underlying structures are then lost at the latest when the data is forwarded to a third-party service. The unique encoding of the data in the Protocol Buffers schema ensures that your project forwards structured data as desired, without these structures being broken up.

Backward and forward compatibility

The implementation of Protobuf spares you the annoying version checks that are usually associated with 'ugly' code. To maintain backward compatibility with older versions or forward compatibility with new versions, Protocol Buffers uses numbered fields that serve as reference points for accessing services. This means you do not always have to adapt the entire code in order to publish new features and functions.

Flexibility and comfort

With Protobuf coding, you use modifiers (required, optional or repeated) which simplify the programming work considerably. This way the structuring method allows you to determine the data structure at schema level, whereupon the implementation details of the classes used for the different programming languages are regulated automatically. You can also change a field's status at any time, for example from 'required' to 'optional'. The transport of data structures can also be regulated using Protocol Buffers: by coding generic query and response structures, flexible and secure data transfer between multiple services is ensured in a simple manner.

Less boilerplate code

Boilerplate code (or simply boilerplate) plays a decisive role in programming, depending on the type and complexity of a project. Put simply, it is reusable code blocks that are needed in many places in software and are usually only slightly customizable. Such code is often used, for example, to prepare the use of functions from libraries. Boilerplates are common in the web languages JavaScript, PHP, HTML and CSS in particular, although this is not optimal for the performance of the web application. A suitable Protocol Buffers scheme helps to reduce the boilerplate code and thereby improve performance in the long term.

Easy language interoperability

It is standard today that applications are no longer written in just one language; instead, program parts or modules combine different languages. Protobuf simplifies the interaction between the individual code components considerably. If new components are added whose language differs from the current project language, you can simply translate the Protocol Buffers schema into the respective target language using the appropriate code generator, reducing your own effort to a minimum. The prerequisite, of course, is that the languages used are supported by Protobuf by default, such as those already listed, or via a third-party add-on.

Protobuf vs. JSON: The two formats in comparison

First and foremost, Google developed Protocol Buffers as an alternative to XML (Extensible Markup Language) and surpassed the markup language in many ways. Structuring data with Protobuf therefore not only tends to be simpler but, according to the search engine giant, also produces a data structure that is three to ten times smaller and 20 to 100 times faster to process than a comparable XML structure.

Protocol Buffers is also often compared directly with JSON (JavaScript Object Notation), although it should be mentioned that the two technologies were designed with different objectives: JSON is a message format that originated from JavaScript, exchanges its messages in text form, and is supported by practically all common programming languages. The functionality of Protobuf includes more than a message format, as Google's technology also offers various rules and tools for defining and exchanging messages. Protobuf also generally outperforms JSON at sending messages, but the following 'Protobuf vs. JSON' comparison shows that both structuring techniques have their advantages and disadvantages:

Protobuf vs. JSON:

  • Developer: Protobuf - Google; JSON - Douglas Crockford
  • Function: Protobuf - format for structured data (storage and transmission) plus an accompanying library; JSON - format for structured data (storage and transmission)
  • Binary format: Protobuf - yes; JSON - no
  • Standardization: Protobuf - no; JSON - yes
  • Human-readable format: Protobuf - partially; JSON - yes
  • Community/documentation: Protobuf - smaller community, expandable online manuals; JSON - huge community, good official documentation as well as various online tutorials

So, if you need a well-documented serialization format that stores and transmits the structured data in human-readable form, you should use JSON instead of Protocol Buffers. This is especially true if the server-side part of the application is written in JavaScript and if a large part of the data is processed directly by browsers by default. On the other hand, if flexibility and performance of the data structure play a decisive role, Protocol Buffers tends to be the more efficient and better solution.

Tutorial: Practical introduction to Protobuf using the example of Java

Protocol Buffers can make the difference in many software projects, but as is often the case, the first thing to do is to get to know the particularities and syntactic tricks of the serialization technology and how to apply them. To give you an initial impression of Protobuf's syntax and message exchange, the following tutorial explains the basic steps - from defining your own format in a .proto file to compiling the Protocol Buffers structures. A simple Java address book application serves as the code base; it can read contact information from a file and write it back to a file. The parameters 'Name', 'ID', 'email address' and 'Telephone number' are assigned to each address book entry.

Define your own data format in the .proto file

You first describe any data structure that you want to implement with Protocol Buffers in the .proto file, the default configuration file of the serialization format. For each structure that you want to serialize in this file - that is, map in succession - simply add a message. Then you specify names and types for each field of this message and append the desired modifier(s). One modifier is required per field.

One possible mapping of the data structures in the .proto file looks as follows for the Java address book:
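One possible version, modeled closely on Google's own address book tutorial; it declares proto2 syntax, since the 'required'/'optional' modifiers and the [default = HOME] annotation used here are proto2 features:

```proto
syntax = "proto2";

package tutorial;

option java_package = "com.example.tutorial";
option java_outer_classname = "AddressBookProtos";

message Person {
  required string name = 1;
  required int32 id = 2;
  optional string email = 3;

  enum PhoneType {
    MOBILE = 0;
    HOME = 1;
    WORK = 2;
  }

  message PhoneNumber {
    required string number = 1;
    optional PhoneType type = 2 [default = HOME];
  }

  repeated PhoneNumber phones = 4;
}

message AddressBook {
  repeated Person people = 1;
}
```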

The syntax of Protocol Buffers is therefore strongly reminiscent of C++ or Java. The syntax version of Protobuf is always declared first (proto2 in this example, since the 'required' modifier and custom field defaults described below are proto2 features), followed by the description of the software package whose data you want to structure. This includes a unique name ('tutorial') and, in this code example, the two Java-specific options 'java_package' (the Java package in which the generated classes are saved) and 'java_outer_classname' (the class name under which the classes are summarized).

This is followed by the Protobuf messages, which can be composed of any number of fields, whereby the typical data types such as 'bool', 'int32', 'float', 'double', or 'string' are available. Some of these are also used in the example. As already mentioned, each field of a message must be assigned at least one modifier - i.e. either...

  • required: a value for the field is mandatory. If this value is missing, the message remains 'uninitialized', i.e. not initialized or unsent.
  • optional: a value can be provided in an optional field but does not have to. If this is not the case, a value defined as the standard is used. In the code above, for example, the default value 'HOME' (landline number at home) is entered for the telephone number type.
  • repeated: fields with the 'repeated' modifier can be repeated any number of times (including zero times).

You can find detailed instructions on how to define your own data format with Protocol Buffers in Google's developer documentation.

Compile your own Protocol Buffers schema

Once your own data structures are defined as desired in the .proto file, generate the classes needed to read and write the Protobuf messages. To do this, run the Protocol Buffers compiler (protoc) on the configuration file. If you have not yet installed it, simply download the current version from the official GitHub repository. Unzip the archive at the desired location; the compiler binary is located in the 'bin' folder.


Make sure you have the appropriate edition of the Protobuf compiler: protoc is available for 32- and 64-bit architectures (Windows, Linux or macOS), as desired.

Finally, you specify:


  • the source directory which contains the code of your program (here placeholder 'SRC_DIR'),
  • the destination directory in which the generated code is to be stored (here placeholder 'DST_DIR')
  • and the path to the .proto file.

As you want to generate Java classes, you also use the --java_out option (similar options are also available for the other supported languages). The complete compile command is as follows:
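Using the placeholders from above and the example's addressbook.proto, the call looks like this:

```shell
protoc -I=$SRC_DIR --java_out=$DST_DIR $SRC_DIR/addressbook.proto
```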


A more detailed Protobuf Java tutorial, which explains, among other things, how messages are transmitted via Protocol Buffers (read/write), is offered by Google in the 'Developers' section of its website. There you can also access instructions for the other supported languages such as C++, Go or Python.
