This topic describes how to set up an HTTP endpoint as a Pipeline Source, and then process and inspect the endpoint data before sending it to a destination, such as Mezmo Log Analysis, or AWS S3 Storage.
Set Up the HTTP Endpoint Source
Log in to the Mezmo Web App.
In the left-hand menu, click Pipelines. This will open the Edit Pipeline interface.
In the left-hand navigation, click + New Pipeline.
For the Pipeline Name, enter HTTP Inspect, then click Save.
At the bottom of the Pipeline map, click the Add Source button, then select HTTP Source.
HTTP Source Configuration Settings
After you add the HTTP Source to your Pipeline Map, click the node to open the configuration settings:
| Setting | Value |
| --- | --- |
| Title | A name for your source. |
| Description | A short description of the source. |
| Decoding Method | The method used to decode incoming data. |
| Access Key Management | Make sure to copy the Access Key and note the Ingestion URL, which should resemble `https://pipeline.mezmo.com/v1/%YOUR PIPELINE ID%`. |
Click Update to finish configuring the HTTP endpoint, and then click Deploy Pipeline.
You will probably see a message warning you about unconnected nodes in your Pipeline. You can safely ignore it at this stage of building your Pipeline.
Send Test Data to Your Source
Now that you've deployed your pipeline, the next step is to send data into it via the Ingestion Endpoint. This example uses a basic cURL command to send test data to the endpoint, but you could also use an API tool like Postman if you prefer.
Before you send the data, use a Pipeline Tap to monitor the HTTP source and confirm that it is receiving data.
Click the Tap button on the HTTP source.
A loading animation will play with the message "Waiting for events to arrive"; you won't see any data yet.
Open a terminal and enter this cURL command, substituting your Access Key and the Pipeline ID from your Ingestion Endpoint in the previous step.
$ curl -H "Content-Type: application/json" -H "Authorization: %YOUR TOKEN HERE%" -X POST -d '{"Hello":"World"}' https://pipeline.mezmo.com/v1/%YOUR PIPELINE ID%
When the command executes, you should see a `{"status":"ok"}` response in your terminal, and `Hello World` appear in your Pipeline Tap.
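If you'd rather script the test from Python than use cURL or Postman, a minimal sketch of building the same request looks like this. The helper name and placeholder values are illustrative, not part of any Mezmo SDK; the URL and headers mirror the cURL command above.

```python
import json

def build_ingest_request(token, pipeline_id, event):
    """Return the URL, headers, and JSON body for one test event."""
    url = f"https://pipeline.mezmo.com/v1/{pipeline_id}"
    headers = {
        "Content-Type": "application/json",
        "Authorization": token,  # your Access Key from the source settings
    }
    body = json.dumps(event)
    return url, headers, body

# Placeholders only -- substitute your real Access Key and Pipeline ID.
url, headers, body = build_ingest_request(
    "YOUR-ACCESS-KEY", "YOUR-PIPELINE-ID", {"Hello": "World"}
)
# You could then send it with the standard library or the requests package:
#   requests.post(url, headers=headers, data=body)
```

Keeping the request construction separate from the send makes it easy to reuse the same helper for the later test payloads in this topic.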
Processors
Now that your pipeline is accepting data, the next step is to add Processors to it. There are many ways to process log data, but for this example you will use the Remove Fields Processor to remove a specific field. This is a typical use case if you want to reduce the amount or type of data you send to a storage location like Amazon S3, or if you want to send a limited amount of data to a tool like Mezmo Log Analysis.
Start by clicking on Edit Pipeline if you're not already in this view.
To add a processor, click the Add Processor button at the bottom of the pipeline map.
Select the Remove Fields Processor.
Provide a Title and Description for the node, such as Sensitive Data and Drop sensitive data fields.
For Fields, enter `.sensitive`. The leading `.` notation designates the configuration object as a field. You can find more information in Pipeline Event Data Model on how to designate fields, sub-fields, and string and numeric values.
Save the Processor.
To connect the Processor and Source, hover your cursor over the right side of the Source node until a blue circle appears, then click the circle and draw a Connector between the Source and Processor.
Click Deploy Pipeline. You can again ignore the Unconnected Node warning.
Open the Pipeline Tap window for the Processor.
Go back to your cURL terminal and send this command:
$ curl -H "Content-Type: application/json" -H "Authorization: %YOUR TOKEN HERE%" -X POST -d '{"Hello": "World", "sensitive": "data","was": "here"}' https://pipeline.mezmo.com/v1/%YOUR PIPELINE ID%
In the Pipeline Tap window you should see {"Hello":"World","was":"here"}.
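To see locally why the `sensitive` field disappears from the Tap output, here is a small Python sketch of dot-notation field removal. This is an illustration of the behavior, not Mezmo's actual implementation; the function name and the nested-field handling are assumptions for demonstration.

```python
def remove_field(event: dict, field_path: str) -> dict:
    """Remove a dot-notation field such as '.sensitive' from an event."""
    # Strip the leading dot and split on dots to support sub-fields
    # like '.user.email'.
    parts = field_path.lstrip(".").split(".")
    result = dict(event)  # shallow copy; the original event is untouched
    target = result
    for part in parts[:-1]:
        if not isinstance(target.get(part), dict):
            return result  # path not present; nothing to remove
        target[part] = dict(target[part])
        target = target[part]
    target.pop(parts[-1], None)
    return result

event = {"Hello": "World", "sensitive": "data", "was": "here"}
print(remove_field(event, ".sensitive"))
# → {'Hello': 'World', 'was': 'here'}
```

The output matches what the Pipeline Tap shows after the Remove Fields Processor runs on the test payload above.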
Destinations
Now that you've set up an HTTP endpoint source and a Processor, you could send the processed data to any number of Destinations depending on your specific use case.