As a prerequisite to this tutorial, check out the Veritone Automate Studio User Guide for more information on using the app.

Welcome to Automate Studio!

This tutorial introduces Automate Studio and provides step-by-step instructions for constructing a simple flow that leverages Veritone's AI platform, aiWARE, and machine learning from our ecosystem of over 30 cognitive capabilities. 

This example demonstrates how to design a flow that creates an HTTP endpoint: the flow listens for a video file transmitted by a browser-based app, automatically processes the file with a transcription cognitive engine, and returns the time-correlated cognitive metadata to the app. 

The Tools You'll Need

1. Free Veritone Automate Studio Account

Are you or your organization new to Automate Studio?

Get started for free with the Community Plan. This plan enables 1 user to test out the app for their use case and includes free lifetime credits to run up to 100 flows and cognitively process $300 worth of data. 

Sign up here to create an account and gain immediate access to the application.

Need more credits or users?

Contact us about a solution tailored to your team's specific needs. During public beta, Business user licenses are free.

Already an Automate Studio user?

Log in to your account here. If you would like to add colleagues to your account, contact us at [email protected] 


2. A Video File to Transcribe

Pick a video file you would like to transcribe - we will ingest it into the Veritone aiWARE platform for use in Automate Studio.

3. Free Postman Account 

Postman is a collaboration platform for API development. Postman's features simplify each step of building an API and streamline collaboration so you can create better APIs—faster.

We'll be using it to test our endpoint along the way. Download the app.
 

Let's Get Started

Step 1: Set up your end goal

Design your flow with the end goal in mind. What problem are you trying to solve and where will the data go?

In this instance, we want to automate the transcription of speech within a video and post this data to a web-based business app our organization uses to further manipulate the video. 

To get started, click and drag the http response node from the Output menu in the node palette into the right side of your workspace. This node will reply to the app's requests with the cognitive metadata containing the time-correlated transcript.

Double-click the node once placed in the workspace to edit its contents in the right sidebar. Notice the node's Information and Node Help documentation in the right-hand menu. Fill in the node properties input fields:

  • Name: Type in a name of your choice; in this case we logically named the node Return Job, as it will reply to the request from the app with the output from the job (the cognitive metadata). 
  • Status code: 201. This code indicates that the request succeeded and a new resource was created as a result, but it can be customized to your needs. Check out more information on HTTP status codes to do this.
  • Headers: The http response node uses the payload property of messages it receives as the body of the response; other properties can be used to further customize the response. This field is blank by default and is not required to use this node, but if set, it provides HTTP headers to include in the response. We will leave it blank for this use case. (A sketch of the full response message follows this list.)
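
For reference, the http response node builds its reply from the message it receives. Below is a minimal sketch of the message object our format response function (built later, in Step 3) will hand to this node; the jobId and jobStatus values are placeholders for illustration:

msg.statusCode = 201;   // HTTP status for the reply
msg.payload = {         // becomes the body of the response
    "jobId": "<jobId>",
    "jobStatus": "<jobStatus>"
};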

Step 2: Select your data

The next step is to select the data that will enter the flow or activate it with text, audio, video, images, and more. 

In this case, we are using the http in node, allowing us to listen for requests from our app for transcription of a video file. Any flow that starts with an http in node must have a path to an http response node; otherwise, requests will time out.

Drag and drop the http in node from the Input menu in the palette into the left side of your workspace.

Double-click the node once placed in the workspace to edit its contents in the right sidebar. Fill in the node properties input fields:

  • Method: Select POST as we will be sending data to the server to update a resource. More information on HTTP request methods.
  • URL: Here you define the resource name; we will use /video for simplicity. (See the example request after this list.)
  • Name: Type in a name of your choice; in this case we logically named the node Video Source, as it will listen to requests from the app to run transcription on video files.
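
Once the flow is deployed (Step 7), the app will POST a JSON body to this endpoint. Here is a minimal sketch of the request body our flow expects, with a placeholder media ID for a file already ingested into aiWARE:

{
  "mediaId": "<yourMediaId>"
}

The format message function we build in Step 3 parses this body and pulls out the mediaId.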

Step 3: Transform your data

The meat of the flow is where you design business logic that transforms your data to get from the input state to the desired output state.

We have three steps here: 1) prep the message payload to be received by the transcription cognitive engine, 2) call the transcription engine, and 3) format the engine response payload.

 
First: Prep the message payload

Here, we format the message payload so the cognitive engine will understand how to execute the request. 

Click and drag the function node from the Function menu in the node palette to your workspace, directly to the right of the Video Source node. 

Double-click the node once placed in the workspace to edit its contents in the right sidebar. 

The function node has a msg.payload property containing the body of the message; here, that is the media ID for the video that was ingested into aiWARE. Fill in the Function field with the code below:

// Parse the JSON request body and surface the media ID for the engine node
msg.payload = JSON.parse(msg.payload);
msg.mediaId = msg.payload.mediaId;

return msg;
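
After this function runs, the message carries both the parsed body and the media ID. Assuming the request body sketched in Step 2, it would look roughly like this:

msg.payload = { "mediaId": "<yourMediaId>" };  // parsed JSON body
msg.mediaId = "<yourMediaId>";                 // surfaced for the engine processing node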

In the Name field, type an intuitive name for the node so you remember what it does; we chose: format message


Second: Call the transcription cognitive engine

Here, we are calling a transcription engine within the Veritone ecosystem to convert speech to text within our video source. 

Click and drag the engine processing node from the aiWARE menu in the palette into the center of your workspace.

Double-click the node once placed in the workspace to edit its contents in the right sidebar. Fill in the node properties input fields:

  • Target ID: Enter the Universally Unique Identifier (UUID) for the file in aiWARE you would like to run transcription on. Check out our tutorial on uploading and processing files in aiWARE.
  • Category: Transcription
  • Engine: DeepAffects - Speech-to-Text. Check out our tutorial on looking up available cognitive engines by cognitive capability in our technical docs here.
  • Name: Type in a name of your choice; in this case we logically named the node Transcription Cognitive Processing, as it will run transcription on video files. (A sketch of the mutation this node runs follows this list.)
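
Under the hood, the engine processing node issues a createJob mutation against the aiWARE GraphQL API. The sample flow at the end of this tutorial uses a mustache template along these lines; the engineId and organizationId shown are the sample flow's values, so yours will differ:

mutation crtJobs{
  createJob(input:{
    targetId:{{mediaId}}
    tasks:[{
      engineId:"c0e55cde-340b-44d7-bb42-2e0d65e98141"
      payload:{
        diarise:false,
        organizationId:17465
      }
    }]
  }){
    id
    status
  }
}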

Third: Format the engine response

Click and drag another function node from the Function menu in the node palette to your workspace, this time directly to the right of the engine processing node. 

Double-click the node once placed in the workspace to edit its contents in the right sidebar.

Not only does the function node have a msg.payload property containing the body of the message, it can also carry a msg.statusCode. Here we set a status code of 201 to serve as the message input for the http response node placed in Step 1. We will also pass along the jobId and jobStatus of the created job.

Fill in the Function field with the code below:

// Set the HTTP status and shape the response body for the Return Job node
msg.statusCode = 201;
msg.payload = {
    "jobId": msg.payload.createJob.id,
    "jobStatus": msg.payload.createJob.status
};
return msg;
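
Assuming the job is created successfully, the app will receive a 201 response with a body along these lines (the values are placeholders):

{
  "jobId": "<jobId>",
  "jobStatus": "<jobStatus>"
}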

In the Name field, type an intuitive name for the node so you remember what it does; we chose: format response


Step 4: Add Debug Nodes

Finally, we want to monitor our flow to make sure it is functioning properly, and if not, troubleshoot the problem. This is not required but highly recommended!

The Debug node can be added to any part of a flow that receives messages, letting you monitor traffic and troubleshoot errors. 

Click and drag the Debug node from the Output menu in the palette into the workspace under your existing Video Source node. 

Double-click the debug node and under the Output dropdown menu, select complete msg object.

Repeat this step for the format message and Transcription Cognitive Processing nodes, placing a second debug node under them. 

Double-click this debug node and, under the Output dropdown menu, select msg.payload.

Once you've deployed and run your flow in a later step, select the bug icon in the top right-hand corner of the Studio to open the debug sidebar. Here, review error messages to troubleshoot any issues that may occur while running your flow.


Step 5: Connect your nodes

Now that you’ve designed your automated process, wire your nodes together to form a flow. 

Click and drag between node ports to wire them together. For this flow, wire Video Source → format message → Transcription Cognitive Processing → format response → Return Job, then connect your debug nodes to the outputs of the Video Source, format message, and Transcription Cognitive Processing nodes. 


Step 6: Define your endpoint

Now we need to define the endpoint we just created so our client or app can make requests of it. For this, we need the URI for our aiWARE account or org, as well as the org's authentication token. 

Query the aiWARE GraphQL API using GraphiQL to find the URI and authentication token:

query{
  workflowRuntime(workflowRuntimeId:"nr7682"){
    uri
    authToken
    success
  }
}

The response returns both the URI and the authentication token:

{
  "data": {
    "workflowRuntime": {
      "uri": "https://nr7682-workflow.aws-stage.veritone.com",
      "authToken": "<authToken">,
      "success": true
    }
  }
}

The endpoint will look like this:

https://nr7682-workflow.aws-stage.veritone.com/video?authToken=<yourAuthToken>

Step 7: Deploy and run your flow

Flows can be run in the Studio workspace or as an automation engine within the aiWARE platform for added scalability. Save your progress and prepare to run your automated business process.

For this use case, we will run our flow in the Automate Studio runtime. Click the arrow on the Deploy button in the top right of the Studio and select Modified Flows from the menu, since we just want to deploy this new flow. Then click the Deploy button.

You should receive a 'Successfully deployed' message to confirm your flow has been deployed with the Automate Studio runtime.
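
Now you can test the endpoint from Postman (or any HTTP client) by sending a POST request with the JSON body from Step 2 to the URL assembled in Step 6. The flow should reply with a 201 status plus the jobId and jobStatus of the transcription job. Per the notes embedded in the sample flow below, you can poll that job ID until the job completes, then retrieve the time-correlated results with this engineResults query (the jobId value is a placeholder):

query {
  engineResults(jobId:"<yourJobId>")
  {
    records{
      tdoId
      startOffsetMs
      stopOffsetMs
    }
  }
}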

Congrats! You’ve built your first flow. 

Want to check your work? Simply import the JSON code below into your workspace to get this flow without building it yourself.

Click the menu icon and select Import > Clipboard.

Once the Import nodes dialog box opens, copy and paste the JSON from the snippet below into the gray input box. Select the blue Import button.

[{"id":"1ff2ff6.c59c501","type":"tab","label":"My First Flow","disabled":false,"info":"## Hiya!\nThis is a sample flow using an http in node that demonstrates how to create custom endpoints around aiWARE cognition\n\n## Overview\n\n## Details\n\n## Dependencies"},{"id":"f33ece96.9114e","type":"comment","z":"1ff2ff6.c59c501","name":"Data Input: Media File info","info":"","x":151,"y":65,"wires":[]},{"id":"804bdd96.6cf9b","type":"comment","z":"1ff2ff6.c59c501","name":"Transform: Logic & ML to Process the video","info":"","x":563,"y":49,"wires":[]},{"id":"522dfc43.5f7fb4","type":"comment","z":"1ff2ff6.c59c501","name":"Output: Where the Transcript Will Go","info":"","x":1021,"y":59,"wires":[]},{"id":"783cff55.3d324","type":"http in","z":"1ff2ff6.c59c501","name":"Video Source","url":"/video","method":"post","upload":false,"swaggerDoc":"","x":110,"y":202.22222900390625,"wires":[["35699e7c.82b212","7827187c.177178"]]},{"id":"708968b5.c073f8","type":"debug","z":"1ff2ff6.c59c501","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","x":566.888916015625,"y":113.77777099609375,"wires":[]},{"id":"35699e7c.82b212","type":"function","z":"1ff2ff6.c59c501","name":"format message","func":"msg.payload = JSON.parse(msg.payload);\nmsg.mediaId = msg.payload.mediaId\n\n\nreturn msg;","outputs":1,"noerr":0,"x":340.6666717529297,"y":202.72222900390625,"wires":[["708968b5.c073f8","ec95866c.33a5e8"]]},{"id":"7827187c.177178","type":"debug","z":"1ff2ff6.c59c501","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"true","x":147.00000762939453,"y":258.8888702392578,"wires":[]},{"id":"71f2ec7a.06a694","type":"comment","z":"1ff2ff6.c59c501","name":"API Input","info":"In this sample flow, we have an http input that creates the resource a requesting client can ping\nSteps\n- Retrieve the endpoint details using the workflowRuntime query\n - uri\n - authToken\n- The constructed endpoint should look like this:\nhttps://nr<orgId>-workflow.aws-prod.veritone.com/video?authToken=<token>\n","x":128.75,"y":341.5,"wires":[]},{"id":"7e1350f0.0f2ce","type":"comment","z":"1ff2ff6.c59c501","name":"Send Job Id and Status back!","info":"This returns the JobId value back that you can query and poll until the job completes.\n\nOnce the job completes, you can query the engineResults with this query:\n\nquery {\n  engineResults(jobId:\"\")\n  {\n    records{\n      tdoId\n      startOffsetMs\n      stopOffsetMs\n    }\n  }\n}\n","x":1050.75,"y":353.5,"wires":[]},{"id":"e5ad23d4.b3843","type":"http response","z":"1ff2ff6.c59c501","name":"Return Job","statusCode":"201","headers":{},"x":1033,"y":197,"wires":[]},{"id":"3f28d549.eeae4a","type":"function","z":"1ff2ff6.c59c501","name":"format response","func":"msg.statusCode = 201;\nmsg.payload = {\n    \"jobId\":msg.payload.createJob.id,\n    \"jobStatus\":msg.payload.createJob.status\n}\nreturn msg;","outputs":1,"noerr":0,"x":776,"y":198,"wires":[["e5ad23d4.b3843"]]},{"id":"ec95866c.33a5e8","type":"aiware","z":"1ff2ff6.c59c501","name":"transcription","format":"handlebars","syntax":"mustache","template":"mutation crtJobs{\n  createJob(input:{\n    targetId:{{mediaId}}\n    tasks:[{\n      engineId:\"c0e55cde-340b-44d7-bb42-2e0d65e98141\"\n      payload:{\n        diarise:false,\n        organizationId:17465\n      }\n    }]\n  }){\n    id\n    status\n  }\n}","x":546.5,"y":201,"wires":[["3f28d549.eeae4a","708968b5.c073f8"],[]]},{"id":"d7f70c08.bc768","type":"comment","z":"1ff2ff6.c59c501","name":"Process the 
media!","info":"This graphQL api node accepts Veritone's graphql queries and mutations and accepts variables to be dynamically passed through with the mustache` syntax","x":574.5,"y":329,"wires":[]}]

My First Flow will pop up as a new flow tab in your workspace. Enjoy!

Additional Resources

Build something custom or get started with the pre-built flows already in your workspace. 

Check out the Node-RED Community Cookbook for more flows that create HTTP endpoints, and the full Node-RED Library for more pre-built flows and nodes.

Need help or have a question? Check out more Veritone Automate Studio resources.
