Integrating with HTTP Request Endpoint

Written by Edward Hu

For each AI Project, the system will provide one endpoint for the HTTP Request integration:

https://payload.vextapp.com/hook/${endpoint_id}/catch/${channel_token}

Replace ${channel_token} with your own token; a user ID or another unique identifier is recommended.

For example, when a query is posted, a custom function generates a new token "hello" for the endpoint:

https://payload.vextapp.com/hook/${this will be assigned to you}/catch/hello
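As a sketch of that convention, the endpoint URL can be assembled from a user ID in JavaScript. The buildEndpointUrl helper below is illustrative, not part of the API:

```javascript
// Illustrative helper (not part of the Vext API): build the endpoint URL
// from your assigned endpoint ID and a per-user channel token.
function buildEndpointUrl(endpointId, channelToken) {
  return `https://payload.vextapp.com/hook/${endpointId}/catch/${channelToken}`;
}

// Using a user ID as the channel token keeps each user's conversation separate.
const url = buildEndpointUrl("<your_endpoint_id>", "user_42");
```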

Calling The API

In order to send a POST request to the assigned endpoint, you will have to generate an API key first. To learn how to generate an API key, check out this article. You can also check out the API reference.

The interface will provide a simple curl example for your reference:

curl -X POST \
-H 'Content-Type: application/json' \
-H 'Apikey: Api-Key <API_KEY>' \
-d '{
  "payload": "{your_message_here}"
}' 'https://payload.vextapp.com/hook/${endpoint_id}/catch/${channel_token}'

The exact code and request method may vary depending on the language you use.

POST request JavaScript example

const response = await fetch(`https://payload.vextapp.com/hook/${endpointId}/catch/${channelToken}`, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Apikey": `Api-Key ${API_KEY}`
  },
  body: JSON.stringify({
    payload: yourMessageHere
  })
});

Result Example

{
  "text": "Hello! How can I assist you today?",
  "citation": {
    "vector_id": "..."
  }
}

For the full return structure, please check out the API reference.
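A minimal sketch of reading the answer and citation out of a parsed response (field names taken from the example above; extractAnswer is an illustrative helper, not part of the API):

```javascript
// Illustrative helper: read the answer text and citation from a parsed
// response object shaped like the example above.
function extractAnswer(data) {
  return {
    answer: data.text,
    vectorId: data.citation ? data.citation.vector_id : null,
  };
}
```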

Long Polling

If you have a longer pipeline and are experiencing timeouts, you might want to consider the long polling method.

You will use the exact same endpoint mentioned above, but this time add one more parameter to the request body: long_polling (boolean):

curl -X POST \
-H 'Content-Type: application/json' \
-H 'Apikey: Api-Key <API_KEY>' \
-d '{
  "payload": "{your_message_here}",
  "long_polling": true
}' 'https://payload.vextapp.com/hook/${endpoint_id}/catch/${channel_token}'

Result Example

{
  "text": {
    "request_id": "{system_returned_request_id}"
  }
}

Important: You will have to store this request_id for later use.
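In JavaScript, the kickoff step might look like the sketch below. startLongPolling and its parameters are illustrative, not part of the API; fetchImpl is injectable only so the function can be exercised without a live endpoint:

```javascript
// Illustrative sketch: send the initial long-polling request and return the
// request_id that must be stored for the later status checks.
async function startLongPolling(message, { endpointUrl, apiKey, fetchImpl = fetch }) {
  const res = await fetchImpl(endpointUrl, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Apikey": `Api-Key ${apiKey}`,
    },
    // long_polling: true tells the endpoint to return a request_id immediately
    body: JSON.stringify({ payload: message, long_polling: true }),
  });
  const data = await res.json();
  return data.text.request_id; // keep this for the status-check requests
}
```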

The LLM pipeline is now triggered, but you no longer have to keep the request open while waiting for the result. Instead, call the same endpoint with just the request_id you stored earlier to check the status:

curl -X POST \
-H 'Content-Type: application/json' \
-H 'Apikey: Api-Key <API_KEY>' \
-d '{
  "request_id": "{system_returned_request_id}"
}' 'https://payload.vextapp.com/hook/${endpoint_id}/catch/${channel_token}'

Result Example

{
"text": "processing on action number #1"
}

While the job is still running, the system will respond with a 202 status code and a message similar to the one above.

You can keep looping this action until it gives you a generated answer:

{
  "text": "Hello! How can I assist you today?",
  "citation": {
    "vector_id": "..."
  }
}

This means the job for this particular request is finished. If you call the endpoint again with the same request_id, you will get a 400 error.
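Putting the loop together, here is a hedged JavaScript sketch of the status-check cycle. pollForAnswer and its options are illustrative, not part of the API; fetchImpl is injectable so the loop can be exercised without a live endpoint:

```javascript
// Illustrative sketch: poll the endpoint with a stored request_id until the
// pipeline finishes. A 202 status means the job is still processing.
async function pollForAnswer(requestId, { endpointUrl, apiKey, fetchImpl = fetch, intervalMs = 2000, maxAttempts = 30 }) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await fetchImpl(endpointUrl, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Apikey": `Api-Key ${apiKey}`,
      },
      body: JSON.stringify({ request_id: requestId }),
    });
    if (res.status === 202) {
      // Still processing, e.g. { "text": "processing on action number #1" }
      await new Promise((resolve) => setTimeout(resolve, intervalMs));
      continue;
    }
    // Finished: the body contains the generated answer (text, citation, ...)
    return res.json();
  }
  throw new Error("Timed out waiting for the long-polling result");
}
```

Waiting a couple of seconds between checks (intervalMs) avoids hammering the endpoint while the pipeline runs.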
