In the Labelf interface, go to the model you want to use and look for the model ID. In this example it is 1968.
Step 2: Make a request
We are now ready to make a request to Labelf's API. It is a standard HTTP REST interface, so it should feel familiar.
We are going to use the endpoint https://api.app.labelf.ai/v2/models/{model_id}/inference to run inference with the model. To call the API, add the bearer token we generated previously to the Authorization header. You also need to specify which model to run inference with and which texts to perform inference on. A maximum of 8 texts is allowed per call.
import requests
# Replace with your bearer token
bearer_token = ""
# Replace with your model ID
model_id = 0
# Replace texts with your own texts, max 8 text items in the texts array
json_data = {
    'texts': [
        'Breakfast was not tasty',
    ],
    # Optional: maximum number of label predictions returned per text
    'max_predictions': 2,
}
headers = {
    'Authorization': f'Bearer {bearer_token}',
}
response = requests.post(f'https://api.app.labelf.ai/v2/models/{model_id}/inference', headers=headers, json=json_data)
print(response.json())
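Because the endpoint accepts at most 8 texts per call, larger datasets need to be split into batches. The sketch below shows one way to do that with the same `requests` call as above. The helper names (`chunks`, `classify_texts`) are our own, and extending `results` with `response.json()` assumes the API returns one prediction entry per text, which is not spelled out here.

```python
import requests

API_URL = 'https://api.app.labelf.ai/v2/models/{model_id}/inference'

def chunks(items, size=8):
    """Yield successive batches of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def classify_texts(texts, bearer_token, model_id):
    """Run inference on any number of texts, 8 at a time."""
    headers = {'Authorization': f'Bearer {bearer_token}'}
    results = []
    for batch in chunks(texts, 8):
        response = requests.post(
            API_URL.format(model_id=model_id),
            headers=headers,
            json={'texts': batch},
        )
        response.raise_for_status()  # surface auth or quota errors early
        # Assumption: the response body is a list of per-text predictions
        results.extend(response.json())
    return results
```

Calling `classify_texts(my_texts, bearer_token, model_id)` then returns the combined predictions for all batches.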
fetch('https://api.app.labelf.ai/v2/models/YOUR_MODEL_ID/inference', {
    method: 'POST',
    headers: {
        'Authorization': 'Bearer YOUR_BEARER_TOKEN',
        'Content-Type': 'application/json'
    },
    // Replace texts with your own texts, max 8 text items in the texts array
    body: JSON.stringify({
        'texts': [
            'Breakfast was not tasty'
        ],
        'max_predictions': 2
    })
})
    .then(response => response.json())
    .then(data => console.log(data));
curl --location --request POST 'https://api.app.labelf.ai/v2/models/YOUR_MODEL_ID/inference' \
--header 'Authorization: Bearer YOUR_BEARER_TOKEN' \
--header 'Content-Type: application/json' \
--data-raw '{ "texts": ["Breakfast was not tasty"], "max_predictions": 2 }'
Step 3: You have now implemented your own AI model!
We can't wait to see what you have built, and we would love for you to join our Discord and discuss your implementation with us and others!