By default the list APIs return 30 objects per page. If the number of objects is larger than 30, the results are paginated and you'll have to traverse each page to get the complete data. You can also increase the number of objects returned per page from 30 to a maximum of 100 by including the per_page parameter. Please have a look at our API documentation available here.
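For anyone unsure how per_page is passed: it's just a query-string parameter on the list endpoint. Here's a minimal sketch using Python's requests library, with a made-up desk URL (substitute your own); it only builds the URL and doesn't actually call the API.

```python
import requests

# Hypothetical desk URL -- replace with your own Freshdesk domain
desk = "https://yourcompany.freshdesk.com"

# Prepare (but don't send) a request asking for 100 tickets per page
req = requests.Request(
    "GET",
    desk + "/api/v2/tickets",
    params={"per_page": 100, "page": 1},
).prepare()
print(req.url)  # per_page and page are appended as a query string
```

With per_page=100 you'd still need pagination beyond 100 objects, just with fewer round trips.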
4 months ago
Can I then extract more than 300 tickets through API calls, using the per_page parameter?
4 months ago
@Giacomo, yes, you'll just need to loop through the pagination. The headers of the response from the API include an item called 'link', which contains the URL of the next page of your results. I'm using Python, and I used the loop below to check for a subsequent page and re-query the API until the results are all in. You might also want to add the per_page parameter to your queries to increase the number of results in each response.
Hope that helps!
#this queries the API for CSAT results, checks if the response is paginated, and queries again for following pages
import requests as r
import re #used to pull the next page's URL out of the 'link' header
desk = "your support desk's URL"
API_key = "your API key"
password = "your password" #note: 'pass' is a reserved word in Python, so use another name
#make your query
query = r.get(desk+"/api/v2/surveys/satisfaction_ratings?created_since=2019-03-25T00:00:00Z", auth=(API_key, password))
#convert the content to a Python list. This is your first page of results (30 by default)
CSAT = query.json()
#the results may be paginated. The next page's link is provided in the response headers, but we don't know how many pages there will be
#loop through to check for pagination and add each page to the CSAT list
count = 1 #counter for the printed summary later on
while 'link' in query.headers: #check if there are more pages available
    l = re.search(r'<(.*?)>', query.headers['link']).group(1) #grab the next page's URL from the previous response
    query = r.get(l, auth=(API_key, password)) #query again using the new URL
    CSAT = CSAT + query.json() #add the new set of results to the existing list
    count += 1 #carry on
print("Total", count, "pages totalling", len(CSAT), "CSAT entries") #no more pages; print a summary of what was retrieved
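As a side note, you don't strictly need the regex: requests ships a helper that parses Link-style headers into a list of dicts (it's the same parsing behind response.links). A minimal sketch, assuming the header has the usual `<url>; rel="next"` shape (the example URL below is made up):

```python
import requests

# Example 'link' header value of the form Freshdesk returns (hypothetical URL)
link_header = '<https://yourcompany.freshdesk.com/api/v2/tickets?page=2>; rel="next"'

# parse_header_links returns a list of dicts with 'url' and 'rel' keys
parsed = requests.utils.parse_header_links(link_header)
next_url = parsed[0]["url"]
print(next_url)  # the bare URL, ready to pass to the next r.get(...) call
```

This avoids hand-rolling the angle-bracket regex and copes with any extra parameters in the header.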
3 months ago
I have to clean up 4000 pieces of spam due to Freshdesk's poor security and lack of spam filters.
How do I list all tickets so I can move them into spam without having to do this 30 at a time?