Jobfeed API

The Jobfeed API provides developers with real-time access to the Jobfeed data. By making the data from the richest job database in Europe accessible via an API, Textkernel aims to promote new uses of Jobfeed's big data capabilities through the development of third-party applications.

Technically, this is an API that allows you to search and group Jobfeed jobs. It is accessible via HTTP GET requests. You must send your Jobfeed username and password with every request via Basic Authentication.


In order to search for jobs you should access

You will get a JSON response consisting of an object with two properties: total_count (the total number of the results in the set) and results (an array of job objects). By default you will get a maximum of 10 job objects per request.
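As a minimal sketch, such a request can be built with Python's standard library. The base URL below is a placeholder (the real Jobfeed endpoint is not shown in this guide), and the credentials are illustrative:

```python
import base64
import urllib.request

# Placeholder base URL -- substitute the actual Jobfeed search endpoint.
BASE_URL = "https://api.example.com/search"

def build_request(username: str, password: str, url: str = BASE_URL) -> urllib.request.Request:
    """Build a GET request carrying the HTTP Basic Authentication header."""
    credentials = base64.b64encode(f"{username}:{password}".encode()).decode()
    req = urllib.request.Request(url, method="GET")
    req.add_header("Authorization", f"Basic {credentials}")
    return req

# The response body would be the JSON object with total_count and results.
req = build_request("alice", "s3cret")
```

Sending the request (for example with `urllib.request.urlopen(req)`) returns the JSON described above; parsing it with `json.load` gives you `total_count` and the `results` array.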


For some fields that have codes as values, the response contains not only the code but also a string description of it. For example:

[…] "education_level": { "value": 8, "label": "Description of code 8 here" } […]

When filtering by a certain education level you'd use the value (8 in the example above), not the label.

You can turn off labels completely by adding a _labels=0 parameter to the URL. That would make the example above look like: "education_level": 8.

If you want the labels in a specific language you can add a _language parameter. The value will be a two-letter language code:

If the labels are not translated in the requested language then a country-specific default language will be used.

In order to filter the results you would use GET parameters. For example, to get all the jobs for which the profession field is 227 you would access:

If you want to pass multiple values for the same field (for example, say you want to look for jobs for which profession is either 227 or 4980) you have to suffix the field name with []: profession[]=227&profession[]=4980
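A repeated-key query string like this can be produced with `urllib.parse.urlencode` by passing a list of pairs; note that the brackets themselves get percent-encoded, which is the correct wire format:

```python
from urllib.parse import urlencode

# One field, several values: repeat the key with the [] suffix.
params = [("profession[]", 227), ("profession[]", 4980)]
query = urlencode(params)
# "[" and "]" are percent-encoded as %5B and %5D
```

The resulting string is `profession%5B%5D=227&profession%5B%5D=4980`, which the server decodes back to the bracketed form.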

You can set filters on multiple fields. For example, to get all the jobs for which profession is 4980 and education_level is 2 you can access:

To filter on a boolean field (for example, via_intermediary) you can pass 1 to represent true and 0 to represent false:

For the profession and location fields value normalization is also performed. That means that instead of the code you can use a string value. For example: Programmer

or Haag

You should never filter on location_name or location_coordinates. The results may be unexpected if you do. Always use location instead.

Advanced filtering

You can filter based on the existence or non-existence of a field by using the special values _exists and _not_exists:

For the location field you can also filter using a radius around a central point by suffixing the field name with __radius. For example, to search 25 kilometers around Amsterdam you would access:

or using our code for Amsterdam (1000):

You can also use actual geographical coordinates instead of a city:,4.32__10

You can do a range search on a field by suffixing the field name with __range. For example, to search for jobs for which education_level is between 3 and 8 (inclusive) you would access:

You can specify open-ended ranges by omitting either the start or the end (but not both): 3__ (greater than or equal to 3) or __8 (less than or equal to 8).

For date ranges (date and expiration_date fields) you can use calculated values. To search for jobs that have the date field between 1 year ago and now you can use:

Units supported are y (year), M (month), w (week), d (day), h (hour), m (minute) and s (second). Each expression has to start with an anchor date which can be either now, or a date suffixed with ||. More examples: now-1h, now+1h+1m, 2014-03-01||+1w.

You can do an expression search on a field by suffixing the field name with __exp. This would allow you, among other things, to:

  • use boolean expressions like: /search?full_text__exp=(transport OR auto) AND diesel
  • use phrase searches: /search?full_text__exp="compleet aanbod" – the words need to be right next to each other
  • use fuzzy searches: /search?full_text__exp=colour~ – it will match "color" too; you can specify an acceptable edit distance like: colour~2; a better match means a higher rank for the document
  • use proximity searches: /search?full_text__exp="junior manager"~3 – similar to the fuzzy search, but on a word level; the closer the words are to each other the higher the rank
  • use wildcard searches: /search?full_text__exp=auto*

For text fields __exp is implicit. It doesn't matter if you add it or not, the code will behave as if you did.

NB: you can only use one of __radius, __range, __exp at a time. You can't do something like location__radius__range__exp.

You can negate any filter by suffixing it with __not. For example, you can look for jobs that don't have education_level 5:

You can also negate radius, range or expression searches:

NB: In order to improve readability we didn't urlencode the parameters in this guide. You must do it in your application (or, better yet, build the URLs using a library that does it automatically) because otherwise the service will see a value like now+1h as now 1h and return an error.
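To illustrate the encoding issue, `urllib.parse.urlencode` percent-encodes both `+` and `|`, so a calculated date expression survives the round trip instead of arriving with its `+` turned into a space:

```python
from urllib.parse import urlencode

# Without encoding, the "+" in the expression would be decoded as a space
# by the server; urlencode turns it into %2B (and "|" into %7C).
query = urlencode({"date__range": "2014-03-01||+1w__now"})
```

The result is `date__range=2014-03-01%7C%7C%2B1w__now`, which the service decodes back to the original expression.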

Control parameters

You can use some special parameters in order to control what is being returned. All the control parameters start with an underscore to distinguish them from the filters.

To get a certain number of results (other than the default of 10) you can use the _limit parameter:

To skip a certain number of jobs from the top of the result set you can use _offset. Say you want to skip the first 50 jobs and start from the 51st:

You can use _limit and _offset together in order to paginate the results.
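Pagination reduces to computing an offset from a page number. The helper below is a hypothetical convenience, not part of the API; it just emits the `_limit`/`_offset` pair for a 1-based page number:

```python
def page_params(page, per_page=10):
    """Return _limit/_offset parameters for a 1-based page number
    (hypothetical helper; the API itself only sees the two parameters)."""
    if page < 1:
        raise ValueError("pages are numbered from 1")
    return {"_limit": per_page, "_offset": (page - 1) * per_page}
```

For example, `page_params(6)` yields an offset of 50, matching the skip-the-first-50-jobs example above.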

You can use _sort (with a field name as a value) and _sortdir (either asc or desc) to order the results:

If you want to get a subset of the fields you can use the _fields parameter:,job_title,profession


You do grouping by accessing:

If you access the URL above directly you'll get an error message. All the requests to this endpoint must have a _group parameter which specifies the field to group by:


Filtering works exactly as it does for the /search endpoint:

Control parameters

We already mentioned the _group parameter which has to be present in all the /aggregate requests. The value will be a field name.

You can limit the number of "buckets" using _limit.

By default you get the document count for each bucket, but you can change that using the _metric parameter. The value passed to this parameter should be in the operation__field format. The supported operations are min, max, sum, avg and median:

It is also possible to get both document counts and a metric at the same time:,count

NB: This does not mean that you can request multiple arbitrary metrics at the same time. The only extra one you can ask for is count.

There is one exception to the operation__field rule: _metric=count_postings will return counts of job postings (i.e., duplicate postings are all included).
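Putting the aggregate parameters together, an average-salary-per-industry request (field names taken from the multi-field example later in this guide) would carry a query string like this:

```python
from urllib.parse import urlencode

# _metric follows the operation__field format; _group is mandatory
# for the /aggregate endpoint.
query = urlencode({"_group": "organization_industry", "_metric": "avg__salary"})
```

This yields `_group=organization_industry&_metric=avg__salary`, to be appended to the /aggregate URL.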

For sorting you can use _sort and _sortdir. For _sort you have to use one of two special values: _group which sorts by the group name and _value which sorts by the aggregated value (job count, or the specified metric).

You can use the _labels and _language parameters. They have the same meaning as for /search.

If you group by a date field (date or expiration_date) you can suffix the field name with __week, __month, __quarter or __year in order to group by that time interval:

NB: If you group by a time interval _limit won't have any effect.

Multi-field aggregation

You can specify a comma-separated list of fields to group by. For example:,education_level

The jobs will be grouped in "buckets" based on the value of the first field (in the example above organization_industry), then each of those top-level buckets will be further split into smaller buckets based on the second field (education_level), and so on.

You can specify a limit for each level of aggregation. If you group by three fields you could have _limit=10,5,3. You will then get at most 10 buckets based on the first field, which will be further divided into at most 5 buckets based on the second field, which will be divided into at most 3 buckets based on the third field. You can omit limits for some aggregation levels, in which case the default of 10 will be applied (except when grouping by time interval): _limit=,5,3 or _limit=10,,3 etc.

Similarly, you can also give multiple values for _sort and _sortdir. As with _limit, you can omit values for certain aggregation levels. If using _metric, sorting by _value will only apply to the last aggregated field; for other fields, sorting by _value always sorts by the number of jobs in the buckets, not by the actual value of _metric. For example:,education_level&_metric=avg__salary&_sort=_value,_value&_sortdir=asc,desc

This will group the jobs by region, it will order those groups ascending by size (number of jobs in each), it will split them further by education level and will sort each education level bucket within its parent region bucket descending by average salary.
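The per-level comma-joined parameters can be assembled programmatically. `aggregate_params` below is a hypothetical helper (not part of the API): it joins one value per aggregation level, leaving a level empty to fall back to its default, exactly as _limit=,5,3 does:

```python
def aggregate_params(groups, limits=None, sorts=None, sortdirs=None):
    """Build /aggregate control parameters (hypothetical helper).

    Each list holds one value per aggregation level; None leaves that
    level at its default, producing an empty slot in the joined string.
    """
    def join(values):
        return ",".join("" if v is None else str(v) for v in values)

    params = {"_group": ",".join(groups)}
    if limits is not None:
        params["_limit"] = join(limits)
    if sorts is not None:
        params["_sort"] = join(sorts)
    if sortdirs is not None:
        params["_sortdir"] = join(sortdirs)
    return params
```

For instance, `aggregate_params(["organization_region", "education_level"], sorts=["_value", "_value"], sortdirs=["asc", "desc"])` reproduces the region/education-level example above.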


You can get a list of the available fields by accessing

Fields with a fixed list of values will have a possible_values property that enumerates those values. You can use the _language parameter (see /search above) to get the values translated into a specific language.

By default you get only the fields you have access to. If you want to get all the fields you can use the _get_all parameter:


For every request you can have a _pretty=1 parameter that will cause the output JSON to be pretty-printed. Do not use this in production.