# API Reference¶

This page is the core API reference for the Heedy Python client.

## Objects¶

class heedy.objects.objects.Object(objectData, session)[source]

Bases: heedy.base.APIObject

property app
delete(**kwargs)

Deletes the object

property kv
property meta
notify(*args, **kwargs)
property owner
props = {'access', 'description', 'icon', 'key', 'meta', 'name', 'owner_scope', 'tags'}

update(**kwargs)[source]

o.update(name="My new name",description="my new description")

class heedy.objects.objects.ObjectMeta(obj)[source]

Bases: object

ObjectMeta is a wrapper class that makes metadata access more pythonic, allowing simple updates such as:

o.meta.schema = {"type":"number"}
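The attribute-style access shown above can be sketched with a minimal wrapper class. This is an illustrative stand-in, not heedy's actual ObjectMeta implementation, which would also synchronize each update with the server:

```python
class MetaSketch:
    """Illustrative stand-in for ObjectMeta: attribute writes update a dict.

    The real ObjectMeta would also push each update to the heedy server.
    """

    def __init__(self, data):
        # Store the backing dict without triggering __setattr__
        object.__setattr__(self, "_data", dict(data))

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails
        try:
            return self._data[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        # Attribute assignment becomes a metadata update
        self._data[name] = value


meta = MetaSketch({"schema": {}})
meta.schema = {"type": "number"}  # updates the underlying metadata dict
```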

property cached_data
delete(*args)[source]
update(**kwargs)[source]

Update the given elements of object metadata

class heedy.objects.objects.Objects(constraints, session)[source]

Bases: heedy.base.APIList

create(name, meta={}, type='timeseries', **kwargs)[source]

Creates a new object of the given type (timeseries by default).

heedy.objects.registry.getObject(objectData, session)[source]

Heedy allows multiple different object types. getObject uses the registered object type to initialize the given data to the correct class. If the object is of an unregistered type, it returns an instance of the base Object class.

heedy.objects.registry.registerObjectType(objectType, objectClass)[source]

registerObjectType allows external libraries to implement object types available through heedy plugins. All you need to do is subclass Object, and register the corresponding type!

Return type

None
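The registry pattern behind getObject and registerObjectType can be sketched in plain Python. This is a standalone illustration; the class names below shadow heedy's and are not its actual implementation:

```python
# Minimal sketch of an object-type registry, mapping type names to classes.
_registry = {}


class Object:
    """Stand-in for the base Object class."""

    def __init__(self, objectData):
        self.data = objectData


class Timeseries(Object):
    """Stand-in for a registered object type."""


def registerObjectType(objectType, objectClass):
    _registry[objectType] = objectClass


def getObject(objectData):
    # Fall back to the base Object class for unregistered types
    cls = _registry.get(objectData.get("type"), Object)
    return cls(objectData)


registerObjectType("timeseries", Timeseries)
ts = getObject({"type": "timeseries", "name": "steps"})
```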

### Timeseries¶

class heedy.objects.timeseries.DatapointArray(data=[])[source]

Bases: list

The DatapointArray is a convenience wrapper on data returned from timeseries. It adds a bit of extra functionality to make working with timeseries data simpler.

append(object, /)

Append object to the end of the list.

clear()

Remove all items from list.

copy()

Return a shallow copy of the list.

count(value, /)

Return number of occurrences of value.

d()[source]

Returns just the data portion of the datapoints as a list

dt()[source]

Returns just the durations of all datapoints.

extend(iterable, /)

Extend list by appending elements from the iterable.

index(value, start=0, stop=9223372036854775807, /)

Return first index of value.

Raises ValueError if the value is not present.

insert(index, object, /)

Insert object before index.

load(filename)[source]

Adds the data from a JSON file. The file is expected to be in datapoint format:

d = DatapointArray().load("myfile.json")


Can be used to read data dumped by write().

mean()[source]

Returns the mean of the data portions of all datapoints in the array

merge(array)[source]

Adds the given array of datapoints to the generator. It assumes that the datapoints are formatted correctly for heedy, meaning that they are in the format:

[{"t": unix timestamp, "d": data}]


The data does NOT need to be sorted by timestamp - this function sorts it for you
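What merge does with unsorted input can be sketched with plain Python on the heedy datapoint format:

```python
# Two datapoint arrays in heedy's [{"t": timestamp, "d": data}] format.
# Note that `a` is not sorted by timestamp.
a = [{"t": 30, "d": 2}, {"t": 10, "d": 1}]
b = [{"t": 20, "d": 5}]

# Combine and order by timestamp, as merge() does internally.
merged = sorted(a + b, key=lambda dp: dp["t"])
# merged is now ordered t=10, 20, 30
```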

pop(index=-1, /)

Remove and return item at index (default last).

Raises IndexError if list is empty or index is out of range.

raw()[source]

Returns the data as a plain Python list, for cases where the DatapointArray wrapper does not work for you

remove(value, /)

Remove first occurrence of value.

Raises ValueError if the value is not present.

reverse()

Reverse IN PLACE.

sort(f=<function DatapointArray.<lambda>>)[source]

Sorts the datapoints, by timestamp by default. A custom key function f may be given.

sum()[source]

Returns the sum of the data portions of all datapoints in the array

t()[source]

Returns just the timestamp portion of the datapoints as a list. The timestamps are returned as Python datetime objects.

to_df()[source]

Returns the data as a pandas dataframe

tshift(t)[source]

Shifts all timestamps in the datapoint array by the given number of seconds. It is the same as the "tshift" PipeScript transform.

Warning: The shift is performed in-place! This means that it modifies the underlying array:

d = DatapointArray([{"t":56,"d":1}])
d.tshift(20)
print(d) # [{"t":76,"d":1}]

write(filename)[source]

Writes the data to the given file:

DatapointArray([{"t": unix timestamp, "d": data}]).write("myfile.json")
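The file format is a plain JSON array of datapoints, so a round-trip can be sketched with the standard json module (illustrative; this bypasses the DatapointArray methods entirely):

```python
import json
import tempfile

# Datapoints in heedy's [{"t": timestamp, "d": data}] format
data = [{"t": 1000.0, "d": 42}, {"t": 1005.0, "d": 43}]

# Write the array as a JSON file, as write() does
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(data, f)
    path = f.name

# Read it back, as load() does
with open(path) as f:
    loaded = json.load(f)
```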


class heedy.objects.timeseries.Timeseries(objectData, session)[source]

Bases: heedy.objects.objects.Object

property app
append(data, duration=0)[source]

Appends a single datapoint with the given data to the timeseries, using the current timestamp:

s.append("Hello World!")

delete(**kwargs)

Deletes the object

insert(data, timestamp=None, duration=0)[source]
insert_array(datapoint_array, **kwargs)[source]

Given an array of datapoints, inserts them into the timeseries. This differs from append(), which takes only the data portion of a single datapoint and fills out the rest:

s.insert_array([{"d": 4, "t": time.time()},{"d": 5, "t": time.time(), "dt": 5.3}])


Each datapoint can optionally also contain a "dt" field with the datapoint's duration in seconds. A timeseries cannot have multiple datapoints with the same timestamp, so conflicting datapoints are automatically overwritten by default. Using method="insert" instead throws an error if a timestamp conflicts with an existing one.
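The default overwrite behavior can be sketched in plain Python (an illustration of the semantics only, not heedy's server-side code):

```python
# Existing datapoints in the timeseries, keyed by timestamp
existing = [{"t": 100, "d": "old"}, {"t": 200, "d": "kept"}]
# Incoming datapoints; t=100 conflicts with an existing datapoint
incoming = [{"t": 100, "d": "new"}]

# Only one datapoint per timestamp is allowed: later inserts win by default
by_time = {dp["t"]: dp for dp in existing}
by_time.update({dp["t"]: dp for dp in incoming})
result = sorted(by_time.values(), key=lambda dp: dp["t"])
```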

property kv
length()[source]

Returns the number of datapoints in the timeseries

load(filename)[source]

Loads array data from the given file into the timeseries.

property meta
notify(*args, **kwargs)
output_type = 'list'
property owner
props = {'access', 'description', 'icon', 'key', 'meta', 'name', 'owner_scope', 'tags'}

remove(**kwargs)[source]

Removes the given data from the timeseries

save(filename)[source]

Saves the entire timeseries data to the given filename:

ts.save("myts.json")

update(**kwargs)

o.update(name="My new name",description="my new description")

heedy.objects.timeseries.fixTimestamps(query)[source]
heedy.objects.timeseries.parseTime(t)[source]

## Apps¶

class heedy.apps.App(access_token, url='http://localhost:1324', session='sync', cached_data={})[source]

Bases: heedy.base.APIObject

delete(**kwargs)

Deletes the object

notify(*args, **kwargs)
property owner
props = {'description', 'icon', 'name', 'settings', 'settings_schema'}

update(**kwargs)

o.update(name="My new name",description="my new description")

class heedy.apps.Apps(constraints, session)[source]

Bases: heedy.base.APIList

create(name, **kwargs)[source]

## Users¶

class heedy.users.User[source]

Bases: heedy.base.APIObject

delete(**kwargs)

Deletes the object

property kv
notify(*args, **kwargs)
props = {'description', 'icon', 'name', 'username'}

update(**kwargs)

o.update(name="My new name",description="my new description")

class heedy.users.Users(constraints, session)[source]

Bases: heedy.base.APIList

## Plugins¶

class heedy.plugins.Plugin(config=None, session='async')[source]

Bases: object

copy()[source]
fire(event)[source]

Fires the given event

async forward(request, data=None, headers={}, run_as=None, overlay=None)[source]

Forwards the given request to the underlying database. It only functions in async mode.

Returns the response.

hasAccess(request, scope)[source]
isApp(request)[source]
isUser(request)[source]
property name
notify(*args, **kwargs)[source]
objectRequest(request)[source]
query_as(accessor)[source]
async respond_forwarded(request, **kwargs)[source]

Responds to the request with the result of forward()

## Datasets¶

class heedy.datasets.Dataset(h, x=None, **kwargs)[source]

Bases: object

Heedy can take several separate, unrelated timeseries and, based upon a chosen interpolation method, combine them into tabular data centered either on another timeseries' datapoints or on fixed time intervals.

The underlying issue that Datasets solve is that in Heedy, timeseries are inherently unrelated. In most data stores, such as standard relational (SQL) databases, and even Excel spreadsheets, data is in tabular form. That is, if we have measurements of the temperature in our house and of our mood, we have a table:

| Mood Rating | Room Temperature (F) |
| --- | --- |
| 7 | 73 |
| 3 | 84 |
| 5 | 79 |

The benefit of such a table is that data analysis is easy: you know which temperature value corresponds to which mood rating. The downside is that Mood Rating and Room Temperature must be directly related - a temperature measurement must be made each time a mood rating is given. Heedy has no such restriction: Mood Rating and Room Temperature can be entirely separate sensors, each updating at its own rate, and each timeseries can be inserted with any timestamp, without regard for any other data.

This separation means that the data requires some preprocessing and interpolation before it can be used for analysis. That is the purpose of the Dataset query: Heedy can put several timeseries together based upon chosen transforms and interpolators, returning a tabular structure which can readily be used for ML and statistical applications. There are two types of dataset queries:

T-Dataset: A dataset query generated from a time range. You choose a time range and a time step between elements of the dataset, and these are used to generate the dataset. Suppose you have the following data:

| Timestamp | Room Temperature (F) |
| --- | --- |
| 1pm | 73 |
| 4pm | 84 |
| 8pm | 79 |

If I were to generate a T-dataset from 12pm to 8pm with dt=2 hours, using the interpolator "closest", I would get the following result:

| Timestamp | Room Temperature (F) |
| --- | --- |
| 12pm | 73 |
| 2pm | 73 |
| 4pm | 84 |
| 6pm | 84 |
| 8pm | 79 |

The "closest" interpolator returns the datapoint closest to the given timestamp. There are many interpolators to choose from (described later). Hint: T-Datasets can be useful for plotting data (such as daily or weekly averages).

X-Dataset

X-datasets generate datasets based not on evenly spaced timestamps, but upon the datapoints of a reference timeseries. Suppose you have the following data:

| Timestamp | Mood Rating |
| --- | --- |
| 1pm | 7 |
| 4pm | 3 |
| 11pm | 5 |

| Timestamp | Room Temperature (F) |
| --- | --- |
| 2pm | 73 |
| 5pm | 84 |
| 8pm | 81 |
| 11pm | 79 |

An X-dataset with X=Mood Rating, and the interpolator "closest" on Room Temperature, would generate:

| Mood Rating | Room Temperature (F) |
| --- | --- |
| 7 | 73 |
| 3 | 84 |
| 5 | 79 |

Interpolators

Interpolators are special functions which specify exactly how the data is combined into a dataset. Any PipeScript script can be used as an interpolator, including "sum", "count", and other transforms. By default, the "closest" interpolator is used, which simply returns the datapoint closest to the reference timestamp.
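The "closest" interpolator from the T-dataset example above can be sketched in plain Python (illustrative only; heedy evaluates interpolators server-side):

```python
# Room temperature datapoints, with hours as timestamps (13 = 1pm, etc.)
temps = [{"t": 13, "d": 73}, {"t": 16, "d": 84}, {"t": 20, "d": 79}]


def closest(series, ref_t):
    # Return the data of the datapoint nearest to the reference timestamp
    return min(series, key=lambda dp: abs(dp["t"] - ref_t))["d"]


# T-dataset from 12pm to 8pm with dt=2 hours
rows = [(t, closest(temps, t)) for t in range(12, 21, 2)]
# rows reproduces the result table: [(12, 73), (14, 73), (16, 84), (18, 84), (20, 79)]
```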

Adds the given timeseries to the query construction. Unless an interpolator is specified, "closest" is used. A merge query can be inserted instead of a timeseries:

d = Dataset(h, t1="now-1h",t2="now",dt=10)
m = Merge(h)
result = d.run()

run()[source]

Runs the dataset query, and returns the result

class heedy.datasets.Merge(h)[source]

Bases: object

Merge represents a query which merges multiple timeseries into one when reading, with all the data ordered by increasing timestamp. The merge query is used as a constructor-type object:

m = Merge(h)