The Three Layers
Asset Management Layer
This is where all the digital and physical assets are managed, together with their technical metadata (such as timecodes, file sizes, video codecs and audio codecs) and their editorial metadata. From an asset perspective, it knows where the assets are held by location (and so can resolve path information), and it records any video proxies that are available. It also provides a fully configurable hierarchical schema, allowing hierarchies to be configured on a per-client basis (bearing in mind the Cubix stack is entirely multi-tenanted). The concept is that a hierarchy is created (e.g. an episodic hierarchy would normally have three tiers – Product / Series / Episode) and metadata fields are then configured for the different tiers.
With the schema created, Mezzanine Files are then tagged under the final tier of the schema. Daughter files are associated with a mezzanine file and therefore follow it when it is tagged. Ancillary Files (e.g. images, subtitles, PDFs, office documents) can be associated at any tier of the hierarchy and are inherited by the nodes below. The mezzanine file inherits the metadata as soon as it is tagged – making it possible to have asynchronous workflows between media and metadata.
The metadata schema is configurable based on the tiers of the hierarchy, with fields definable at each tier. These metadata fields can be strings, date ranges, key-value pairs (e.g. actor name and character name), tables, dropdowns and more, and can be offered in multiple languages. By default the metadata inherits down the hierarchy, so that when an asset is tagged under the final tier it effectively “looks up” through the hierarchy to see all the metadata relevant to it.
A key benefit of this system is that metadata is entered only once – something which pays dividends when working with products that have hundreds of episodes. Metadata can also be overridden: if a value needs to change for a specific node and any subsequent child nodes, it can be overridden at that lower level. The hierarchies in Cubix are defined on a client basis, as they relate to the media. The associated metadata is created on a company basis, which means different companies (and the users therein) working with the content can see different schemas, or different scopes of a schema. This provides natural flexibility for limiting which companies can see which metadata.
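The inheritance and override behaviour described above can be sketched as a simple upward walk through the hierarchy. This is an illustrative model only – the `Node` class, field names and tier names are assumptions, not Cubix's actual data model:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    """One tier in a hypothetical hierarchy, e.g. Product / Series / Episode."""
    name: str
    metadata: dict = field(default_factory=dict)   # values entered at this tier
    parent: Optional["Node"] = None

    def effective_metadata(self) -> dict:
        """Walk up the hierarchy; values set at lower tiers override inherited ones."""
        inherited = self.parent.effective_metadata() if self.parent else {}
        return {**inherited, **self.metadata}

# Metadata is entered once at the Product tier...
product = Node("Product X", {"genre": "Drama", "language": "en"})
series = Node("Series 2", {"year": 2021}, parent=product)
# ...and overridden at a lower node where needed (e.g. a French-language episode).
episode = Node("Episode 3", {"language": "fr"}, parent=series)

print(episode.effective_metadata())
# {'genre': 'Drama', 'language': 'fr', 'year': 2021}
```

The episode "looks up" through Series and Product for its genre and year, while its own `language` value wins over the inherited one.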
Automation Layer
This layer is where the action takes place with the different devices that Cubix supports (e.g. transcoders, storage devices, file-based QC engines). In this “node”-based approach, a “harness” runs for every device we control, connecting into the core system (either directly to the database or via a web service) and polling for jobs to complete. By design this layer has no decision logic: it simply looks for jobs, executes them, and reports back the progress of each job along with any errors and metadata. These jobs normally relate to an asset or set of assets, but no direct visibility of them is given via the API.
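The poll-execute-report cycle of a harness can be sketched as below. The function names and job shape are hypothetical; the point is that the harness carries no decision logic of its own:

```python
def process_next_job(poll_for_job, execute, report):
    """One poll cycle: fetch a job, execute it, report the outcome.

    Returns True if a job was processed, False if the queue was empty.
    poll_for_job() -> job dict or None; execute(job) -> result metadata;
    report(job_id, status, detail) pushes progress/errors back to the core.
    """
    job = poll_for_job()
    if job is None:
        return False                       # nothing queued; poll again later
    report(job["id"], "started", None)
    try:
        result = execute(job)              # e.g. drive a transcoder or QC engine
        report(job["id"], "completed", result)
    except Exception as exc:
        report(job["id"], "failed", str(exc))
    return True

# A real harness would simply loop: poll, execute, report, sleep when idle.
events = []
processed = process_next_job(
    poll_for_job=lambda: {"id": 42, "type": "transcode"},
    execute=lambda job: {"output": "proxy.mp4"},
    report=lambda jid, status, detail: events.append((jid, status)),
)
print(processed, events)   # True [(42, 'started'), (42, 'completed')]
```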
Orchestration Layer
Finally, Cubix has the orchestration layer – the element that works with the asset management layer and other sources to drive actions within the automation layer. The orchestration layer has two elements: the first is “Media Rules”, and the second is “Taskflow”. Media Rules are based around short process chains (if this, do that) and allow the system to be configured with the “common sense” of the facility – knowing automatically when to archive a file, create a proxy, and so on. This means that in any workflow design these steps can be assumed rather than stated explicitly within the workflow. Taskflow is our BPM system, in which tasks are spawned and then pass along fully configurable journeys.
These journeys can then drive a wide range of activities, working with all of the automation layer, but also human stages such as QC and metadata capture, followed by notifications and more. Validation points can also be configured as part of the journey to check assets or metadata against configured rules. A task can contain as much payload data as required, and often references assets, the metadata schema and the automation jobs it has requested. Some of that payload data is used only in a “pass-thru” sense (e.g. data that needs to be queried externally but is not used by the taskflow itself). Via the API we provide access to the taskflow layer, but not to the Media Rules layer.
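The “if this, do that” shape of Media Rules can be sketched as a list of predicate/action pairs evaluated against an asset. The rule conditions and action names here are invented for illustration – Cubix's actual rule configuration is not shown in this document:

```python
# Each rule pairs a predicate ("if this") with an action name ("do that").
MEDIA_RULES = [
    (lambda asset: asset.get("proxy") is None, "create_proxy"),
    (lambda asset: asset.get("qc_status") == "passed"
                   and not asset.get("archived"), "archive"),
]

def actions_for(asset: dict) -> list:
    """Return the actions the facility's 'common sense' rules would trigger."""
    return [action for predicate, action in MEDIA_RULES if predicate(asset)]

mezzanine = {"id": "abc", "proxy": None, "qc_status": "passed", "archived": False}
print(actions_for(mezzanine))   # ['create_proxy', 'archive']
```

Because these rules run automatically, a Taskflow journey can assume a proxy exists or an archive copy was made, rather than spelling those steps out.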
The API follows the same authentication methods as the Cubix portals, and so a set of valid authorised credentials is required to work with the API.
No call to an action can be made without proper authentication; use the username and password parameters to authenticate.
The Cubix REST API supports JSON requests and can return either JSON or XML responses.
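A call with the username and password parameters might be built as follows, using only the Python standard library. The base URL, endpoint path and payload here are placeholders – only the username/password parameters and the JSON-in, JSON-or-XML-out behaviour come from this document:

```python
import json
import urllib.parse
import urllib.request

def build_cubix_request(base_url, action, username, password,
                        payload=None, accept="application/json"):
    """Build an authenticated JSON request for a (hypothetical) Cubix endpoint.

    Set accept="application/xml" to request an XML response instead.
    """
    query = urllib.parse.urlencode({"username": username, "password": password})
    return urllib.request.Request(
        f"{base_url}/{action}?{query}",
        data=json.dumps(payload or {}).encode("utf-8"),
        headers={"Content-Type": "application/json", "Accept": accept},
    )

# Illustrative values only; the endpoint name "tasks" is an assumption.
req = build_cubix_request("https://cubix.example.com/api", "tasks",
                          "alice", "s3cret", payload={"status": "open"})
# urllib.request.urlopen(req) would then send the request and return the response.
```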
If you have any questions or need support involving the REST API, please contact firstname.lastname@example.org for assistance.