Integration mechanics
The Unizin Data Platform runs on Google Cloud Platform. Whether an institution is self-hosting the UDP or using Unizin’s UDP SaaS offering, vendors can assume that they will be pushing their data files and manifest files to a Google Cloud Storage bucket. Consequently, it may be useful to leverage Google’s command-line tools and APIs to automate an SIS, LMS, or Learning tool integration with the UDP.
Please review our Context data ingress documentation for a description of how to push context data into a UDP instance. This article provides a brief overview only.
Cloud storage bucket
Context datasets are integrated into the UDP by pushing a complete dataset into a Cloud Storage bucket, using a service account for authentication. Each system pushing a context dataset into the UDP will have its own Cloud Storage bucket (buckets are never shared).
For any given UDP instance, an SIS, LMS, or Learning tool will need:
A Google Cloud Storage bucket address
A service account used to authenticate with the Google Cloud Storage API; one service account should be used per UDP instance
A complete dataset includes all data files corresponding to the UDP Loading schema and a manifest file. The complete set of files must be copied into the appropriate Cloud Storage bucket on a daily basis.
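The daily push described above can be sketched in Python using the google-cloud-storage client library (`pip install google-cloud-storage`). This is a minimal illustration, not Unizin's reference implementation: the bucket name, key-file path, and folder value are hypothetical placeholders for the bucket address and service account issued for a real UDP instance.

```python
from pathlib import Path


def push_dataset(local_dir: str, bucket_name: str, key_file: str, folder: str) -> list[str]:
    """Copy every data file (and the manifest) in local_dir into <folder>/ in the bucket."""
    # Deferred import so the sketch can be read without the dependency installed.
    from google.cloud import storage

    # Authenticate with the service account key issued for this UDP instance.
    client = storage.Client.from_service_account_json(key_file)
    bucket = client.bucket(bucket_name)

    uploaded = []
    for path in sorted(Path(local_dir).iterdir()):
        if path.is_file():
            blob = bucket.blob(f"{folder}/{path.name}")
            blob.upload_from_filename(str(path))
            uploaded.append(blob.name)
    return uploaded


# Hypothetical invocation for a daily SIS push:
# push_dataset("./dataset", "example-udp-sis-bucket",
#              "service-account.json", "2021-09-22T01:25:53")
```

The whole dataset, manifest included, lands under a single date-based folder, matching the requirement that each push be complete rather than incremental.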
Folders
A complete dataset must be pushed into a date-based folder in a Cloud Storage bucket. The folders must correspond to the date on which the complete dataset is generated.
SIS, LMS, and Learning tool providers are responsible for creating folders into which their datasets are pushed.
For those institutions seeking to implement multiple SIS ingests per day, the folder naming convention is as follows (ISO-8601): “YYYY-MM-DDTHH:mm:ss”, where T is the literal separator between the date and the time, HH is hours on a 24-hour clock, mm is minutes, and ss is seconds. For example, an SIS folder name might be “2021-09-22T01:25:53”. The timestamp used in the folder name should match the timestamp used in the manifest file.
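Because the folder name and the manifest timestamp must agree, it can help to generate both from a single value. A small sketch of that, using Python's standard datetime formatting (the function name is illustrative, not part of any UDP API):

```python
from datetime import datetime, timezone


def folder_name(now: datetime) -> str:
    """Format a timestamp as the ISO-8601 folder name YYYY-MM-DDTHH:mm:ss."""
    return now.strftime("%Y-%m-%dT%H:%M:%S")


# Example from the documentation:
print(folder_name(datetime(2021, 9, 22, 1, 25, 53)))  # 2021-09-22T01:25:53

# For a real push, generate the value once (UTC shown here) and reuse it
# for both the folder name and the manifest timestamp:
stamp = folder_name(datetime.now(timezone.utc))
```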