Prerequisites
- Apache Pinot must be installed and running. To install Pinot, do one of the following:
  - Install the open-source version of Apache Pinot on your own infrastructure (called BYOC, for "bring your own cloud").
  - Use StarTree Cloud, which includes Pinot hosted as software-as-a-service (SaaS), along with a few other perks.
- StarTree ThirdEye must be installed and running. To install ThirdEye, do one of the following:
  - Install ThirdEye on your own infrastructure. For more information, see infrastructure requirements.
  - Use StarTree Cloud, which includes an instance of ThirdEye ready to connect to data you've uploaded into your StarTree Cloud instance of Pinot.
- Upload your data into Apache Pinot. Your data must have the right data architecture in order to be useful. See ThirdEye data requirements for more information.
- Work through the ThirdEye use case planning template to ensure you understand what you intend to accomplish in the following checklist.
-
Data readiness and validation checklist:
- Timestamps must be in epoch milliseconds for ThirdEye to work efficiently. If a time column is not in epoch milliseconds, convert it to epoch milliseconds using a derived column, and apply a time range index based on the time granularity used in ThirdEye (daily, hourly, 15 minutes, or 1 minute).
- The data cannot have gaps. (Link to completeness delay definition; link to external blog on how to do a data quality check using ThirdEye.)
- Apply indexes in Pinot based on the query patterns ThirdEye sends to Pinot.
- Joins are not supported in ThirdEye. Make sure derived columns are created pre-ingestion to support ThirdEye use cases that would otherwise require joins.
- ThirdEye needs a denormalized schema, for the following reasons:
  - ThirdEye can run any SQL query Pinot supports, but only for running alerts. You can do this with a custom template (link to ThirdEye templates): clone an existing template and update the DAG in it with your query against Pinot. However, every template that needs the custom query must be cloned to add it.
  - The real complexity lies elsewhere: how the metric is defined, how people can choose different detection algorithms for it, how it is exposed in the UI, and how it is shared in RCA and notifications. Alerts might work on normalized data, but the overall experience suffers, and RCA may break. If RCA top contributors and heatmaps need to provide accurate insights quickly, transformed (denormalized) data is ideal.
- If there are too many dimensions in the table, how do you keep RCA performant? Limit the number of dimensions in the schema and index the dimensions you keep (too many dimensions will slow down the RCA UI), and use rcaExcludedDimensions to exclude dimensions from RCA (link to the doc).
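
To illustrate the first and third checklist items above, here is a sketch of a Pinot table config that derives an epoch-milliseconds column at ingestion time and applies range and inverted indexes. The table name `orders`, the source column `order_date`, and the dimension columns `country` and `device` are hypothetical; adapt the names and the date pattern to your own schema.

```json
{
  "tableName": "orders",
  "tableType": "OFFLINE",
  "ingestionConfig": {
    "transformConfigs": [
      {
        "columnName": "tsMillis",
        "transformFunction": "fromDateTime(order_date, 'yyyy-MM-dd')"
      }
    ]
  },
  "tableIndexConfig": {
    "rangeIndexColumns": ["tsMillis"],
    "invertedIndexColumns": ["country", "device"]
  }
}
```

Here `fromDateTime` converts the string date column to milliseconds since epoch, the range index supports ThirdEye's time-window filters, and the inverted indexes support its dimension filters. See the Pinot table configuration docs for the full set of options.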
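
For the last checklist item, a minimal sketch of excluding high-cardinality dimensions from RCA via the ThirdEye dataset configuration might look like the fragment below. The dataset name and the excluded columns (`user_id`, `session_id`) are hypothetical examples of dimensions too fine-grained to be useful RCA contributors; check the ThirdEye dataset configuration docs for the exact placement of this field.

```json
{
  "name": "orders",
  "rcaExcludedDimensions": ["user_id", "session_id"]
}
```

Excluding such dimensions keeps the RCA top-contributors view and heatmap fast and focused on dimensions that can actually explain an anomaly.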