...
- `experimental` is defined as
- exploratory data analysis
- development in notebooks
- essentially ad-hoc choice of tools
- generally batch only, "one off", manual execution
- small data, manual sampling
- models are trained offline
  - the end result being reports, diagrams, etc.
- `production` = pretty much the opposite
- end result are enterprise data science applications
  - run in production
  - with large, multi-dimensional `data set`s that do not fit in RAM and are logically infinite
- hence the algorithms / analysis must be incremental
  - use of managed `data set`s: `data lake`s, `feature store`s
  - models are trained online/incrementally (or _"trained offline periodically and refreshed/deployed every few hours/days"_)
- with awareness of `concept drift`, `distribution drift`, `adversarial attacks` and able to adapt
  - use complex orchestration between the core analysis/decision layer, model monitoring, and other application logic and business processes, some involving human interaction
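To make the "incremental algorithms" point concrete: when a data set is logically infinite, statistics must be computed one observation at a time in O(1) memory rather than over an in-RAM array. A minimal sketch using Welford's algorithm for a running mean and variance (the class name `RunningStats` is illustrative, not from any specific library):

```python
# Sketch: Welford's algorithm maintains mean and variance incrementally,
# one observation at a time, in O(1) memory -- the shape of algorithm
# required when the data stream never fits in RAM.
class RunningStats:
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self) -> float:
        # sample variance; 0.0 until at least two observations are seen
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0


stats = RunningStats()
for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    stats.update(x)
# stats.mean is now 5.0; stats.variance is 32/7
```

The same pattern (an `update`/`partial_fit`-style API instead of a single `fit` over the whole data set) is what online-learning libraries generalize to full models.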
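The `concept drift` / `distribution drift` awareness mentioned above can be sketched in its simplest form: compare a recent window of a feature against a reference window and flag when the mean departs by too many reference standard deviations. This is a toy illustration (function name and threshold are hypothetical), not the monitoring stack a production system would actually use:

```python
# Toy drift check: flag distribution drift when the mean of a recent
# window departs from a reference window by more than `threshold`
# reference standard deviations.
def detect_mean_shift(reference, recent, threshold=3.0):
    ref_n = len(reference)
    ref_mean = sum(reference) / ref_n
    ref_var = sum((x - ref_mean) ** 2 for x in reference) / ref_n
    ref_std = ref_var ** 0.5 or 1e-12  # avoid division by zero
    recent_mean = sum(recent) / len(recent)
    return abs(recent_mean - ref_mean) / ref_std > threshold


reference = [0.0, 0.1, -0.1, 0.05, -0.05, 0.0, 0.1, -0.1]
stable = [0.02, -0.03, 0.01, 0.0]   # same distribution: no drift flagged
shifted = [1.5, 1.6, 1.4, 1.7]      # mean shift: drift flagged
```

Real deployments would use proper streaming drift detectors (e.g. ADWIN- or Page-Hinkley-style tests) wired into the model-monitoring layer, but the orchestration point is the same: the detector's output feeds back into retraining and deployment decisions.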
...