I’ve observed that the learning process has 4 generalized steps:
- Inputs are received and processed.
- The input processing system runs to produce an output.
- The output is evaluated.
- Evaluation results are incorporated back into the input processing system.
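The four steps can be condensed into a minimal sketch, assuming the simplest possible "input processing system": a single weight multiplied by the input. The function name, learning rate, and update rule here are illustrative choices, not from any particular framework.

```python
def learning_loop(x, target, weight, steps=50, lr=0.1):
    """Run the four-step learning process on a one-weight 'system'."""
    for _ in range(steps):
        output = weight * x          # steps 1-2: process the input into an output
        error = target - output      # step 3: evaluate the output
        weight += lr * error * x     # step 4: incorporate the evaluation back in
    return weight

# Starting from weight 0.0, the system converges toward weight 3.0,
# since 2.0 * 3.0 produces the target output of 6.0.
final = learning_loop(x=2.0, target=6.0, weight=0.0)
```

Each pass through the loop is one cycle of receive, process, evaluate, and incorporate; repeating the cycle is what makes it learning rather than a one-shot computation.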
Examples of the learning process
The learning process seems to be common in domains rich with knowledge workers. Tech, being a prominent example, has two extremely popular implementations of the learning process:
Agile sprints
- During sprint planning, feature requests and bugs are received as inputs. These inputs get processed into user stories, which carry valuable metadata like estimated completion time and priority. The team then decides which tasks will produce the best output at the end of the sprint.
- During the sprint, the team does the work outlined in the user stories they selected in sprint planning (this is the input being processed to produce an output).
- At the sprint review, delivered work is evaluated against the Definition of Done from the committed user stories.
- At the sprint retrospective, the team uses insights from how they did during the current sprint to refine their process for the next one.
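The "input processing" half of sprint planning can be sketched as a data structure plus a selection rule. The field names and the capacity rule below are hypothetical illustrations, not anything prescribed by Scrum.

```python
from dataclasses import dataclass

@dataclass
class UserStory:
    title: str
    estimate_days: float  # estimated completion time
    priority: int         # lower number = higher priority

def plan_sprint(backlog, capacity_days):
    """Pick the highest-priority stories that fit the team's capacity."""
    selected, used = [], 0.0
    for story in sorted(backlog, key=lambda s: s.priority):
        if used + story.estimate_days <= capacity_days:
            selected.append(story)
            used += story.estimate_days
    return selected

backlog = [
    UserStory("Fix login bug", 1.0, 1),
    UserStory("Add export feature", 3.0, 2),
    UserStory("Refactor billing", 5.0, 3),
]
sprint = plan_sprint(backlog, capacity_days=5.0)
# selects "Fix login bug" and "Add export feature"
```

The point is the mapping: raw requests become structured inputs (stories with metadata), and the selection step decides what gets processed during the sprint.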
Back-propagation neural networks
- Input values are provided to the input neurons.
- The inputs are fed forward through the neural network to produce values at each output node.
- The output is scored.
- Results from scoring the output are sent back through the neural network, using back-propagation, so connection weights can be tweaked to produce better results.
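The four steps above can be sketched end to end with a toy from-scratch network: two inputs, two hidden neurons, one output, trained on XOR. The fixed starting weights, learning rate, and epoch count are illustrative choices, not canonical values.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Connection weights (last entry of each row is a bias);
# the specific starting values are arbitrary.
w_hidden = [[0.5, -0.4, 0.1], [-0.3, 0.6, -0.2]]
w_out = [0.4, -0.5, 0.2]

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR
lr = 0.5

def forward(x):
    # Steps 1-2: input values are fed forward to produce an output value.
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hidden]
    o = sigmoid(w_out[0] * h[0] + w_out[1] * h[1] + w_out[2])
    return h, o

def mse():
    return sum((t - forward(x)[1]) ** 2 for x, t in data) / len(data)

loss_before = mse()
for _ in range(5000):
    for x, target in data:
        h, o = forward(x)
        err = target - o             # step 3: score the output
        d_o = err * o * (1 - o)      # step 4: send the error back through
        for j in range(2):           # the network and tweak the weights
            d_h = d_o * w_out[j] * h[j] * (1 - h[j])
            w_hidden[j][0] += lr * d_h * x[0]
            w_hidden[j][1] += lr * d_h * x[1]
            w_hidden[j][2] += lr * d_h
        w_out[0] += lr * d_o * h[0]
        w_out[1] += lr * d_o * h[1]
        w_out[2] += lr * d_o
loss_after = mse()  # the squared error shrinks as weights are tweaked
```

Note how the structure mirrors the sprint example: the forward pass does the work, the scoring is the review, and the weight updates are the retrospective folded back into the system.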
Related: The world is recursive.