Coming from the CD (Continuous Deployment) perspective, I think things are a little different.
With CD, the complete "immune system" includes monitoring (different types of monitors) alongside the tests, and the monitoring complements the tests (other components of the immune system are code review, static code analysis, etc.).
Interestingly, monitoring
resembles testing in many ways. You have application-level
monitoring, which is usually similar in scope to unit tests - it
monitors individual in-process components (e.g. the size of an internal
memory buffer, operations/sec, etc.). You have host-level monitoring
(CPU, disk, etc.), which is similar in concept to integration tests. And
you have KPI monitoring (e.g. # of daily active users), which takes the
user's perspective and is similar to E2E tests.
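To make the application-level category concrete, here's a minimal sketch using Python's prometheus_client library; the metric names and the get_buffer_size() helper are my own illustrative assumptions, not anything standard:

```python
# Minimal sketch of application-level monitoring, analogous in scope to unit tests.
# Metric names and get_buffer_size() are hypothetical, for illustration only.
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

BUFFER_SIZE = Gauge('internal_buffer_bytes', 'Size of the internal memory buffer')
OPS = Counter('operations_total', 'Total operations processed')

def get_buffer_size():
    # Stand-in for reading the real in-process buffer (assumption for this sketch).
    return random.randint(0, 4096)

if __name__ == '__main__':
    start_http_server(8000)  # expose /metrics for the monitoring system to scrape
    while True:
        OPS.inc()                           # operations/sec falls out of the rate of this counter
        BUFFER_SIZE.set(get_buffer_size())  # in-process state, like an assertion that runs in prod
        time.sleep(1)
```

An alert on a metric like internal_buffer_bytes then plays the role of a failing unit test, except it fires in production within minutes.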
The picture would not
be whole without mentioning monitoring since, IMO, monitoring comes at
the expense of testing - developers either invest their time in tests or in
monitoring (or split their effort between the two).
I would argue that,
at least in CD, where MTTR (Mean Time to Recovery) is far more important
than MTBF (Mean Time Between Failures), monitoring takes precedence over
tests. I would draw yet another pyramid - a monitoring pyramid - on top
of the testing pyramid, such that 70% is application-level monitoring,
20% is host monitoring, and 10% is KPI monitoring. And the overall effort
should be split 50/50 between tests and monitoring (or some other ratio
that makes sense for your use case - in some cases it's 90/10).
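To spell out the arithmetic with my illustrative numbers: given 100 hours of total effort, 50 hours go to tests and 50 to monitoring; the monitoring half then breaks down into 35 hours of application-level monitoring, 10 hours of host monitoring, and 5 hours of KPI monitoring.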
Again, I'm speaking
from the perspective of CD - which may or may not apply to some Google
systems, but many dev organizations tend to like it.
BTW, speaking
of putting the user at the center: delivering value fast and being
able to verify that value with actual users in a matter of hours - the core
value of CD, fast feedback (with the user in the feedback loop) -
*is* putting the user at the center.
BTW2, a feedback loop
needs to be on the order of a few hours at most (sometimes minutes),
*including actual users* in the loop, not just automated tests. As such,
running E2E tests overnight simply makes no sense.