Just watched this great talk and made a summary of what I found to be the most valuable information.
At some point he talks about the progressive disclosure workflow, which looks a lot like ICM.
He says it's good but doesn't scale well: you need to maintain an accurate representation of your data, and that data is constantly evolving. If you try to keep it up to date through agents, you'll most likely introduce lies over time.
My understanding is that scaling memory through .MD files might not be the best idea, since memory needs to update frequently and there are tools out there that handle the level of complexity needed to maintain it correctly.
What ICM shines at is making the architecture the workflow: the .MD files are the rules that frame that workflow, but memory is better off living elsewhere.
Or am I entirely wrong? I'm curious what you guys think.
Also, a question for the devs: do you scan the code as the source of truth (seems the most logical to me, since it's pure, uncorrupted truth), or do you try to maintain an index?
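To make that question concrete, here's a minimal sketch (my own illustration, not from the talk) of the "scan the code" option: re-deriving a symbol index straight from a Python source tree on demand, instead of persisting one that can drift. The function name `build_ephemeral_index` and the overall approach are hypothetical, just to show the idea.

```python
import ast
from pathlib import Path

def build_ephemeral_index(root: str) -> dict[str, list[str]]:
    """Re-derive a symbol index from the source tree on demand,
    instead of persisting one that can drift out of sync with the code."""
    index: dict[str, list[str]] = {}
    for path in Path(root).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"))
        except SyntaxError:
            continue  # skip files that don't parse; the code stays the only source of truth
        index[str(path)] = [
            node.name
            for node in ast.walk(tree)
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))
        ]
    return index

if __name__ == "__main__":
    # Throwaway usage: build the index, use it, discard it. Nothing to maintain.
    for file, symbols in build_ephemeral_index(".").items():
        print(file, symbols)
```

The point of the sketch is the trade-off: you pay a scan cost every time, but there's no stored representation that can start lying to you.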