- {{[[DONE]]}} (hidden) O3SUCr9HN [[(hidden) 76o4988C2]] #watch
- (hidden) tCNBfBR-M
- {{[[TODO]]}} (hidden) vBJCH0u91 [[(hidden) S-ygYfi0j]] [[(hidden) iCUymQffm]] [[(hidden) lgJAATgMD]]
- (hidden) VG3NJuYDI
- {{[[TODO]]}} (hidden) 67qDWSjPS #[[(hidden) jOMmlCP4S]] [[roam]]
- (hidden) nXT3SM4qO
- {{[[TODO]]}} (hidden) g2mUXJmRY [[alfred]] #[[(hidden) jOMmlCP4S]] [[roam]]
- (hidden) C95E0cTA7
- [[roam-traverse-graph]]
- {{[[TODO]]}} #[[(hidden) jOMmlCP4S]] for [[graph fragmentation]] -- I just realized that in our case, a page is a completely independent unit, i.e. it has everything it needs to fully function. Thus, we could fragment the graph per page, which makes it very efficient to get intel / advanced features working without downloading the whole graph.
- Even for e.g. queries, we can just leave the results inline in the page & you're good to go.
- And of course, quite a bit of duplicated data fetching would arise if, e.g., you're visiting 2 pages that have a lot in common.
- But this is eventually solvable, if we wanna go really advanced:
- e.g. by giving each page a unique id/hash of its content (incl. the metadata it has -- we cannot rely only on the last-updated date); if 2 pages share the same id for duplicate content, you can avoid fetching that content twice.
- But of course, this is way more advanced, & you'd have to fragment by blocks at that point.
- Anyhow, the fact remains -- fragmenting per page is at least a good start & already has benefits.
- (!) And let's not forget that by default, none of this would be used / needed -- we're still sticking to [[static html]].
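- The per-page fragmentation + content-hash dedup idea above could be sketched roughly like this. This is a hypothetical sketch only -- the fragment shape, hash scheme, and fetch callback are all assumptions for illustration, not roam-traverse-graph's actual API:

```typescript
import { createHash } from "crypto";

// Hypothetical shape of a per-page fragment: a fully self-contained unit,
// incl. inlined query results, so no other fragment is needed to render it.
interface PageFragment {
  title: string;
  blocks: unknown[];
}

// Stable content hash over the fragment incl. its metadata --
// the last-updated date alone is not enough to detect duplicate content.
function fragmentHash(fragment: PageFragment): string {
  return createHash("sha256")
    .update(JSON.stringify(fragment))
    .digest("hex");
}

// Cache keyed by content hash: if 2 pages resolve to the same hash,
// the shared content is fetched only once.
const cache = new Map<string, PageFragment>();

async function loadFragment(
  hash: string,
  fetchByHash: (h: string) => Promise<PageFragment>
): Promise<PageFragment> {
  const hit = cache.get(hash);
  if (hit) return hit;
  const fragment = await fetchByHash(hash);
  cache.set(hash, fragment);
  return fragment;
}
```

- The hash-keyed cache is what avoids the duplicated fetching mentioned above; fragmenting by blocks would just mean hashing & caching at block granularity instead of page granularity.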
-
-
-
-
-
- (hidden) _vqolF3YV
- (hidden) jmzrgIqtP #[[(hidden) Py4a4fa73]]
- (hidden) BrKdORsh6
- (hidden) 6SBzHJI1Y
- (hidden) W2hiIYvR6
- (hidden) 03svoAlu-
- (hidden) EqxMvlmlC
- (hidden) 6PyEZyolr