This page is a draft and may be incomplete, incorrect, or just a
stub or outline. I've decided to allow myself to put draft pages on
my website as an experiment.
This article is a companion to http://hypermedia.ratfactor.com/ where I’m experimenting with htmx, REST, HATEOAS, and progressive enhancement (TODO: link these terms).
I’ve also been buying old books about REST and the ideas behind HTTP, HTML, and the Web on abebooks.com, and I’ve amassed a bit of a mini-library on the subject in the last couple of months.
Though it’s just a cryptic series of slides, I really enjoyed Paul Downey’s Web APIs Are Just Web Sites, which boils the subject down to its essence: this is how the Web works.
From Adaptive Web Design (2011) by Aaron Gustafson:
Fundamentally, progressive enhancement is about accessibility, but not in the limited sense the term is most often used. The term "accessibility" is traditionally used to denote making content available to individuals with "special needs" (people with limited mobility, cognitive disabilities, or visual impairments); progressive enhancement takes this one step further by recognizing that we all have special needs. Our special needs may also change over time and within different contexts. When I load up a website on my phone, for example, I am visually limited by my screen resolution (especially if I am using a browser that encourages zooming) and I am limited in my ability to interact with buttons and links because I am browsing with my fingertips, which are far larger and less precise than a mouse cursor.
Hypertext as an authoring and display format
Tim Berners-Lee describes attending a 1990 hypertext conference and trying to introduce the concept of delivering hypertext over the Internet.
From Weaving the Web (1999) by Tim Berners-Lee with Mark Fischetti:
However, like many hypertext products at the time, [Dynatext] was built around the idea that a book had to be "compiled" (like a computer program) to convert it from the form in which it was written to a form in which it could be displayed efficiently. Accustomed to this cumbersome multistep process, the EBT people could not take me seriously when I suggested that the original coded language could be sent across the Web and displayed instantly on the screen.
Also: HTML and its metadata are intentionally limited. TeX, as a counter-example, gives the author far more control, but it’s also possible to write TeX code with infinite loops and crashes. HTML can’t crash (browsers can, but that’s their problem).
With that in mind, it seems to me that the contract of the Web is that to take part, you give up some control over your content, which allows that content to be useful to others, even in ways you did not anticipate or particularly desire.
TBL intended for browsers to also double as editors. The W3C’s Amaya browser (which seems to be well-maintained) is also an HTML (and SVG, etc.) editor. The Jigsaw server (which seems to have been abandoned) has authentication and authoring out of the box. From what I can tell, an HTTP PUT request is used to push a new or edited document to the server. Netscape also had editing capabilities. That all seems to have ended in the IE era.
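The PUT-based authoring idea is simple enough to sketch end to end. Here’s a minimal, hypothetical stand-in (this is not Jigsaw or Amaya, just an illustration of the mechanism): a tiny server that accepts PUT to store a document and GET to serve it back, and a client acting as the "browser as editor" that publishes a page and re-reads it.

```python
# A sketch of PUT-based Web authoring: the client pushes a new or
# edited document to a URL, and the server stores it at that path.
# The server, paths, and document are all made up for illustration.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

documents = {}  # our in-memory "site": path -> document bytes

class AuthoringHandler(BaseHTTPRequestHandler):
    def do_PUT(self):
        # Store the uploaded representation under the request path.
        length = int(self.headers.get("Content-Length", 0))
        documents[self.path] = self.rfile.read(length)
        self.send_response(201)  # 201 Created
        self.end_headers()

    def do_GET(self):
        body = documents.get(self.path)
        if body is None:
            self.send_response(404)
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), AuthoringHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# "Browser as editor": publish a page with PUT...
put = http.client.HTTPConnection("127.0.0.1", server.server_port)
put.request("PUT", "/draft.html", b"<h1>Hello, editable Web</h1>")
assert put.getresponse().status == 201

# ...then read it back with an ordinary GET.
get = http.client.HTTPConnection("127.0.0.1", server.server_port)
get.request("GET", "/draft.html")
page = get.getresponse().read()
server.shutdown()
print(page.decode())
```

The nice property is symmetry: the same URL you read from is the one you write to, so "editing the Web" needs no separate publishing channel.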
Hypermedia as the Engine of Application State (HATEOAS)
From REST APIs must be hypertext-driven (2008) by Roy T. Fielding:
A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). From that point on, all application state transitions must be driven by client selection of server-provided choices that are present in the received representations or implied by the user’s manipulation of those representations. The transitions may be determined (or limited by) the client’s knowledge of media types and resource communication mechanisms, both of which may be improved on-the-fly (e.g., code-on-demand).
And in a response to a comment on the post:
Think of it in terms of the Web. How many Web browsers are aware of the distinction between an online-banking resource and a Wiki resource? None of them. They don’t need to be aware of the resource types. What they need to be aware of is the potential state transitions — the links and forms — and what semantics/actions are implied by traversing those links. A browser represents them as distinct UI controls so that a user can see potential transitions and anticipate the effect of chosen actions. A spider can follow them to the extent that the relationships are known to be safe. Typed relations, specific media types, and action-specific elements provide the guidance needed for automated agents.
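Fielding’s point can be sketched in code: a generic client that understands only the media type’s semantics (here, HTML links and forms), not resource types. The HTML below is a made-up example response; the client discovers its possible state transitions entirely from what the server sent, exactly as a browser or spider would.

```python
# A hypermedia client sketch: extract the state transitions a
# response offers -- links (safe, GET) and forms (method chosen by
# the server). The example HTML is hypothetical, not a real API.
from html.parser import HTMLParser

class TransitionFinder(HTMLParser):
    """Collect (method, target) pairs for every link and form."""
    def __init__(self):
        super().__init__()
        self.transitions = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            # Links are safe transitions: always GET.
            self.transitions.append(("GET", attrs["href"]))
        elif tag == "form":
            # Forms carry their own method; the server decides.
            method = attrs.get("method", "get").upper()
            self.transitions.append((method, attrs.get("action", "")))

# A server-provided representation. The client has no idea whether
# this is a bank, a wiki, or anything else -- and doesn't need to.
response_body = """
<a href="/accounts/42">View account</a>
<form method="post" action="/transfers">
  <input name="amount">
</form>
"""

finder = TransitionFinder()
finder.feed(response_body)
print(finder.transitions)
```

All application state transitions come from server-provided choices in the representation; nothing is hardcoded into the client beyond the media type itself.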