Retro/fringe OS communities as an opportunity for more ethical Webcraft

I have been poking around lately at RiscOS, mainly because I’ve had some desire to play with a retro OS and it’s recently been liberated.

They’re currently wrestling with a problem that is somewhat familiar to me; one I’ve seen crop up in several OS ecosystems. As the web grows in the direction of apps instead of documents, browser complexity increases and it becomes increasingly difficult for these non-mainstream OSes to browse the web. The resources needed to create a new browser from scratch are terribly high, and sometimes the build/porting requirements to bring over an existing one are equally problematic.

Off the top of my head, communities/OSes where this is an explicit problem: Haiku, Plan9 (and derivatives), SailfishOS (aging embedded Gecko engine), RiscOS.

What I’ve been pondering lately is whether there is any room for synergy here between these disparate groups who are adversely impacted by web-as-app and webcrafters who want to grow web-as-documents.

Some initial thoughts:

  • A new take on the old “works on any browser” campaign.
  • Maybe working with these groups to make webcrafting guidelines for their communities that align with ethical webcrafting.
  • On the software side, maybe work on an HTML parser/renderer/browser that targets what the web should be and can cater to these groups (a rough sketch follows this list).
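
To make that last bullet slightly more concrete, here is a very rough sketch of the kind of thing I mean, using nothing but Python’s standard library: fetch a page, keep only document-ish elements, and print text plus link targets. The tag whitelist and all the names are just my own guesses at where the “documents, not apps” line sits; this is an illustration, not a proposal for an actual engine.

    from html.parser import HTMLParser
    from urllib.request import urlopen

    class DocumentText(HTMLParser):
        """Keep text from document-ish elements; ignore everything else."""
        KEEP = {"h1", "h2", "h3", "p", "li", "a", "pre", "blockquote"}
        BLOCK_END = {"h1", "h2", "h3", "p", "li", "pre", "blockquote"}

        def __init__(self):
            super().__init__()
            self.depth = 0   # nesting level inside KEEP elements
            self.out = []

        def handle_starttag(self, tag, attrs):
            if tag in self.KEEP:
                self.depth += 1
            if tag == "a":
                # show the link target inline, lynx-style
                self.out.append("[%s] " % dict(attrs).get("href", ""))

        def handle_endtag(self, tag):
            if tag in self.KEEP and self.depth > 0:
                self.depth -= 1
            if tag in self.BLOCK_END:
                self.out.append("\n")

        def handle_data(self, data):
            if self.depth > 0:
                self.out.append(data)

    def browse(url):
        parser = DocumentText()
        parser.feed(urlopen(url).read().decode("utf-8", errors="replace"))
        return "".join(parser.out)

    if __name__ == "__main__":
        print(browse("https://example.com/"))

Obviously a real effort would need forms, tables, character encodings and so on; the point is just that the document subset is small enough that a fringe OS could plausibly carry it.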

Would love to read a technical breakdown of what a new take on “works on any browser” entails.

@trashHeap I haven’t responded because I don’t have anything small enough to add (big ideas), but I do have a response to:

I’d be curious as well, because I don’t feel like supporting HTTP any longer, but that seems like an issue with some hardware (being unable to handle the encryption required). Do I forsake those browsers that can’t read HTTPS? Lynx can, and works fine for me, and while it isn’t a standardized browser, I personally kinda use it as such.

I think we should make a browser with full support for CSS and HTML, but without a JS engine. I’ve looked it up, and that is not easy enough for me to do. There are “bake-your-own-browser” systems out there, but JS is used for browser UI itself, so removing it is not encouraged.

Do I forsake those browsers that can’t read HTTPS?

I’d ignore transport altogether, which can be proxied, and focus on rendering and navigation.
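
To illustrate the “can be proxied” part: a machine that can’t do modern TLS could point its old browser at something like this toy proxy running on more capable hardware, which re-fetches everything over HTTPS and hands back plain HTTP locally. GET only, no error handling, and the port is arbitrary; purely a sketch of the idea, not something to actually deploy.

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import urlopen
    from urllib.parse import urlsplit, urlunsplit

    class DowngradeProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            # A browser configured to use a proxy sends the full URL in the
            # request line; swap its scheme for https and fetch upstream.
            parts = urlsplit(self.path)
            upstream = urlunsplit(("https",) + parts[1:])
            with urlopen(upstream) as resp:
                body = resp.read()
                status = resp.status
                ctype = resp.headers.get("Content-Type", "text/html")
            self.send_response(status)
            self.send_header("Content-Type", ctype)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), DowngradeProxy).serve_forever()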

I doubt making a browser is a good idea. The effort dwarfs the rest of the project. There are enough legacy browsers, and it is easy to disable javascript, web fonts, and even css or images in a modern browser.

Also, I don’t think a new take on ‘works on any browser’ should simply mean ‘works with ancient browsers’. It should probably also mean ‘works with various a11y mechanisms’. I guess that the older, simpler web is easier for, say, text2speech to deal with, but I bet there are some things about ancient web pages that are also bad for some a11y mechanisms. I don’t know, but a useful new take would include research on that.

I don’t get to ignore it, and in some ways it has become a political act. For instance, for almost all my clients, we redirect HTTP to HTTPS. Why? At some point we want to funnel them to a point where we require a secure connection. Capitalist uses of the web require HTTPS, basically.

Some folks keep pointing out that not everyone can use HTTPS, though I never get any real qualifiers from such statements. For instance, are we talking about a developing region with older versions of Android? Are we talking about some mythical community of blind users (because a11y is more than just that “text2speech” functionality)?

Here’s what I’ve learned about my work: most folks don’t have an opinion about their web protocol, but those that do require HTTPS. And there are plenty of reasons to not like HTTP, two being:

  1. performance with very complex (and likely bloated) web resources, and
  2. the ability to change the packets en route

I spent about an equal amount of time trying to get people to stop using tables for HTML layout as I did trying to get people to not use passwords over HTTP, and for the most part both (community) efforts have paid off.

Since I make simple, easy to use websites, I really only care about other people fucking around with the data. Governments and Comcast, for instance.

Since I’m resetting interi, I think of it as an artifact itself, or rather that my efforts in “the interi project” will result in a hypermedia document collection. I think about Dat and IPFS, or even how much I loved Encarta on CD! I recall thinking I would make local disk web sites for CD-ROMs in high school…

My point is, as the server operator, I am aware of how my node can be manipulated on the protocol level, while as a web publisher I can focus on navigation and useful linking.

It’s not, though; it gets harder with each release, as web tech develops new fractals of privacy/security concerns. I’ve decided that, moving forward, I’m going to keep a log of every issue I have with all the browsers. I can’t point to anything right now, but if my sense of it is correct, Firefox is so complicated that almost every person has to make a compromise to even use it.

I’ll turn off javascript, fonts, CSS and images, but does the browser phone home? Who all does it phone? That’s the part that gets me! I think that is worth the effort, just to have a piece of software that isn’t using the network to report on you.

“Web-as-documents” is probably what Mike meant above, whereas I leaped upon the sub-point about protocol (I’d love feedback on that discussion).

Those projects suffer from that strange phenomenon where a network is a living thing, and leaves certain practices and presumptions behind. The web is cool like that, but also painful as a living document store as far as browsers are concerned. I mean… Chrome just shaved off a huge amount of the web with required HTTPS, because vast tracts of the information superhighway don’t have a litter clean-up sponsor, and will never be set up for HTTPS.

There are some web technologies I haven’t looked deeply enough into, such as WebAssembly and WebExtensions. Right now I am focusing on learning about human nature and how the mind forms context. Because isn’t that what this is really about? Context. :slight_smile:

Thinking on this, I wonder if configuration-as-distro would work. I’ve played with setting Firefox config by a settings file in the home directory (I forget what it is called exactly, but I really geeked out on it!).

I’m less concerned about the rebranding efforts to remove trademarks, and more interested in something that creates a minimal browsing experience. Of course, that creates a weird scenario where you are installing potentially hundreds of megs of code that isn’t being used. Hmmm.
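
If I’m remembering right that the file is user.js in the profile directory, then a configuration-as-distro could be little more than a blessed set of prefs plus a script that writes them out. A tiny sketch of that idea; the pref names are from memory, so check each one against about:config before relying on any of this.

    # Pref names recalled from memory; verify each one against about:config.
    LOCKDOWN_PREFS = {
        "javascript.enabled": False,              # no JS until opted in
        "gfx.downloadable_fonts.enabled": False,  # no web fonts
        "permissions.default.image": 2,           # 2 = block images
        "network.cookie.cookieBehavior": 1,       # 1 = block third-party cookies
    }

    def render_user_js(prefs):
        lines = []
        for name, value in sorted(prefs.items()):
            if isinstance(value, bool):
                rendered = "true" if value else "false"
            elif isinstance(value, str):
                rendered = '"%s"' % value
            else:
                rendered = str(value)
            lines.append('user_pref("%s", %s);' % (name, rendered))
        return "\n".join(lines) + "\n"

    if __name__ == "__main__":
        print(render_user_js(LOCKDOWN_PREFS), end="")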

Transport is important of course, but I don’t think it’s a reasonable thing for ‘works in any browser’ to be concerned with. The button (the old version anyway) went on web pages and was about them as documents, not how they were delivered (if I don’t have my timeline mixed up, running across web pages served from FTP still hadn’t totally disappeared). And practically, it’s not something web publishers have control over or knowledge about.

However, if transport were to be considered in a new ‘works in any browser’ campaign, I’d have the material that is supposed to work in any browser have to work no matter how it got to the browser (transport neutral; http, https, ftp, sneakernet, dat, ipfs, etc.) rather than requiring it be available over http/0.9 or whatever would be needed for the most ancient browsers to be able to retrieve as well as display it. Effectively this would probably be a ‘works offline’ requirement.
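
A ‘works offline’ / transport-neutral requirement is also easy to check mechanically. As a toy example, this walks an HTML file and flags href/src values that hard-code a scheme, since those are the references that break when the same document arrives over dat, ipfs, or sneakernet. Only an illustration; a real check would look at more attributes (srcset, CSS url(), and so on).

    from html.parser import HTMLParser
    from urllib.parse import urlsplit
    import sys

    class SchemeCheck(HTMLParser):
        URL_ATTRS = {"href", "src"}

        def __init__(self):
            super().__init__()
            self.flagged = []

        def handle_starttag(self, tag, attrs):
            for name, value in attrs:
                # any non-empty scheme means the link is tied to a transport
                if name in self.URL_ATTRS and value and urlsplit(value).scheme:
                    self.flagged.append((tag, value))

    if __name__ == "__main__":
        checker = SchemeCheck()
        with open(sys.argv[1], encoding="utf-8") as fh:
            checker.feed(fh.read())
        for tag, value in checker.flagged:
            print("scheme-absolute reference in <%s>: %s" % (tag, value))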

Firefox is so complicated that almost every person has to make a compromise to even use it. I’ll turn off javascript, fonts, CSS and images, but does the browser phone home? Who all does it phone? That’s the part that gets me! I think that is worth the effort, just to have a piece of software that isn’t using the network to report on you.

A new browser is a huge effort, even one that mostly reuses existing components. I just wouldn’t underestimate the likelihood that it will be a wasted effort. I wonder if running Firefox in a jail would be a more promising defense, in addition to turning off the obvious features within Firefox.


Ya know, I used to think about CD-ROMs and help file systems all the time, but by the time I got into web publishing those were already kinda deprecated. And because of that, I’ve never really sat down and figured out what is happening when a web browser opens a local file. I mean, I studied HSTS and Content Security Policies so I could ensure secure communications for users of the sites I host, but how does that play into individual HTML documents…

I suppose a goal could be to use a URL scheme site wide that would make interi, um, resource agnostic?

Assuming the rumor to be true, what would it mean if Edge switched to Chromium as a render engine?

For market share, not much, see the previous section. Potentially, it could mean a tiny bump up, as Edge would become more compatible with the Chromium-dominated web and therefore attract more users as a browser in which the web “works”. I don’t think that would make a serious dent though, because it’s not a unique capability.

For developers, it’s one less browser engine to worry about, if they were worrying about it in the first place (unlikely). Less testing effort, less browser-specific bug fixing, a slight productivity boost.

For the open web: it’s complicated. If I were to put on the hat of a pragmatic developer, I fail to see the big gain in having competing browser engines. Pragmatically speaking, if I enter code and run it, I want the output to be the same, no matter the engine. Getting different results or even bugs in this output is not a gain, it is a pain. Having feature disparity between engines sucks, it means building multiple versions of the same thing. You can give it nice names like “progressive enhancement” but that doesn’t change the fact that it sucks, from a purely pragmatic productivity point of view.

My take on this is that when it comes to the open web, it’s not browser engines being the driving force of keeping the web open. If that were true, the open web is already lost, given the Chromium dominance. Instead, I opt for diversity, competition and collaboration in the decision making process regarding web standards. Fewer engines could be acceptable for as long as ownership and the standards process regarding those fewer engines are diverse, and not controlled by one organization.

I agree with a whole bunch of what is said in this essay, but I am still part of the 100 million or so in the minority that use Firefox, and I rarely touch Chrome at all on my own devices.

It makes me think we need a new way of thinking about this stuff, because services at scale serve small nation-sized populations spread across large areas, and it seems, hmmm, stupid to make decisions for people at that scale.

I agree. 100% Chromium-based browser market share would be OK because Chromium is free software and the ability to fork is real. But I think what really matters is not at the level of browser code, but getting builds of that code that act as user agents (i.e. in the interests of their direct users) into the hands of the multitudes.

Firefox is probably the best we have at that now, and it’s deeply imperfect, but the imperfections have nothing to do with the technical implementation of the browser and everything to do with the imperfections of Mozilla, and the ways that Chrome and lesser proprietary browsers are far worse than Firefox as user agents have nothing to do with their technical implementations and everything to do with their sponsors’ far worse imperfections than Mozilla.


I’m glad the above quoted sentence catches my feelings on the subject in the way only a run-on mobius-strip of a sentence can which is why I’m glad. :slight_smile:


In other news: aside from being not the average web user, am I detached from reality? I use Firefox, and rarely have any trouble with websites. But I also don’t visit bad websites, “bad” being pretty broad and mostly non-technical. I also have a non-smart phone, and use jabber to talk to my peeps.

As far as I’m concerned, I’m just waiting for everyone else to catch up.


Ya know, I feel like my web position, or perhaps my position on how the web should grow, is very similar to how I feel my local tribal politics are, in that while we are trying to win Dem seats all over the place, locally we kinda hate Dems. As in, when one takes an extreme position in reference to the perceived mainstream/default position, that POV sees the progressive forces as too conservative.

I’m personally trying to shift my position, as it were. Because I love the web too much. I really do like HTML, warts and all. I like the web as a platform and a strata of knowledge. (And I’d like to survive outside of Oakland one day, so I’m trying to open myself up to more “mainstream” politics.)

What are you suggesting there with the user agents? That’s a part of the web stack that hasn’t really concerned me as a hoster or publisher, aside from logging. I know some folks discriminate by user agent, but that never made much sense to me since it is arbitrary and easily changed. However! Recently I’ve been reading a lot of XEPs, and user-agent seems like something that could be expanded to be meaningful to humans.

I’m thinking of how servers and clients negotiate which features they share, that kinda thing. And wow, I feel like I just glimpsed the future, where such a negotiation could be made and clients could actually provide that mythical “progressive web” experience!

Because I’d much rather have my browser tell me a given resource is unavailable due to my user profile, than load a broken mess of document and UI design that might still leak my info.
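
HTTP already has a crude version of that negotiation in the Accept header. Here’s a minimal sketch of the behavior I’d want, as a hypothetical handler (not any particular server’s API): if the client’s stated profile doesn’t match, say so plainly instead of shipping markup it can’t use.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    PAGE = b"<!doctype html><title>hello</title><p>A document.</p>"

    class NegotiatingHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            accept = self.headers.get("Accept", "*/*")
            if "text/html" in accept or "*/*" in accept:
                status, ctype, body = 200, "text/html; charset=utf-8", PAGE
            else:
                # 406 Not Acceptable: no representation matches the client
                status, ctype = 406, "text/plain; charset=utf-8"
                body = b"No representation of this resource matches your client's profile.\n"
            self.send_response(status)
            self.send_header("Content-Type", ctype)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8000), NegotiatingHandler).serve_forever()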

Also, I’m just going to merge this into that other thread. :slight_smile:


Just sticking this here: Goodbye, EdgeHTML

That last paragraph sparked a thought (emphasis mine):

If you care about what’s happening with online life today, take another look at Firefox. It’s radically better than it was 18 months ago — Firefox once again holds its own when it comes to speed and performance. Try Firefox as your default browser for a week and then decide. Making Firefox stronger won’t solve all the problems of online life — browsers are only one part of the equation. But if you find Firefox is a good product for you, then your use makes Firefox stronger. Your use helps web developers and businesses think beyond Chrome. And this helps Firefox and Mozilla make overall life on the internet better — more choice, more security options, more competition.

A useful endeavor for someone in my position (someone that organizes knowledge but doesn’t program desktop apps) may be figuring out what that equation is, and then making sure everyone knows what it is. Not just a checklist of things that we all agree on (“the power of open”, “freedom and privacy”, etc.), but doing the work to dive deep into each of those components and document it somewhere. Because at this point I’ve forgotten more good ideas than I remember, so it is obviously very complicated.


I don’t have troubles either, or if I do, they aren’t Firefox-specific – they are related to blocking ads/trackers/webfonts. But I can imagine there being lots of edge cases that cause some people to have problems with websites tested only with Chrome. I don’t really care about that, because I don’t really care about browser engine diversity, I care about browsers acting as user agents (and my use of those words is just to emphasize that browsers should be agents for their users, acting in users’ best interests). Sadly Mozilla only contributes to that a small amount, since they don’t ship with the state of the art (e.g. uBlock Origin, Tor) included. Brave browser is surely the best user agent (ships with ad blocking and Tor) at this point, though I’m still only an occasional user because I’m also slow to change and have to be skeptical of anything attached to a cryptocurrency scheme (BAT in this case).


If a website has a feed I generally don’t visit it again, and I cron newsboat to download and archive everything in a feed. I think of that as my user agent, in a very bot-like manner. :slight_smile:

Okay, so Tor, that is one way I’d accept serving interi over HTTP, because it has an added layer of assurance that the data hasn’t been manipulated in transit. And that is how I think most onion sites do it, because of how wonky it is to get a CA to cover an onion address, hence not serving over HTTPS.

I’d love to get an interi.onion site going, but I figured it is essentially a different version of the site (or I guess we call it a collection). I won’t get into a whole bunch of detail here, but I found myself, over the last couple years, slowly eliminating a bunch of protocols as ways to serve interi:

  • http - targeted for data manipulation
  • gopher - not substantially simpler than my HTML aesthetic
  • ipfs and dat - weird upload limits and a bunch of misc.
  • tor - no substantial difference over visiting via relay over https (I’m sure I’m saying that wrong…)

Now, I’ve presumed a lot, but also I look at really high-level snapshots of folks that visit my site, and as far as I can tell rarely does anyone visit who can’t easily view the site over HTTPS.

Back to my point: if we look at “browsers” as “user agents”, the web we know is just really, truly and perhaps irrevocably fragmented, for all time. I don’t expect lynx to render javascript, and I certainly don’t expect Chrome to be cool and read everything forever.

I’m not sure where I’m leaning with this. I still think, maybe a new browser. Not a browser that goes after the latest trends, but maybe one that tries to be the best document browser it could be. And maybe it has good defaults, opts in to all decisions, and places hard limits on processing. ¯\_(ツ)_/¯


Coming in late. Was not really up for much conversation for a bit.

I’ve thought about this recently. I am not a web guy. But considering that HTML5 is modular, there is an internal core to the original spec which is just called HTML5 markup. It appears to be the core of HTML5 which could be rendered JavaScript-free. Canvas elements, web sockets and so forth are separate objects which I think might be safely ignored.
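
To put a rough edge on “might be safely ignored”, here’s a toy filter that re-serializes a page while dropping the app-ish elements and keeping the document vocabulary. The DROP set is just my guess at where the dividing line sits, not anything the spec defines, and entity handling is glossed over.

    from html.parser import HTMLParser

    DROP = {"script", "canvas", "embed", "object", "iframe", "audio", "video"}

    class CoreMarkup(HTMLParser):
        def __init__(self):
            super().__init__()
            self.out = []
            self.skip = 0   # nesting depth inside dropped elements

        def handle_starttag(self, tag, attrs):
            if tag in DROP:
                self.skip += 1
            elif self.skip == 0:
                self.out.append(self.get_starttag_text())

        def handle_endtag(self, tag):
            if tag in DROP:
                self.skip = max(0, self.skip - 1)
            elif self.skip == 0:
                self.out.append("</%s>" % tag)

        def handle_data(self, data):
            if self.skip == 0:
                self.out.append(data)

    def strip_app_bits(html):
        p = CoreMarkup()
        p.feed(html)
        return "".join(p.out)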

In terms of CSS I’m not sure where to begin. Are there any terribly evil bits of CSS which need excluding? The one thing that has me arching an eyebrow is there doesn’t seem to be a “base” CSS 4, just an ever-expanding list of modules. Kind of hard to plant a flag there. I seem to recall CSS being discovered to be Turing complete at some point. So something needs to be stripped out there.

Agreed. Transport is a kissing cousin, but a slightly different problem. Not that it is unimportant, but it’s another axis of a very large multivariable problem. Transports are also situational/contextual. The IETF is playing around with delivering HTTP over UDP as HTTP/3, which is built on the QUIC transport protocol. Different transports might also appeal to people depending on their document delivery needs (example: darknets for subversive documents). Though I’d be potentially interested in a new transport for documents if applications and APIs are going to continue to clutter HTTP(S). I keep wondering how usable GNUnet has become but can’t be bothered to find the time to set it up.

Dark times, though. Traditional browser usage numbers have historically given a certain amount of weight to various voices at the W3C. Which is exactly why Mozilla found itself declawed a bit when DRM reared its head in the recent past. Google also has a habit of driving standards by implementing first in Chrome and standardizing at the W3C when they can show off their success. The only way I see that changing is if Microsoft and others with Blink forks start joining in that game.

Not to say standards and processes won’t win out in the end. I think they could. I suspect, however, that we are going to repeat some very ’90s IE-style mistakes with the web before we learn our lesson, and that might be a long, painful process.


My brain is grinding on this in the background. I’m trying to redesign my IA (information architecture) around the concept of URLs as links between resources. It’s… kinda weird in my head.

Let’s say I make my site available over HTTP, HTTPS, FTP, Gopher, and Dat. Keep it simple, document store based on directories, basic filesystem metaphor stuff.

Okay, what do folks link to? A given resource at a path will be the same, if I generate them correctly. I’m not entirely sure about Dat, but I believe all the rest can be easily set up on a single server, serving the files over different ports.

So… the user agent? Does that decide? I think that decides. So Chrome always chooses HTTPS, whereas without a plugin or config, Firefox tries HTTP first. And ftp always chooses FTP.

I guess Firefox’s default behavior bugs me, and that realization really bugs me. Because I am always the last one to arrive to the “market share meetings”, and I’m never on board. I don’t think our species should make decisions like that.

I think dynamic interactions should be opt-in, just like notifications and using a camera, in the browser. That includes opting in to javascript. Not sexy defaults.

Meh. Maybe we just need <!doctype app>.

Standards-wise, the URL should decide. I can issue a “gopher://” or an “http://” or an “https://” to the same domain in lynx. Likewise in Firefox you can make it explicit with an “ftp://”, an “http://”, or an “https://”.

Browsers fucked it up when they started hiding the transport in the URL. But it is still the deciding factor.
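
That’s easy to see if you pull a URL apart: the host and path stay put and only the scheme changes, so the scheme really is the deciding factor. (example.org and the path are just placeholders here.)

    from urllib.parse import urlsplit

    for url in ("gopher://example.org/notes",
                "https://example.org/notes",
                "ftp://example.org/notes"):
        parts = urlsplit(url)
        # same host and path; only the scheme (transport) differs
        print(parts.scheme, parts.netloc, parts.path)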

I didn’t know that, but it’s because I aggressively push out HTTPS Everywhere to my own machines and every machine I provide support for.

That being said, why not shut off HTTP / port 80?

At this point in time, the answer to that question is: I haven’t set it up so that port 80 doesn’t route generally but is still available for Let’s Encrypt. This is not a difficult technical task, just one of those things that gets passed over because “we might as well forward 80 to 443”. :slight_smile:
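
For the record, the logic is tiny either way; in nginx or Apache it’s a couple of lines of config. Spelled out as a sketch (webroot path, port binding and the fallback domain are placeholders), port 80 would only answer ACME challenges and 301 everything else to HTTPS:

    from http.server import BaseHTTPRequestHandler, HTTPServer
    import os

    WEBROOT = "/var/www/acme"   # where a webroot-style ACME client writes challenges
    CHALLENGE_PREFIX = "/.well-known/acme-challenge/"

    class RedirectExceptAcme(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path.startswith(CHALLENGE_PREFIX):
                # serve the challenge file so certificates can still renew
                token = os.path.basename(self.path)
                challenge = os.path.join(WEBROOT, CHALLENGE_PREFIX.lstrip("/"), token)
                try:
                    with open(challenge, "rb") as fh:
                        body = fh.read()
                except OSError:
                    self.send_error(404)
                    return
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
            else:
                # everything else gets pushed to HTTPS
                host = self.headers.get("Host", "example.org").split(":")[0]
                self.send_response(301)
                self.send_header("Location", "https://%s%s" % (host, self.path))
                self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 80), RedirectExceptAcme).serve_forever()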