Why inference helps me personally

wordpress
jabber
xmpp
xeps
inference

#1

Lately I’ve been ramping up my interest in triple stores and SPARQL. And as I operate #mage-party, I find myself with a very concrete example of why I want to use them.

Okay, XEPs, amirite? I want to have a straightforward guide to assist in federating the jabber network. While I love Prosody, I want to document the other servers, as well as all the clients.

One issue of course is which XEPs are supported by which client, server, config, whatever.

Think of this: a best practice for Prosody that I’ve never seen explicitly stated as such is to clone the entire community modules repo. Then you can basically turn on any module in your config; they are all there.

But if I am listing a page of XEPs supported by Prosody, which do I list? The ones in the community modules? Those supported by core?

Now multiply that by every jabber server; okay, there aren’t that many. But they do come and go, so tracking that over time is a bit difficult, especially if recommending a particular setup.

Okay, now let’s break all these things apart and relate them! We can suddenly query a XEP, get back a list of servers and clients that support it, as well as the nature of that support (core, contributed, experimental, stable, deprecated, no support, dev hate, etc.).
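As a minimal sketch of the kind of query I mean — using a plain Python list as a stand-in for a real triple store, and with made-up support statuses (none of these rows are a verified support matrix):

```python
# Toy in-memory "triple store": each row records which implementation
# supports which XEP, and at what level. All statuses here are
# illustrative placeholders, not real data.
SUPPORT = [
    # (implementation,  kind,     XEP,        status)
    ("Prosody",        "server", "XEP-0045", "core"),
    ("Prosody",        "server", "XEP-0313", "community module"),
    ("ejabberd",       "server", "XEP-0045", "core"),
    ("Conversations",  "client", "XEP-0045", "stable"),
]

def who_supports(xep):
    """Map each implementation that supports `xep` to its (kind, status)."""
    return {impl: (kind, status)
            for impl, kind, row_xep, status in SUPPORT
            if row_xep == xep}

print(who_supports("XEP-0045"))
```

With real data behind it, the same shape of query answers “which servers and clients support this XEP, and how well?” in one shot — which is exactly what a per-XEP documentation page needs.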

Another pool of integrations would be something like WordPress, which has become such an ecosystem that any new technology will eventually either disappear or gain a WordPress integration (or both…).

If WordPress happened to support a XEP, I’d want it to be queryable. But imagine all the things that WP would have integrated. Some in core, some via plugins, some overlapping with jabber servers!

And being able to model these relationships allows me to build automated systems to track the status of those facts, enabling narrative-driven queries that produce timely, useful documents for people.

Is this starting to make sense? :slight_smile:


#2

The first sentence doesn’t seem like the rest, but it most interests me. Say more!

Oh, the title is pretty cool too. :slight_smile:


#3

Today I came up with a very solid example: resume generation.

If I loaded up all my projects, how they relate to each other, who I collaborated with, and so on, then I could build a resume based on criteria like:

  • show three to five technical/bizdev/advocacy accomplishments from community-based orgs/non-profits/co-ops from the last five years
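The bullet above can be sketched as a filter over project records. Everything here is hypothetical — the project names, fields, and years are placeholders I made up, not my actual history:

```python
# Hypothetical project records; the field names (skill, org, year)
# are assumptions about how such a dataset might be modeled.
PROJECTS = [
    {"name": "Federation guide",  "skill": "advocacy",
     "org": "community", "year": 2021},
    {"name": "Co-op site build",  "skill": "technical",
     "org": "coop",      "year": 2019},
    {"name": "Grant outreach",    "skill": "bizdev",
     "org": "nonprofit", "year": 2015},
    {"name": "Corporate consult", "skill": "technical",
     "org": "corporate", "year": 2022},
]

def resume_entries(skills, orgs, years_back, current_year, limit=5):
    """Pick up to `limit` accomplishments matching skill, org type,
    and recency -- the query described in the bullet above."""
    cutoff = current_year - years_back
    hits = [p for p in PROJECTS
            if p["skill"] in skills
            and p["org"] in orgs
            and p["year"] >= cutoff]
    # Newest first, capped at the "three to five" style limit.
    return sorted(hits, key=lambda p: p["year"], reverse=True)[:limit]

print(resume_entries({"technical", "bizdev", "advocacy"},
                     {"community", "nonprofit", "coop"},
                     years_back=5, current_year=2023))
```

In a real triple store this would be a single SPARQL query instead of a list comprehension, but the shape of the question is the same.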

That’s a useful query for me! Because my work is so all over the place, and the “portfolios” I fulfill in various lines of work are numerous, I have a difficult time tracking what I know, how I know it, and how it applies.

And because I’ve built enough resumes over time, I see how some collaborations or clients fade from view, only to come back years later, and I struggle to track the breadth of the work I’ve done and to show that I support systems long term… I hadn’t really thought about the different ways I could express my expertise and project history by tapping a semantic database set up for it. I had just been building it in my head.

Oh, and that’s the big one. I’ve got to get this out of my head. I don’t know what to think, but I hope once I get this out it will clear the way for other things. Right now this is taking up a lot of space.

There are actually a few other queries I have tucked away in the odd notebook that I’d like to try answering with a small, maiki-sized dataset, and then see if I can get others to take them and run.

And finally, this is important because while we will be radically open and inclusive and transparent, not many people will emulate this kind of knowledge engine. Fortunately my bar for success at this point is incredibly low. :slight_smile: I’m certainly going to use it! But I don’t think “knowledge engines”, “wikae”, or any of the other popular and completely legit words people use to describe their online resources are of general interest, nor should they be.

But damn, ain’t there a wiki for everything?! WikiApiary tracks something like 1,800 public wikae right now.

Is that enough to take on the giants? One giant? Anything?

At the end of the day I’m just trying to get by so I can carry on the delusion that people will use good services provided to them, and there isn’t a reason to stop building until that is the norm (not stopping is a horrible trait for a delusion, I agree!).