Discussion:
Meta: a usenet server just for sci.math
Ross Finlayson
2024-03-14 19:41:18 UTC
Permalink
So, the usual abstraction of request/response,
and the usual abstraction of header and body,
and the usual abstraction of composition and transport,
and the usual abstraction of multiplexing mux/demux,
and the usual abstraction of streaming and stuffing,
and the usual abstraction of handles and layers,
in the usual abstraction of connections and resources,
of a usual context of attachments and sessions,
in the usual abstraction of route links and handles,
makes for a usual abstraction of protocol,
for connection-oriented architectures.
Hi-Po I/O

"Protocol" and "Negotiation"

The usual sort of framework, for request/response or
message-oriented protocols, often has a serialization
layer, which means from the wire to an object representation,
and from an object to a wire representation.

So, deserializing involves parsing the contents as they arrive
on the wire, and constructing an object as a result. Then,
serializing is the complementary converse notion, iterating
over the content of the object and emitting it to the wire.
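As a rough sketch of such a serialization layer, with hypothetical names throughout (the wire format here, a length-prefixed header followed by the body, is only an illustration, not any particular framework's format):

```python
import io
import struct

# Hypothetical message object: a header and a body of raw octets.
class Message:
    def __init__(self, header: str, body: bytes):
        self.header = header
        self.body = body

def serialize(msg: Message) -> bytes:
    """Iterate over the object's content and emit it to the wire:
    a 4-byte big-endian header length, the header, then the body."""
    h = msg.header.encode("utf-8")
    return struct.pack(">I", len(h)) + h + msg.body

def deserialize(wire: bytes) -> Message:
    """Parse the contents as they arrive on the wire and
    construct the complementary object."""
    buf = io.BytesIO(wire)
    (hlen,) = struct.unpack(">I", buf.read(4))
    header = buf.read(hlen).decode("utf-8")
    body = buf.read()
    return Message(header, body)

# Round trip: wire -> object and object -> wire are converses.
m = deserialize(serialize(Message("POST sci.math", b"hello")))
```

On a real stream the body would also need a declared length or terminator; here the rest of the buffer stands in for it.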

Here the wire is an octet-sequence; for a connection that's
bi-directional there is the request or client wire and the response
or server wire. Then the usual matter of protocol is
communicating sequential processes, either taking turns
talking on the wire, "half-duplex", or multiplexing events
independently, "full-duplex".

So, the message deserialization and message composition
result in the protocol, as about how those get nested, what's
generally called "header and body". So, a command or
request has a header and a body, then in some protocols
that's all there is to it, while for example in other protocols
the command is its own sort of header and its body is the
header and body of a contained message, treating messages
as first class. That's basically how there result all sorts of notions
of header and body, and the body and payload; these are the
usual kinds of ideas and words that apply to pretty much all
these kinds of things, and it's usually simplified as much as
possible, so that frameworks implement all this and then
people implementing a single function don't need to know
anything about it at all, instead working just in terms of objects.

Protocol usually also involves the stateful, or session:
anything that's static or "more global" with respect to
the scope, the state, the content, the completions,
the protocol, the session.

The idea then I've been getting into is a sort of framework,
which more or less supports the protocol in its terms, and
the wire in its terms, and the resources in their terms, where
here "the resources" usually refers to one of two things:
the "logical resource", that is a business object or has an identifier,
and the "physical" or "computational resource", which is of
the resources that fulfill transfer or changes of the state of
the "logical resources". So, usually when I say "resources"
I mean capacity, and when I say "objects" it means what's
often called "business objects", or the stateful representations
of identified logical values over their lifecycle of being, objects.


So, one of the things that happens in the frameworks
is the unbounded: what happens when messages
or payloads get large, in terms of the serial action that
reads or writes them off the wire, into an object, is that
it fills all the "ephemeral" resources, vis-a-vis
the "durable" resources, where the goal is to pass along the
"streaming" of these, by coordinating the (de)serialization
and (de)composition, what makes it like so.

start ... end

start ... first ... following ... end

Then another usual notion besides "streaming", a large
item broken into smaller, is "batching", small items
gathered into larger.
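A minimal sketch of that framing convention, with illustrative frame names and an arbitrary chunk limit:

```python
def frame(payload: bytes, limit: int):
    """Yield (kind, chunk) frames: a payload that fits under the
    limit goes out as start ... end; a larger one is streamed as
    start, first, following ..., end."""
    if len(payload) <= limit:
        yield ("start", b"")
        yield ("end", payload)
        return
    yield ("start", b"")
    chunks = [payload[i:i + limit] for i in range(0, len(payload), limit)]
    yield ("first", chunks[0])
    for c in chunks[1:-1]:
        yield ("following", c)
    yield ("end", chunks[-1])

def reassemble(frames) -> bytes:
    """The batching converse: gather the smaller items into a larger."""
    return b"".join(chunk for _, chunk in frames)
```

A receiver that only sees start ... end never needs to know streaming happened underneath.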


So what I'm figuring for the framework and the protocols
and the negotiation, is what results a first-class sort of
abstraction of serialization and composition as together,
in terms of composing the payload and serializing the message,
of the message's header and body, that the payload is the message.

This might be familiar in packets, as, nested packets,
and, collected packets, with regards to that in the model
of the Ethernet network, packets are finite and small,
and that a convention of sockets, for example, establishes
a connection-oriented protocol, for example, that then
either the packets have external organization of their
reassembly, or internal organization of their reassembly,
their sequencing, their serialization.


Of course the entire usual idea of encapsulation is to
keep these things ignorant of each other, as it results
making a coupling of the things, and things that are
coupled must be de-coupled and re-coupled, as sequential
must be serialized and deserialized or even scattered and
gathered, about then the idea of the least sort of
"protocol or streaming" or "convention of streaming",
that the parsing picks up start/first/following/end,
vis-a-vis that when it fits in start/end, then that's
"under available ephemeral resources", and that when
the message as it starts getting parsed gets large,
that makes for "over available ephemeral resources",
so it's to be coordinated with its receiver or handler,
whether there's enough context, to go from batch-to-streaming
or streaming-to-batch, or to spool it off into what results
anything other than an ephemeral resource, so it doesn't
block the messages that do fit, "under ephemeral resources".
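One way to sketch that coordination, under the assumption of a simple declared-size threshold (the limit and the names here are hypothetical):

```python
import io
import tempfile

EPHEMERAL_LIMIT = 1 << 16  # illustrative threshold, 64 KiB

def receive(chunks, declared_size: int):
    """Choose a sink before parsing the body: an in-memory buffer
    when the message fits under available ephemeral resources, else
    a spool to a durable (temp) file, so the large message doesn't
    block the messages that do fit."""
    if declared_size <= EPHEMERAL_LIMIT:
        sink = io.BytesIO()
    else:
        sink = tempfile.TemporaryFile()
    for c in chunks:
        sink.write(c)
    sink.seek(0)
    return sink
```

In a fuller framework the handler would also be consulted, since with enough context it may consume the stream directly instead of spooling.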


So, it gets into the whole idea of the difference between
"request/response" of a command invocation in a protocol,
and, "commence/complete", of an own sort of protocol,
within otherwise the wire protocol, of the receives and
handlers, either round-tripping or one-way in the half-duplex
or full-duplex, with mux/demux both sides of request/response
and commence/complete.


This then becomes a matter relevant to protocol usually,
how to define, that within the protocol command + payload,
within the protocol header + body, with a stream-of-sequences
being a batch-of-bytes, and vice-versa, that for the conventions
and protocols of the utilization and disposition of resources,
computational and business, results defining how to implement
streaming and batching as conventions inside protocols,
according to inner and outer the bodies and payloads.


The big deal with that is implementing that in the (de)serializers,
the (de)composers, then that a complete operation can
exit as of start -> success/fail, while a commence might start but
then fail while it's underway, vis-a-vis that it's "well-formed".

So, what this introduces, is a sort of notion, of, "well-formedness",
which is pretty usual, "well-formed", "valid", these being the things,
then "well-flowing", "viable", or "versed" or these automatic sorts
of notions of batching and streaming, with regards to all-or-none and
goodrows/badrows.


Thusly, getting into the framework and the protocols, and the
layers and granular and smooth or discrete and indiscrete,
I've been studying request/response and the stateful in session
and streaming and batching and the computational and business
for a long time, basically that any protocol has a wire protocol,
and a logical protocol above that, then that streaming or batching,
is either "in the protocol" or "beneath the protocol", (or, "over the
protocol", of course the most usual notion of event streams and their
batches). Here the idea is to fill out, according to message
composition, what then can result "under the protocol": a simplest
definition of (de)serialization and (de)composition,
for the well-formedness and well-flowingness, the valid and versed,
that for half-duplex and full-duplex protocols or the (de)multiplexer,
makes it so possible to have a most usual means to declare
under strong types, "implement streaming", in otherwise
a very simple framework, that has a most usual adapter
the receiver or handler when the work is "within available
ephemeral resources", and falls back to the valid/versed
when not, all through the same layers and multiplexers,
for pretty much any sort of usual connection-oriented protocol.


Hi-Po I/O
Ross Finlayson
2024-03-28 04:05:44 UTC
Permalink
arithmetic hash searches
take a hashcode, split it up
invert each arithmetically, find intersection in 64 bits
fill in those
detect misses when the bits don't intersect the search
when all hits, then "refine", next double range,
compose those naturally by union
when definite misses excluded then go find matching partition
arithmetic partition hash
So, the idea is, that, each message ID, has applied a uniform
hash, then that it fills a range, of so many bits.
Then, its hash is split into smaller chunks the same 1/2/3/4
of the paths, then those are considered a fixed-point fraction,
of the bits set of the word width, plus one.
Then, sort of pyramidally, is that in increasing words, or doubling,
is that a bunch of those together, mark those words,
uniformly in the range.
For example 0b00001111, would mark 0b00001000, then
0b0000000010000000, and so on, for detecting whether
the hash code's integer value, is in the range 15/16 - 16/16.
The idea is that the ranges this way compose with binary OR,
then that a given integer can be
detected to be out of the range if its bit is zero, and
otherwise that it may or may not be in the range.
0b00001111 number N1
0b00001000 range R1
0b00000111 number N2
0b00000100 range R2
0b00001100 union range UR = R1 | R2 | ....
missing(N) {
return ((UR & RN) == 0);
}
This sort of helps where, in a usual hash map, determining
that an item doesn't exist, is worst case, while the usual
finding the item that exists is log 2, then that usually its value
is associated with that, besides.
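A short sketch of the range stamp in these terms, choosing one plausible bit-numbering convention (the post's own examples may number bits differently):

```python
RANGE_BITS = 8   # R: number of sub-ranges, one bit each in the stamp
HASH_BITS = 4    # width of each split hash number, e.g. 0b1111

def mark(n: int) -> int:
    """Project n in [0, 2^HASH_BITS) onto its sub-range bit,
    treating n as a fixed-point fraction of the full range."""
    return 1 << (n * RANGE_BITS >> HASH_BITS)

def union(ns) -> int:
    """Ranges compose with binary OR: UR = R1 | R2 | ..."""
    ur = 0
    for n in ns:
        ur |= mark(n)
    return ur

def missing(ur: int, n: int) -> bool:
    """Definite miss when the bit doesn't intersect the stamp;
    otherwise the value may or may not be in the partition."""
    return (ur & mark(n)) == 0
```

Note that two different numbers landing in the same sub-range give an ambiguous hit, which is the range collision discussed later, resolved only by digging up the partition.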
Then, when there are lots of partitions, and they're about
uniform, it's expected that the message ID is to be found in only
one of the partitions, so the partitions can be organized
according to their axes of partitions, composing the ranges
together, then that search walks down those, until it's either
a definite miss, or an ambiguous hit, then to search among
those.
It seems then for each partition (group x date), then those
can be composed together (group x month, group x year,
groups x year, all), so that looking to find the group x date
where a message ID is, results that it's a constant-time
operation to check each of those, and the data structure
is not very large, with regards to computing the integers'
offset in each larger range, either giving up when it's
an unambiguous miss or fully searching when it's an
ambiguous hit.
This is where, the binary-tree that searches in log 2 n,
worst-case, where it's balanced and uniform, though
it's not to be excluded that a usual hashmap implementation
is linear in hash collisions, is for excluding partitions,
in about constant time and space given that it's just a
function of the number of partitions and the eventual
size of the pyramidal range, that instead of having a
binary tree with space n^2, the front of it has size L r
for L the levels of the partition pyramid and r the size
of the range stamp.
Then, searching in the partitions, seems it essentially
results, that there's an ordering of the message IDs,
so there's the "message IDs" file, either fixed-length-records
or with an index file with fixed-length-records or otherwise
for reading out the groups' messages, then another one
with the message ID's sorted, figuring there's a natural
enough binary search of those with value identity, or bsearch
after qsort, as it were.
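The partition-level dig-up can be sketched as a plain binary search, here over an in-memory sorted list standing in for the sorted message-IDs file:

```python
import bisect

def dig_up(sorted_ids, message_id):
    """bsearch after qsort, as it were: binary search by value
    identity over the partition's sorted message IDs. Returns the
    record offset, or None on a false-positive dig-up (the range
    stamp matched but the ID isn't actually present)."""
    i = bisect.bisect_left(sorted_ids, message_id)
    if i < len(sorted_ids) and sorted_ids[i] == message_id:
        return i
    return None
```

With fixed-length records, the returned offset multiplies directly into a file seek, so the partition file need never be loaded whole.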
So, the idea is that there's a big grid of group x date archives,
each one of those a zip file, with the zip files sort of contrived
so that each entry is self-contained, and it sort of
results that concatenating them results in another. So
anyways, the idea then is for each of those, for each of
their message IDs, to compute its four integers, W_i,
then allocate a range, and zero it, then saturate each
bit, in each range for each integer. So, that's like, say,
for fitting the range into 4K, for each partition, with
there being 2^8 of those in a megabyte, or that many
partitions (512), or about a megabyte in space for each
partition, but really where these are just variables,
because it's opportunistic, and the ranges can start
with just 32 or 64 bits figuring that most partitions
are sparse, also, in this case, though usually it would
be expected they are half-full.
There are as many of these ranges as the hash is split
into numbers, is the idea.
Then the idea is that these ranges are pyramidal in the
sense, that when doing lookup for the ID, is starting
from the top of the pyramid, projecting the hash number
into the range bit string, with one bit for each sub-range,
so it's branchless, and'ing the number bits and the partition
range together, and if any of the hash splits isn't in the
range, a branch, dropping the partition pyramid, else,
descending into the partition pyramid.
(Code without branches can go a lot faster than
code with lots of branches, if/then.)
At each level of the pyramid, it's figured that only one
of the partitions will not be excluded, except for hash
collisions, then if it's a base level to commence bsearch,
else to drop the other partition pyramids, and continue
with the reduced set of ranges in RAM, and the projected
bits of the ID's hash integer.
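The level-by-level descent might be sketched like so, with an illustrative Node type standing in for the partition pyramid (stamps at each level are the OR of the children's stamps):

```python
class Node:
    """Minimal pyramid node: a range stamp, plus either child
    nodes or, at base level, the partition's ids. Illustrative."""
    def __init__(self, stamp, children=None, ids=None):
        self.stamp = stamp
        self.children = children or []
        self.ids = ids  # only set at a base-level partition

def lookup(node, projected_bits, message_id):
    """AND the ID's projected bits against the stamp: on zero,
    drop this whole partition pyramid (definite miss, fast exit);
    else descend, and at a base level dig up by exact match."""
    if node.stamp & projected_bits == 0:
        return False
    if node.ids is not None:
        return message_id in node.ids
    return any(lookup(c, projected_bits, message_id)
               for c in node.children)
```

Usage: build leaves per group x date, OR their stamps upward per month, year, and so on; at each level typically only one child survives the AND.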
The ranges don't even really have to be constant if it's
so that there's a limit so they're under a constant, then
according to uniformity they only have so many, eg,
just projecting out their 1's, so the partition pyramid
digging sort of always finds one or more partitions
with possible matches, those being hash collisions or
messages duplicated across groups, and mostly finds
those with exclusions, so that it results reducing, for
example that empty groups are dropped right off
though not being skipped, while full groups then
get into needing more than constant space and
constant time to search.
Of course if all the partitions miss then it's
also a fast exit that none have the ID.
So, this, "partition pyramid hash filter", with basically,
"constant and configurable space and time", basically
has that because Message Id's will only exist in one or
a few partitions, and for a single group and not across
about all groups, exactly one, and the hash is uniform, so
that hash collisions are low, and the partitions aren't
overfilled, so that hash collisions are low, then it sort
of results all the un-used partitions at rest, don't fill
up in n^2 space the log 2 n hash-map search. Then,
they could, if there was spare space, and it made sense
that in the write-once-read-many world it was somehow
many instead of never, a usual case, or, just using a
list of sorted message Id's in the partition and bsearch,
this can map the file without loading its contents in
space, except as ephemerally, or the usual disk controller's
mmap space, or "ready-time" and "ephemeral-space".
In this sort of way there's no resident RAM for the partitions
except each one with a fixed-size arithmetic hash stamp,
while lookups have a fixed or constant cost, plus then
also a much smaller usual log 2 time / n^2 space trade-off,
while memory-mapping active files automatically caches.
So, the idea is to combine the BFF backing file format
and LFF library file format ideas, with that the group x date
partitions make for the archive and active partitions,
then to have constant-time/constant-space partition
pyramid arithmetic hash range for lookup, then
ready-time/ephemeral-space lookup in partitions,
then that the maintenance of the pyramid tree,
happens with dropping partitions, while just
accumulating with adding partitions.
Yeah, I know that a usual idea is just to make a hash map
after an associative array with log 2 n lookup in n^2 space,
that maintenance is in adding and removing items,
here the idea is to have partitions above items,
and sort of naturally to result "on startup, find
the current partitions, compose their partition pyramid,
then run usually constant-time/constant-space in that
then ready-time/ephemeral-space under that,
maintenance free", then that as active partitions
being written roll over to archive partitions being
finished, then they just get added to the pyramid
and their ranges or'ed up into the pyramid.
Hmm... 32K or 2^15 groups, 16K or 2^14 days, or
about 40 years of Usenet in partitions, 2^29,
about 2^8 per megabyte or about 2^20 or one
gigabyte RAM, or, just a file, then memory-mapping
the partition pyramid file, figuring again that
most partitions are not resident in RAM,
this seems a sort of good simple idea to
implement lookup by Message ID over 2^30 many.
I mean if "text Usenet for all time is about a billion messages",
it seems around that size.
So, trying to figure out if this "arithmetic hash range
pyramidal partition" data structure is actually sort of
reasonable, gets into that it involves finding a balance
in what's otherwise a very well-understood trade-off,
in terms of the cost of a lookup, over time, and then
especially as whether an algorithm is "scale-able",
that even a slightly lesser algorithm might be better
if it results "scale-able", especially if it breaks down
to a very, very minimal set of resources, in time,
and in various organizations of space, or distance,
which everybody knows as CPU, RAM, and DISK,
in terms of time, those of lookups per second,
and particularly where parallelizable as with
regards to both linear speed-up and also immutable
data structures, or, clustering. ("Scale.")


Then it's probably so that the ranges are pretty small,
because they double, and whether it's best just to
have an overall single range, or, refinements of it,
according to a "factor", a "factor" that represents
how likely it is that hashes don't collide in the range,
or that they do.

This is a different way of looking at hash collisions,
besides that two objects have the same hash:
just that they're in the same partition of the range
of their integer value, for fixed-length uniform hashes.

I.e., a hash collision proper would always be a
redundant or order-dependent dig-up, of a sort,
where the idea is that the lookup first results
searching the pyramid plan for possibles, then
digging up each of those and checking for match.

The idea that group x date sort of has that those
are about on the same order is a thing, then about
the idea that "category" and "year" are similarly
about so,

Big8 x year
group x date

it's very contrived to have those be on the same
order, in terms of otherwise partitioning, or about
what it results that "partitions are organized so that
their partitions are tuples and the tuples are about
on the same order", so it goes, thus that uniformity
of hashes results being equi-distributed in those,
so that it results the factor is good and that arithmetic
hash ranges filter out most of the partitions, and,
especially, that there aren't many false-positive dig-up
partitions.

It's sort of contrived, but then it does sort of make
it so that also other search concerns like "only these
groups or only these years anyways", naturally get
dropped out at the partition layer, and, right in the
front of the lookup algorithm.

It's pretty much expected though that there would
be non-zero false-positive dig-ups, where here a dig-up
is that the arithmetic hash range matched, but it's
actually a different Message ID's hash in the range,
and not the lookup value(s).

Right, so just re-capping here a bit, the idea is that
there are groups, and dates, and for each is a zip file,
which is a collection of files in a file-system entry file
with about random access on the zip file each entry,
and compressed, and the entries include Messages,
by their Message ID's, then that the entries are
maybe in sub-directories, that reflect components
of the Message ID's hash, where a hash is a fixed-length
value, like 64 bytes or 128 bytes, or a power of two
and usually an even power of two thus a multiple of four,
thus that a 64 byte hash has (2^8)^64 many possible
values, then that a range, of length R bits, has R many
partitions, in terms of the hash size and the range size,
whether the factor is low enough, that most partitions
will naturally be absent most ranges, because hashes
can only be computed from Message ID's, not by their
partitions or other information like the group or date.

So, if there are 2^30 or a billion messages, then a
32 bit hash, would have a fair expectation that
unused values would be not dense, then for
what gets into "birthday problem" or otherwise
how "Dirichlet principle" makes for how often
are hash collisions, for how often are range collisions,
either making redundant dig-ups, in the way this
sort of algorithm services look-ups.
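The birthday-problem estimate can be made concrete: among n uniform b-bit hashes, the expected number of colliding pairs is about n(n-1)/2^(b+1):

```python
def expected_collision_pairs(n: int, bits: int) -> float:
    """Birthday-problem estimate of expected colliding pairs
    among n uniform hashes of the given bit width."""
    return n * (n - 1) / 2 ** (bits + 1)

# At Usenet scale (~2^30 messages), a 32-bit hash collides heavily
# (on the order of 2^27 pairs), so redundant dig-ups are expected,
# while a 128-bit (md5-sized) hash is effectively collision-free.
print(expected_collision_pairs(2**30, 32))
print(expected_collision_pairs(2**30, 128))
```

So the 32 bits serve only to exclude partitions cheaply; uniqueness is settled at the dig-up.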

The 32 bits is quite a bit less than 64 * 8, though,
about whether it would also result, that, splitting
that into subdirectories, results different organizations
here about "tuned to Usenet-scale and organization",
vis-a-vis, "everybody's email" or something like that.
That said, it shouldn't just fall apart if the size or
count blows up, though it might be expected then that
various sorts of partitioning would keep the partition
tuple orders square, or on the same orders.


The md5 is widely available, "md5sum"; it's 128 bits,
its output 32 hexadecimal characters.

https://en.wikipedia.org/wiki/MD5
https://en.wikipedia.org/wiki/Partition_(database)
https://en.wikipedia.org/wiki/Hash_function#Uniformity
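For instance, a 128-bit md5 of a Message-ID splits naturally into four 32-bit integers, which is one plausible reading of the "four integers, W_i" above:

```python
import hashlib

def four_integers(message_id: str):
    """Split the 128-bit md5 of a Message-ID into four 32-bit
    integers W_1..W_4, one per split path of the hash."""
    digest = hashlib.md5(message_id.encode("utf-8")).digest()
    return [int.from_bytes(digest[i:i + 4], "big") for i in (0, 4, 8, 12)]

# Deterministic and uniform-ish: the same ID always yields the
# same four integers, each in [0, 2^32).
ws = four_integers("<example@sci.math>")
assert len(ws) == 4 and all(0 <= w < 2**32 for w in ws)
```

Each W_i then projects into its own range stamp, so a partition is excluded when any one of the four bits misses.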

Otherwise the only goal of the hash is to be uniform,
and also to have "avalanche criterion", so that near Message-Id's
will still be expected to have different hashes, as it's not
necessarily expected that they're the same group and
date, though that would be a thing, yet Message ID's
should be considered opaque and not seated together.

Then MD5 is about the most usual hash utility laying
around, if not SHA-1, or SHA-256. Hmm..., in the
interests of digital preservation is "the tools for
any algorithms should also be around forever",
one of those things.

So anyways, then each group x date has its Message ID's,
each of those has its hash, each of those fits in a range,
indicating one bit in the range where it is, then those are
OR'd together to result a bit-mask of the range, then
that a lookup can check its hash's bit against the range,
and dig-up the partition if it's in, or, skip the partition
if it's not, with the idea that the range is big enough
and the resulting group x date is small enough, that
the "pyramidal partition", is mostly sparse, at the lower
levels, that it's mostly "look-arounds" until finally the
"dig-ups", in the leaf nodes of the pyramidal partitions.

I.e., the dig-ups will eventually include spurious or
redundant false-positives, that the algorithm will
access the leaf partitions at uniform random.

The "pyramidal" then also gets into both the empties,
like rec.calm with zero posts ten years running,
and alt.spew, which any given day exceeds zip files
or results in a lot of "zip format, but the variously
packaged, not-recompressed binaries", the various
other use cases than the mostly at-rest and never-read
archival purposes.
range pyramidal partition" is that mostly the
leaf partitions are quite small and sparse, and
mostly the leveling of the pyramid into year/month/date
and big8/middle/group, as it were, winnows those
down in what's a constant-rate constant-space scan
on the immutable data structure of the partition pyramid.

Yeah, I know, "numbers", here though the idea is
that about 30K groups at around 18K days = 50 years
makes about 30 * 20 * million or less than a billion
files the zip files, which would all fit on a volume
that supports up to four billion-many files, or an
object-store, then with regards to that most of
those would be quite small or even empty,
then with regards to "building the pyramid",
the levels big8/middle/group X year/month/date,
the data structure of the hashes marking the ranges,
then those themselves resulting a file, which are
basically the entire contents of allocated RAM,
or for that matter a memory-mapped file, with
the idea that everything else is ephemeral RAM.
Ross Finlayson
2024-04-14 15:36:01 UTC
Permalink
Post by Ross Finlayson
arithmetic hash searches
take a hashcode, split it up
invert each arithmetically, find intersection in 64 bits
fill in those
detect misses when the bits don't intersect the search
when all hits, then "refine", next double range,
compose those naturally by union
when definite misses excluded then go find matching partition
arithmetic partition hash
So, the idea is, that, each message ID, has applied a uniform
hash, then that it fills a range, of so many bits.
Then, its hash is split into smaller chunks the same 1/2/3/4
of the paths, then those are considered a fixed-point fraction,
of the bits set of the word width, plus one.
Then, sort of pyramidally, is that in increasing words, or doubling,
is that a bunch of those together, mark those words,
uniformly in the range.
For example 0b00001111, would mark 0b00001000, then
0b0000000010000000, and so on, for detecting whether
the hash code's integer value, is in the range 15/16 - 16/16.
The idea is that the ranges this way compose with binary OR,
then that a given integer, then that the integer, can be
detected to be out of the range, if its bit is zero, and then
otherwise that it may or may not be in the range.
0b00001111 number N1
0b00001000 range R1
0b00000111 number N2
0b00000100 range R2
0b00001100 union range UR = R1 | R2 | ....
missing(N) {
return (UR & RN == 0);
}
This sort of helps where, in a usual hash map, determining
that an item doesn't exist, is worst case, while the usual
finding the item that exists is log 2, then that usually its value
is associated with that, besides.
Then, when there are lots of partitions, and they're about
uniform, it's expected the message ID to be found in only
one of the partitions, is that the partitions can be organized
according to their axes of partitions, composing the ranges
together, then that search walks down those, until it's either
a definite miss, or an ambiguous hit, then to search among
those.
It seems then for each partition (group x date), then those
can be composed together (group x month, group x year,
groups x year, all), so that looking to find the group x date
where a message ID is, results that it's a constant-time
operation to check each of those, and the data structure
is not very large, with regards to computing the integers'
offset in each larger range, either giving up when it's
an unambiguous miss or fully searching when it's an
ambiguous hit.
This is where, the binary-tree that searches in log 2 n,
worst-case, where it's balanced and uniform, though
it's not to be excluded that a usual hashmap implementation
is linear in hash collisions, is for excluding partitions,
in about constant time and space given that it's just a
function of the number of partitions and the eventual
size of the pyramidal range, that instead of having a
binary tree with space n^2, the front of it has size L r
for L the levels of the partition pyramid and r the size
of the range stamp.
Then, searching in the partitions, seems it essentially
results, that there's an ordering of the message IDs,
so there's the "message IDs" file, either fixed-length-records
or with an index file with fixed-length-records or otherwise
for reading out the groups' messages, then another one
with the message ID's sorted, figuring there's a natural
enough binary search of those with value identity, or bsearch
after qsort, as it were.
So, the idea is that there's a big grid of group X date archives,
each one of those a zip file, with being sort of contrived the
zip files, so that each entry is self-contained, and it sort of
results that concatenating them results another. So
anyways, the idea then is for each of those, for each of
their message IDs, to compute its four integers, W_i,
then allocate a range, and zero it, then saturate each
bit, in each range for each integer. So, that's like, say,
for fitting the range into 4K, for each partition, with
there being 2^8 of those in a megabyte, or that many
partitions (512), or about a megabyte in space for each
partition, but really where these are just variables,
because it's opportunistic, and the ranges can start
with just 32 or 64 bits figuring that most partitions
are sparse, also, in this case, though usually it would
be expected they are half-full.
There are as many of these ranges as the hash is split
into numbers, is the idea.
Then the idea is that these ranges are pyramidal in the
sense, that when doing lookup for the ID, is starting
from the top of the pyramid, projecting the hash number
into the range bit string, with one bit for each sub-range,
so it's branchless, and'ing the number bits and the partition
range together, and if any of the hash splits isn't in the
range, a branch, dropping the partition pyramid, else,
descending into the partition pyramid.
(Code without branches can go a lot faster than
code with lots of branches, if/then.)
At each level of the pyramid, it's figured that only one
of the partitions will not be excluded, except for hash
collisions, then if it's a base level to commence bsearch,
else to drop the other partition pyramids, and continue
with the reduced set of ranges in RAM, and the projected
bits of the ID's hash integer.
The ranges don't even really have to be constant if it's
so that there's a limit so they're under a constant, then
according to uniformity they only have so many, eg,
just projecting out their 1's, so the partition pyramid
digging sort of always finds one or more partitions
with possible matches, those being hash collisions or
messages duplicated across groups, and mostly finds
those with exclusions, so that it results reducing, for
example that empty groups are dropped right off
though not being skipped, while full groups then
get into needing more than constant space and
constant time to search.
Of course if all the partitions miss then it's
also a fast exit that none have the ID.
So, this, "partition pyramid hash filter", with basically,
"constant and configurable space and time", basically
has that because Message Id's will only exist in one or
a few partitions, and for a single group and not across
about all groups, exactly one, and the hash is uniform, so
that hash collisions are low, and the partitions aren't
overfilled, so that hash collisions are low, then it sort
of results all the un-used partitions at rest, don't fill
up in n^2 space the log 2 n hash-map search. Then,
they could, if there was spare space, and it made sense
that in the write-once-read-many world it was somehow
many instead of never, a usual case, or, just using a
list of sorted message Id's in the partition and bsearch,
this can map the file without loading its contents in
space, except as ephemerally, or the usual disk controller's
mmap space, or "ready-time" and "ephemeral-space".
In this sort of way there's no resident RAM for the partitions
except each one with a fixed-size arithmetic hash stamp,
while lookups have a fixed or constant cost, plus then
also a much smaller usual log 2 time / n^2 space trade-off,
while memory-mapping active files automatically caches.
So, the idea is to combine the BFF backing file format
and LFF library file format ideas, with that the group x date
partitions make the for archive and active partitions,
then to have constant-time/constant-space partition
pyramid arithmetic hash range for lookup, then
ready-time/ephemeral-space lookup in partitions,
then that the maintenance of the pyramid tree,
happens with dropping partitions, while just
accumulating with adding partitions.
Yeah, I know that a usual idea is just to make a hash map
after an associative array with log 2 n lookup in n^2 space,
that maintenance is in adding and removing items,
here the idea is to have partitions above items,
and sort of naturally to result "on startup, find
the current partitions, compose their partition pyramid,
then run usually constant-time/constant-space in that
then ready-time/ephemeral-space under that,
maintenance free", then that as active partitions
being written roll over to archive partitions being
finished, then they just get added to the pyramid
and their ranges or'ed up into the pyramid.
Hmm... 32K or 2^15 groups, 16K or 2^14 days, or
about 40 years of Usenet in partitions, 2^29,
about 2^8 per megabyte or about 2^20 or one
gigabyte RAM, or, just a file, then memory-mapping
the partition pyramid file, figuring again that
most partitions are not resident in RAM,
this seems a sort of good simple idea to
implement lookup by Message ID over 2^30 many.
I mean if "text Usenet for all time is about a billion messages",
it seems around that size.
So, trying to figure out if this "arithmetic hash range
pyramidal partition" data structure is actually sort of
reasonable, gets into that it involves finding a balance
in what's otherwise a very well-understood trade-off,
in terms of the cost of a lookup, over time, and then
especially as whether an algorithm is "scale-able",
that even a slightly lesser algorithm might be better
if it results "scale-able", especially if it breaks down
to a very, very minimal set of resources, in time,
and in various organizations of space, or distance,
which everybody knows as CPU, RAM, and DISK,
in terms of time, those of lookups per second,
and particularly where parallelizable as with
regards to both linear speed-up and also immutable
data structures, or, clustering. ("Scale.")
Then it's probably so that the ranges are pretty small,
because they double, and whether it's best just to
have an overall single range, or, refinements of it,
according to a "factor", a "factor" that represents
how likely it is that hashes don't collide in the range,
or that they do.
This is a different way of looking at hash collisions,
besides that two objects have the same hash:
just that they land in the same partition of the range
of their integer values, for fixed-length uniform hashes.
I.e., a hash collision proper would always be a
redundant or order-dependent dig-up, of a sort,
where the idea is that the lookup first results
searching the pyramid plan for possibles, then
digging up each of those and checking for match.
The idea that the two components of group x date are
about on the same order is a thing, then about
the idea that "category" and "year" are similarly
about so:
Big8 x year
group x date
It's very contrived to have those be on the same
order, in terms of otherwise partitioning, but it
results that "partitions are organized so that
their keys are tuples and the tuples' components are
about on the same order", so it goes, thus that uniformity
of hashes results in being equi-distributed in those,
so that it results the factor is good, the arithmetic
hash ranges filter out most of the partitions, and,
especially, there aren't many false-positive dig-up
partitions.
It's sort of contrived, but then it does sort of make
it so that also other search concerns like "only these
groups or only these years anyways", naturally get
dropped out at the partition layer, and, right in the
front of the lookup algorithm.
It's pretty much expected though that there would
be non-zero false-positive dig-ups, where here a dig-up
is that the arithmetic hash range matched, but it's
actually a different Message ID's hash in the range,
and not the lookup value(s).
Right, so just re-capping here a bit, the idea is that
there are groups, and dates, and for each is a zip file,
which is a collection of files in a file-system entry file,
with about random access to each entry of the zip file,
and compressed, and the entries include Messages,
by their Message IDs, then that the entries are
maybe in sub-directories, that reflect components
of the Message ID's hash, where a hash is a fixed-length
value, like 64 bytes or 128 bytes, or a power of two,
and usually an even power of two thus a multiple of four,
so that a 64-byte hash has (2^8)^64 = 2^512 many possible
values, then that a range of length R bits has R many
slots, in terms of the hash size and the range size,
whether the factor is low enough that most partitions
will naturally be missed by most lookups, because hashes
can only be computed from Message IDs, not from their
partitions or other information like the group or date.
So, if there are 2^30 or a billion messages, then a
32-bit hash would have a fair expectation that
used values would not be dense, then for
what gets into the "birthday problem", or otherwise
how the "Dirichlet principle" (pigeonhole) makes for
how often hash collisions and range collisions occur,
either making redundant dig-ups, in the way this
sort of algorithm services look-ups.
The 32 bits is quite a bit less than 64 * 8, though,
about whether it would also result, that, splitting
that into subdirectories, results different organizations
here about "tuned to Usenet-scale and organization",
vis-a-vis, "everybody's email" or something like that.
That said, it shouldn't just fall apart if the size or
count blows up, though it might then be expected
that various sorts of partitioning would be needed,
to keep the partition tuple orders square, or on the
same orders.
The md5 is widely available as "md5sum"; it's 128 bits,
and its output is 32 hexadecimal characters.
https://en.wikipedia.org/wiki/MD5
https://en.wikipedia.org/wiki/Partition_(database)
https://en.wikipedia.org/wiki/Hash_function#Uniformity
Otherwise the only goal of the hash is to be uniform,
and also to have the "avalanche criterion", so that near
Message IDs will still be expected to have different
hashes, as it's not necessarily expected that they're
the same group and date, though that would be a thing,
yet Message IDs should be considered opaque and not
assumed to sort near one another.
Then MD5 is about the most usual hash utility lying
around, if not SHA-1 or SHA-256. Hmm..., in the
interests of digital preservation, "the tools for
any algorithms should also be around forever" is
one of those things.
So anyways, then each group x date has its Message ID's,
each of those has its hash, each of those fits in a range,
indicating one bit in the range where it is, then those are
OR'd together to result a bit-mask of the range, then
that a lookup can check its hash's bit against the range,
and dig-up the partition if it's in, or, skip the partition
if it's not, with the idea that the range is big enough
and the resulting group x date is small enough, that
the "pyramidal partition", is mostly sparse, at the lower
levels, that it's mostly "look-arounds" until finally the
"dig-ups", in the leaf nodes of the pyramidal partitions.
I.e., the dig-ups will eventually include spurious or
redundant false-positives, that the algorithm will
access the leaf partitions at uniform random.
The "pyramidal" then also gets into both the empties,
like rec.calm with zero posts ten years running,
and alt.spew, which any given day exceeds zip files
or results in a lot of "zip format, but the variously
packaged, not-recompressed binaries": the various
use cases other than mostly at-rest and never-read
archival purposes. The idea of the "arithmetic hash
range pyramidal partition" is that mostly the
leaf partitions are quite small and sparse, and
mostly the leveling of the pyramid into year/month/date
and big8/middle/group, as it were, winnows those
down in what's a constant-rate constant-space scan
on the immutable data structure of the partition pyramid.
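To make the winnowing concrete, a minimal sketch of descending such a pyramid, pruning any subtree whose OR'd mask lacks the lookup hash's bit; the names (PyramidNode, lookup) and sizes are this sketch's, not fixed by the above:

```java
import java.util.ArrayList;
import java.util.BitSet;
import java.util.List;

// A pyramid node carries the OR of its children's ranges, so a lookup
// descends only into subtrees whose mask has the Message ID's bit set,
// and returns the leaf partitions that still match: the candidate
// "dig-ups".
public class PyramidNode {
    public final String name;                  // e.g. "a.b.c/2024/0314"
    public final BitSet mask;                  // OR of all descendant ranges
    public final List<PyramidNode> children = new ArrayList<>();

    public PyramidNode(String name, BitSet mask) {
        this.name = name;
        this.mask = mask;
    }

    // Collect leaf partitions whose range bit is set for this hash index.
    public static void lookup(PyramidNode node, int bitIndex,
                              List<String> candidates) {
        if (!node.mask.get(bitIndex)) return;   // whole subtree filtered out
        if (node.children.isEmpty()) {
            candidates.add(node.name);          // a leaf to dig up
            return;
        }
        for (PyramidNode child : node.children) {
            lookup(child, bitIndex, candidates);
        }
    }
}
```

Lookups touch only matching subtrees, so at-rest partitions cost nothing, which is the constant-ish scan claimed above.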
Yeah, I know, "numbers", here though the idea is
that about 30K groups at around 18K days = 50 years
makes about 30 x 18 x a million, or less than a billion,
zip files, which would all fit on a volume
that supports up to four-billion-many files, or an
object-store, then with regards to that most of
those would be quite small or even empty,
then with regards to "building the pyramid",
the levels big8/middle/group X year/month/date,
the data structure of the hashes marking the ranges,
then those themselves resulting a file, which are
basically the entire contents of allocated RAM,
or for that matter a memory-mapped file, with
the idea that everything else is ephemeral RAM.
Wonder about the pyramidal partition arithmetic range hash
some more, with figuring out how to make it so that
the group x date grid of buckets, has a reasonably
well-defined run-time, while using a minimal amount
of memory, or a tunable amount giving performance,
for a well-defined constant resource, that's constant
and fully re-entrant with regards to parallel lookups.

The idea is to implement the lookup by message-id,
where messages are in buckets or partitions basically
according to group x date,

a.b.c/yyyy/mmdd/0.zip
a.b.c/yyyy/mmdd/0.pyr

with the idea of working up so that the groups,
on the order of 30K or so, and days, on the order
of 15K or so, have that mostly also the posts are
pretty sparse over all the groups and dates,
with the idea that absence and presence in
the file-system or object-store result for the usual
sorts of lookups, that search hits would be associated
with a message-id, then to look it up in any group
it was posted to, then across those or concomitantly,
with the idea that cross-posts exist as duplicate data
across each partition.

a/b.c/yyyy/mmdd

yyyy/mmdd/a/b.c

The idea is that yyyy is on the order of 40 or 50,
while mmdd is 365, with the idea of having "0000"
for example as a placeholder for otherwise dateless
posts sort of found in the order, and that 'a' is about
on the order of 30 or 40, all beyond the Big 8. Then,
after finding matches in those, which would be
expected to be pretty dense, the message-id is hashed,
the hash split into four pieces, each of those a smaller
uniform hash, and each value's bit simply OR'd into
its range bits; then diving into the next level of the
pyramid, and those that match, and those that match,
and so on, serially yet parallelizably, until finding
the group's date files to dig, then actually looking
into the file of message-ids.

a/b.c/yyyy/mmdd/0.zip
a/b.c/yyyy/mmdd/0.pyr
a/b.c/yyyy/mmdd/0.ids

a/b.c/yyyy/mmdd.pyr
a/b.c/yyyy.pyr
a/b.c.pyr
a/pyr

yyyy/mmdd/a/b.c.pyr
yyyy/mmdd/a.pyr
yyyy/mmdd.pyr
yyyy.pyr

One can see here that "building the pyramid" is
pretty simple: it's a depth-first sort of traversal
that just ORs together the lower level's .pyr files,
then usually the active or recent besides the
archival or older, those just being checked
when lookups are usually for recent. The maintenance,
or re-building, of the pyramid has a basic invalidation
routine, where lastModifiedTime is reliable, or
for example a signature or even just a checksum,
and anyways rebuilding the data structure's
file backing is just a filesystem operation of a usual sort.
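A sketch of that OR-together step, with in-memory byte arrays standing in for the .pyr file contents (reading and writing the actual files being ordinary filesystem work; the names here are illustrative):

```java
import java.util.List;

// "Building the pyramid": a parent's .pyr mask is just the byte-wise
// OR of its children's .pyr masks, computed depth-first up the levels.
public class PyramidBuilder {
    // OR equal-length child masks into a fresh parent mask.
    public static byte[] orTogether(List<byte[]> childMasks) {
        byte[] parent = new byte[childMasks.get(0).length];
        for (byte[] child : childMasks) {
            for (int i = 0; i < parent.length; i++) {
                parent[i] |= child[i];   // accumulate every child's set bits
            }
        }
        return parent;
    }
}
```

Since the masks are immutable once a partition is archived, rebuilding a level is just re-running this over its children.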

Then, with like a 16KiB or so range, that's basically
about 4KiB for each of the 4 hashes, so any hash miss
results in a drop, then that's about 16 kibibytes,
about as above a usual or default hash for
the message-ids, where it's also designed that
/h1/h2/h3/h4/message-id results in a file-system
depth that keeps the directory size within the usual
limits of filesystems and archival package files,
of all the files, apiece.
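A sketch of the four-piece split, again assuming MD5 as the hash; the sub-range size here (4096 bits per piece) is just a placeholder, and the method names are illustrative:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Split one 128-bit MD5 into four 32-bit pieces, each reduced to a bit
// index within its own sub-range; the same four values can serve as
// the /h1/h2/h3/h4 path components.
public class QuadHash {
    public static int[] pieces(String messageId) {
        try {
            byte[] h = MessageDigest.getInstance("MD5")
                    .digest(messageId.getBytes(StandardCharsets.UTF_8));
            int[] out = new int[4];
            for (int p = 0; p < 4; p++) {
                // 4 bytes per piece, big-endian, reduced into the sub-range
                int v = ((h[4 * p] & 0xff) << 24) | ((h[4 * p + 1] & 0xff) << 16)
                      | ((h[4 * p + 2] & 0xff) << 8) | (h[4 * p + 3] & 0xff);
                out[p] = Integer.remainderUnsigned(v, 4096);
            }
            return out;
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Any one piece missing its bit drops the partition, which is the "any hash-miss results a drop" behavior.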

Then, with a megabyte of RAM or so, 2^20 bytes, versus
2^10 x 2^4 = 2^14 bytes apiece, that's about 2^6 = 64
of those per megabyte.

30K groups x 15K days ~ 450M group days, hmm, ...,
not planning on fitting that into RAM.

2 groups x 18262 days, 36K, that should fit,
or, 32768 = 2^15, say, at 2^6 per megabyte is about
2^9 or 512 megabytes RAM, hmm..., figuring a linear
scan of that at about 1 GHz over 512 MiB would take
about half a second, ....

The idea is that messages by group-number are just
according to the partitions adding up counts,
then lookup by message-id is that whatever
search returns a message-id for hits then
has some reasonable lookup for that message-id.
Ross Finlayson
2024-04-20 18:24:49 UTC
Permalink
Well I've been thinking about the re-routine as a model of cooperative
multithreading,
then thinking about the flow-machine of protocols

NNTP
IMAP <-> NNTP
HTTP <-> IMAP <-> NNTP

Both IMAP and NNTP are session-oriented on the connection, while,
HTTP, in terms of session, has various approaches in terms of HTTP 1.1
and connections, and the session ID shared client/server.


The re-routine idea is this: each kind of method is memoizable,
and it memoizes, by object identity as the key, for the method and
all its callers; how this is, is like so.

interface Reroutine1 {

    Result1 rr1(String a1) {

        Result2 r2 = reroutine2.rr2(a1);

        Result3 r3 = reroutine3.rr3(r2);

        return result(r2, r3);
    }

}


The idea is that the executor, when it's submitted a reroutine,
when it runs the re-routine, in a thread, then it puts in a ThreadLocal,
the re-routine, so that when a re-routine it calls, returns null as it
starts an asynchronous computation for the input, then when
it completes, it submits to the executor the re-routine again.

Then rr1 runs through again, retrieving r2, which is memoized,
and invokes rr3, which throws after queuing to memoize and
resubmit rr1; when that calls back to resubmit rr1, then rr1
runs to completion, signaling the original invoker.

Then it seems each re-routine basically has an instance part
and a memoized part, and that it's to flush the memo
after it finishes, in terms of memoizing the inputs.


Result1 rr1(String a1) {
    // if a1 is in the memo, return for it
    // else queue for it and carry on
}


What is a re-routine?

It's a pattern for cooperative multithreading.

It's sort of a functional approach to functions and flow.

It has a declarative syntax in the language with usual flow-of-control.

So, it's cooperative multithreading so it yields?

No, it just quits, and expects to be called back.

So, if it quits, how does it complete?

The entry point to re-routine provides a callback.

Re-routines only return results to other re-routines;
that's the default callback. Otherwise they just call back.

So, it just quits?

If a re-routine gets called with a null argument, it throws.

If a re-routine gets a null return value back, it just continues.

If a re-routine completes, it callbacks.

So, can a re-routine call any regular code?

Yeah, there are some issues, though.

So, it's got callbacks everywhere?

Well, it's just got callbacks implicitly everywhere.

So, how does it work?

Well, you build a re-routine with an input and a callback,
you call it, then when it completes, it calls the callback.

Then, re-routines call other re-routines with the argument,
and the callback's in a ThreadLocal, and the re-routine memoizes
all of its return values according to the object identity of the inputs,
then when a re-routine completes, it calls again with another ThreadLocal
indicating to delete the memos, following the exact same flow-of-control
only deleting the memos going along, until it results all the memos in
the re-routines for the interned or ref-counted input are deleted,
then the state of the re-routine is de-allocated.

So, it's sort of like a monad and all in pure and idempotent functions?

Yeah, it's sort of like a monad and all in pure and idempotent functions.

So, it's a model of cooperative multithreading, though with no yield,
and callbacks implicitly everywhere?

Yeah, it's sort of figured that a called re-routine always has a
callback in the ThreadLocal, because the runtime has pre-emptive
multithreading anyways, that the thread runs through its re-routines in
their normal declarative flow-of-control with exception handling, and
whatever re-routines or other pure monadic idempotent functions it
calls, throw when they get null inputs.

Also it sort of doesn't have primitive types, Strings must always be
interned, all objects must have a distinct identity w.r.t. ==, and null
is never an argument or return value.

So, what does it look like?

interface Reroutine1 {

    Result1 rr1(String a1) {

        Result2 r2 = reroutine2.rr2(a1);

        Result3 r3 = reroutine3.rr3(r2);

        return result(r2, r3);
    }

}

So, I expect that to return "result(r2, r3)".

Well, that's synchronous, and maybe blocking. The idea is that it calls
rr2 with a1, and rr2 constructs with the callback of rr1 and its own
callback, and a1, and makes a memo for a1, and invokes whatever is its
implementation, and returns null; then rr1 continues and invokes rr3
with r2, which is null, so that throws a NullPointerException, and rr1
quits.

So, ..., that's cooperative multithreading?

Well you see what happens is that rr2 invoked another re-routine or end
routine, and at some point it will get called back, and that will happen
over and over again until rr2 has an r2, then rr2 will memoize (a1, r2),
and then it will callback rr1.

Then rr1, having quit, runs again; this time it gets r2 from the (a1,
r2) memo in the monad it's building, then it passes a non-null r2 to
rr3, which proceeds in much the same way, while rr1 quits again until
rr3 calls it back.

So, ..., it's non-blocking, because it just quits all the time, then
happens to run through the same paces filling in?

That's the idea, that re-routines are responsible to build the monad
and call-back.

So, can I just implement rr2 and rr3 as synchronous and blocking?

Sure, they're interfaces, their implementation is separate. If they
don't know re-routine semantics then they're just synchronous and
blocking. They'll get called every time though when the re-routine gets
called back, and actually they need to know the semantics of returning
an Object or value by identity, because, calling equals() to implement
Memo usually would be too much, where the idea is to actually function
only monadically, and that given same Object or value input, must return
same Object or value output.

So, it's sort of an approach as a monadic pure idempotency?

Well, yeah, you can call it that.

So, what's the point of all this?

Well, the idea is that there are 10,000 connections, and any time one
of them demultiplexes off the connection an input command message, then
it builds one of these with the response input to the demultiplexer on
its protocol on its connection, on the multiplexer to all the
connections, with a callback to itself. Then the re-routine is launched
and when it returns, it calls-back to the originator by its
callback-number, then the output command response writes those back out.

The point is that there are only as many Threads as cores so the goal is
that they never block,
and that the memos make for interning Objects by value, then the goal is
mostly to receive command objects and handles to request bodies and
result objects and handles to response bodies, then to call-back with
those in whatever serial order is necessary, or not.

So, won't this run through each of these re-routines umpteen times?

Yeah, you figure that the runtime of the re-routine is on the order of
n^2 in the number of statements in the re-routine.

So, isn't that terrible?

Well, it doesn't block.

So, it sounds like a big mess.

Yeah, it could be. That's why, to avoid blocking and callback
semantics, the idea is to make monadic idempotency semantics, so then the
re-routines are just written in normal synchronous flow-of-control, and
their well-defined behavior is exactly according to flow-of-control,
including exception-handling.

There's that, and there's that it basically only needs one Thread, so,
less Thread x stack size, for a deep enough thread call-stack. Then the idea
is about one Thread per core, figuring for the thread to always be
running and never be blocking.

So, it's just normal flow-of-control.

Well yeah, you expect to write the routine in normal flow-of-control,
and to test it with synchronous and in-memory editions that just run
through synchronously, and that if you don't much care if it blocks,
then it's the same code and has no semantics about the asynchronous or
callbacks actually in it. It just returns when it's done.


So what's the requirements of one of these again?

Well, the idea is, that, for a given instance of a re-routine, it's an
Object, that implements an interface, and it has arguments, and it has a
return value. The expectation is that the re-routine gets called with
the same arguments, and must return the same return value. This way
later calls to re-routines can match the same expectation, same/same.

Also, if it gets different arguments, by Object identity or primitive
value, the re-routine must return a different return value, those being
same/same.

The re-routine memoizes its arguments by its argument list, Object or
primitive value, and a given argument list is same if the order and
types and values of those are same, and it must return the same return
value by type and value.

So, how is this cooperative multithreading unobtrusively in
flow-of-control again?

Here for example the idea would be, rr2 quits and rr1 continues, rr3
quits and rr1 continues, then reaching rr4, rr4 throws and rr1 quits.
When rr2's or rr3's memo-callback completes, then it calls-back rr1. As
those come in, at some point rr4 will be fulfilled, and thus rr4 will
quit and rr1 will quit. When rr4's callback completes, then it will
call-back rr1, which will finally complete, and then call-back whatever
called rr1. Then rr1 runs itself through one more time to
delete or decrement all its memos.

interface Reroutine1 {

    Result1 rr1(String a1) {

        Result2 r2 = reroutine2.rr2(a1);

        Result3 r3 = reroutine3.rr3(a1);

        Result4 r4 = reroutine4.rr4(a1, r2, r3);

        return Result1.r4(a1, r4);
    }

}

The idea is that it doesn't block when it launches rr2 and rr3, until
such time as it just quits when it tries to invoke rr4 and gets a
resulting NullPointerException, then eventually rr4 will complete and be
memoized and call-back rr1, then rr1 will be called-back and then
complete, then run itself through to delete or decrement the ref-count
of all its memo-ized fragmented monad respectively.

Thusly it's cooperative multithreading by never blocking and always just
launching callbacks.

There's this System.identityHashCode() method, and then there's a notion
of Object pools and interning Objects, and this way it's about numeric
identity instead of value identity, so that when making memos it's
always "==", and a HashMap keyed with System.identityHashCode() instead
of ever calling equals(), since calling equals() is more expensive than
calling ==, and the same/same memoization is about the Object's numeric
identity or the primitive scalar value, those being same/same.

https://docs.oracle.com/javase/8/docs/api/java/lang/System.html#identityHashCode-java.lang.Object-
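A small illustration of why identity rather than equals() matters for the memo keys, using the JDK's IdentityHashMap, which hashes by System.identityHashCode and compares keys with "==":

```java
import java.util.IdentityHashMap;
import java.util.Map;

// Two equal-but-distinct Strings are different keys under identity
// semantics, which is exactly why the scheme requires interned Strings
// (and distinct Object identities) as re-routine inputs.
public class IdentityDemo {
    public static int countKeys(String a, String b) {
        Map<String, Integer> memo = new IdentityHashMap<>();
        memo.put(a, 1);
        memo.put(b, 2);
        return memo.size();   // 2 if a != b by reference, else 1
    }
}
```

With interned inputs the memo collapses to one entry and never pays an equals() call; with un-interned duplicates it silently doubles, which is the failure mode the "Strings must always be interned" rule avoids.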

So, you figure to return Objects to these connections by their session
and connection and mux/demux in these callbacks and then write those out?

Well, the idea is to make it so that according to the protocol, the
back-end sort of knows what makes a handle to a datum of the sort, given
the protocol and the protocol and the protocol, and the callback is just
these handles, about what goes in the outer callbacks or outside the
re-routine, those can be different/same. Then the single writer thread
servicing the network I/O just wants to transfer those handles, or, as
necessary, run them through the compression and encryption codecs, then
write those out, making use of java.nio for scatter/gather and vector
I/O in the non-blocking and asynchronous modes as much as possible.


So, that seems a lot of effort to just passing the handles, ....

Well, I don't want to write any code except normal flow-of-control.

So, this same/same bit seems onerous, as long as different/same has a
ref-count and thus the memo-ized monad-fragment is maintained when all
sorts of requests fetch the same thing.

Yeah, maybe you're right. There's much to be gained by re-using monadic
pure idempotent functions yet only invoking them once. That gets into
value equality besides numeric equality, though, with regards to going
into re-routines and interning all Objects by value, so that inside and
through it's all "==" and System.identityHashCode, the memos, then about
the ref-counting in the memos.


So, I suppose you know HTTP, and about HTTP/2 and IMAP and NNTP here?

Yeah, it's a thing.

So, I think this needs a much cleaner and well-defined definition, to
fully explore its meaning.

Yeah, I suppose. There's something to be said for reading it again.
Ross Finlayson
2024-04-22 17:06:02 UTC
Permalink
Post by Ross Finlayson
Well I've been thinking about the re-routine as a model of cooperative
multithreading,
then thinking about the flow-machine of protocols
NNTP
IMAP <-> NNTP
HTTP <-> IMAP <-> NNTP
Both IMAP and NNTP are session-oriented on the connection, while,
HTTP, in terms of session, has various approaches in terms of HTTP 1.1
and connections, and the session ID shared client/server.
The re-routine idea is this, that each kind of method, is memoizable,
and, it memoizes, by object identity as the key, for the method, all
its callers, how this is like so.
interface Reroutine1 {
Result1 rr1(String a1) {
Result2 r2 = reroutine2.rr2(a1);
Result3 r3 = reroutine3.rr3(r2);
return result(r2, r3);
}
}
The idea is that the executor, when it's submitted a reroutine,
when it runs the re-routine, in a thread, then it puts in a ThreadLocal,
the re-routine, so that when a re-routine it calls, returns null as it
starts an asynchronous computation for the input, then when
it completes, it submits to the executor the re-routine again.
Then rr1 runs through again, retrieving r2 which is memoized,
invokes rr3, which throws, after queuing to memoize and
resubmit rr1, when that calls back to resubmit r1, then rr1
routines, signaling the original invoker.
Then it seems each re-routine basically has an instance part
and a memoized part, and that it's to flush the memo
after it finishes, in terms of memoizing the inputs.
Result 1 rr(String a1) {
// if a1 is in the memo, return for it
// else queue for it and carry on
}
What is a re-routine?
It's a pattern for cooperative multithreading.
It's sort of a functional approach to functions and flow.
It has a declarative syntax in the language with usual
flow-of-control.
So, it's cooperative multithreading so it yields?
No, it just quits, and expects to be called back.
So, if it quits, how does it complete?
The entry point to re-routine provides a callback.
Re-routines only return results to other re-routines,
It's the default callback. Otherwise they just callback.
So, it just quits?
If a re-routine gets called with a null, it throws.
If a re-routine gets a null, it just continues.
If a re-routine completes, it callbacks.
So, can a re-routine call any regular code?
Yeah, there are some issues, though.
So, it's got callbacks everywhere?
Well, it's just got callbacks implicitly everywhere.
So, how does it work?
Well, you build a re-routine with an input and a callback,
you call it, then when it completes, it calls the callback.
Then, re-routines call other re-routines with the argument,
and the callback's in a ThreadLocal, and the re-routine memoizes
all of its return values according to the object identity of the inputs,
then when a re-routine completes, it calls again with another ThreadLocal
indicating to delete the memos, following the exact same
flow-of-control
only deleting the memos going along, until it results all the memos in
the re-routines for the interned or ref-counted input are deleted,
then the state of the re-routine is de-allocated.
So, it's sort of like a monad and all in pure and idempotent functions?
Yeah, it's sort of like a monad and all in pure and idempotent functions.
So, it's a model of cooperative multithreading, though with no yield,
and callbacks implicitly everywhere?
Yeah, it's sort of figured that a called re-routine always has a
callback in the ThreadLocal, because the runtime has pre-emptive
multithreading anyways, that the thread runs through its re-routines in
their normal declarative flow-of-control with exception handling, and
whatever re-routines or other pure monadic idempotent functions it
calls, throw when they get null inputs.
Also it sort of doesn't have primitive types, Strings must always
be interned, all objects must have a distinct identity w.r.t. ==, and
null is never an argument or return value.
So, what does it look like?
interface Reroutine1 {
Result1 rr1(String a1) {
Result2 r2 = reroutine2.rr2(a1);
Result3 r3 = reroutine3.rr3(r2);
return result(r2, r3);
}
}
So, I expect that to return "result(r2, r3)".
Well, that's synchronous, and maybe blocking, the idea is that it
calls rr2, gets a1, and rr2 constructs with the callback of rr1 and it's
own callback, and a1, and makes a memo for a1, and invokes whatever is
its implementation, and returns null, then rr1 continues and invokes rr3
with r2, which is null, so that throws a NullPointerException, and rr1
quits.
So, ..., that's cooperative multithreading?
Well you see what happens is that rr2 invoked another re-routine or
end routine, and at some point it will get called back, and that will
happen over and over again until rr2 has an r2, then rr2 will memoize
(a1, r2), and then it will callback rr1.
Then rr1 had quit, it runs again, this time it gets r2 from the
(a1, r2) memo in the monad it's building, then it passes a non-null r2
to rr3, which proceeds in much the same way, while rr1 quits again until
rr3 calls it back.
So, ..., it's non-blocking, because it just quits all the time, then
happens to run through the same paces filling in?
That's the idea, that re-routines are responsible to build the
monad and call-back.
So, can I just implement rr2 and rr3 as synchronous and blocking?
Sure, they're interfaces, their implementation is separate. If
they don't know re-routine semantics then they're just synchronous and
blocking. They'll get called every time though when the re-routine gets
called back, and actually they need to know the semantics of returning
an Object or value by identity, because, calling equals() to implement
Memo usually would be too much, where the idea is to actually function
only monadically, and that given same Object or value input, must return
same Object or value output.
So, it's sort of an approach as a monadic pure idempotency?
Well, yeah, you can call it that.
So, what's the point of all this?
Well, the idea is that there are 10,000 connections, and any time
one of them demultiplexes off the connection an input command message,
then it builds one of these with the response input to the demultiplexer
on its protocol on its connection, on the multiplexer to all the
connections, with a callback to itself. Then the re-routine is launched
and when it returns, it calls-back to the originator by its
callback-number, then the output command response writes those back out.
The point is that there are only as many Theads as cores so the
goal is that they never block,
and that the memos make for interning Objects by value, then the goal is
mostly to receive command objects and handles to request bodies and
result objects and handles to response bodies, then to call-back with
those in whatever serial order is necessary, or not.
So, won't this run through each of these re-routines umpteen times?
Yeah, you figure that the runtime of the re-routine is on the order
of n^2 the order of statements in the re-routine.
So, isn't that terrible?
Well, it doesn't block.
So, it sounds like a big mess.
Yeah, it could be. That's why to avoid blocking and callback
semantics, is to make monadic idempotency semantics, so then the
re-routines are just written in normal synchronous flow-of-control, and
they're well-defined behavior is exactly according to flow-of-control
including exception-handling.
There's that and there's basically it only needs one Thread, so,
less Thread x stack size, for a deep enough thread call-stack. Then the
idea is about one Thread per core, figuring for the thread to always be
running and never be blocking.
So, it's just normal flow-of-control.
Well yeah, you expect to write the routine in normal
flow-of-control, and to test it with synchronous and in-memory editions
that just run through synchronously, and that if you don't much care if
it blocks, then it's the same code and has no semantics about the
asynchronous or callbacks actually in it. It just returns when it's done.
So what's the requirements of one of these again?
Well, the idea is, that, for a given instance of a re-routine, it's
an Object, that implements an interface, and it has arguments, and it
has a return value. The expectation is that the re-routine gets called
with the same arguments, and must return the same return value. This
way later calls to re-routines can match the same expectation, same/same.
Also, if it gets different arguments, by Object identity or
primitive value, the re-routine must return a different return value,
those being different/different.
The re-routine memoizes its arguments by its argument list, Object
or primitive value, and a given argument list is same if the order and
types and values of those are same, and it must return the same return
value by type and value.
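As a sketch of that contract (the names here are illustrative, not from any library), the memo is keyed by the argument list, where two lists are the same if the order, types, and values match, and a same key always yields the same, identical, return value:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Illustrative sketch: memoize a re-routine by its argument list, so the
// same arguments (by order, type, and value) always return the same result.
public class ArgListMemo {
    private final Map<List<Object>, Object> memo = new HashMap<>();

    // Compute once per distinct argument list; later same/same calls return
    // the identical memoized Object.
    public Object call(Function<Object[], Object> routine, Object... args) {
        List<Object> key = Arrays.asList(args); // same iff order and values match
        return memo.computeIfAbsent(key, k -> routine.apply(args));
    }

    public static void main(String[] args) {
        ArgListMemo m = new ArgListMemo();
        Function<Object[], Object> reverse =
            a -> new StringBuilder((String) a[0]).reverse().toString();
        Object r1 = m.call(reverse, "abc");
        Object r2 = m.call(reverse, "abc"); // same arguments: same return value
        System.out.println(r1 + " " + (r1 == r2)); // cba true
    }
}
```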
So, how is this cooperative multithreading unobtrusively in
flow-of-control again?
Here for example the idea would be, rr2 quits and rr1 continues, rr3
quits and rr1 continues, then reaching rr4, rr4 throws and rr1 quits.
When rr2's or rr3's memo-callback completes, then it calls-back rr1. As
those come in, at some point rr4 will be fulfilled, and thus rr4 will
quit and rr1 will quit. When rr4's callback completes, then it will
call-back rr1, which will finally complete, and then call-back whatever
called rr1. Then rr1 runs itself through one more time to
delete or decrement all its memos.
interface Reroutine1 {
    Result1 rr1(String a1) {
        Result2 r2 = reroutine2.rr2(a1);
        Result3 r3 = reroutine3.rr3(a1);
        Result4 r4 = reroutine4.rr4(a1, r2, r3);
        return Result1.r4(a1, r4);
    }
}
The idea is that it doesn't block when it launches rr2 and rr3, until
such time as it just quits when it tries to invoke rr4 and gets a
resulting NullPointerException, then eventually rr4 will complete and be
memoized and call-back rr1, then rr1 will be called-back and then
complete, then run itself through to delete or decrement the ref-count
of all its memoized monad fragments, respectively.
Thusly it's cooperative multithreading by never blocking and always just
launching callbacks.
There's this System.identityHashCode() method, and then there's a notion
of Object pools and interning Objects, then as for how this is about
numeric identity instead of value identity: when making memos it's
always "==", and a HashMap with System.identityHashCode() instead of
ever calling equals(), since calling equals() is more expensive than
calling ==, and the same/same memoization is about the Object's numeric
identity or the primitive scalar value, those being same/same.
https://docs.oracle.com/javase/8/docs/api/java/lang/System.html#identityHashCode-java.lang.Object-
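In fact the standard library already has a map with exactly these semantics, java.util.IdentityHashMap, which matches keys by reference (==) and hashes with System.identityHashCode, never calling equals():

```java
import java.util.IdentityHashMap;
import java.util.Map;

// IdentityHashMap matches keys by reference (==), not equals(): the cheap
// numeric-identity lookup described above for interned Objects.
public class IdentityMemoDemo {
    static boolean[] lookups() {
        Map<String, Integer> memo = new IdentityHashMap<>();
        String a = new String("key");
        String b = new String("key"); // equals(a), but a != b
        memo.put(a, 1);
        return new boolean[] {
            memo.containsKey(a),     // true: same reference
            memo.containsKey(b),     // false: equal value, different identity
            a.intern() == b.intern() // true: interning collapses to one identity
        };
    }

    public static void main(String[] args) {
        for (boolean r : lookups()) System.out.println(r);
    }
}
```

So interning Objects up front is what makes the plain "==" memo lookup sound.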
So, you figure to return Objects to these connections by their session
and connection and mux/demux in these callbacks and then write those out?
Well, the idea is to make it so that according to the protocol, the
back-end sort of knows what makes a handle to a datum of the sort, given
the protocol, and the callback is just these handles; about what goes in
the outer callbacks or outside the re-routine, those can be
different/same. Then the single writer thread
servicing the network I/O just wants to transfer those handles, or, as
necessary through the compression and encryption codecs, then write
those out, well making use of the java.nio for scatter/gather and vector
I/O in the non-blocking and asynchronous I/O as much as possible.
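A minimal sketch of that gather ("vector") I/O with java.nio: the header and body stay in separate buffers, the "handles", and a single gathering write sends both without first copying them into one combined buffer. A temp file stands in for the connection so the sketch is self-contained:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Gathering write: transfer the handles (header buffer, body buffer) in one
// vectored call, rather than concatenating them first.
public class GatherDemo {
    static String gatherWrite(String header, String body) throws IOException {
        ByteBuffer[] bufs = {
            ByteBuffer.wrap(header.getBytes()),
            ByteBuffer.wrap(body.getBytes())
        };
        Path tmp = Files.createTempFile("gather", ".bin");
        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.WRITE)) {
            while (bufs[0].hasRemaining() || bufs[1].hasRemaining()) {
                ch.write(bufs); // gathering write: both buffers, one call
            }
        }
        String out = new String(Files.readAllBytes(tmp));
        Files.delete(tmp);
        return out;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(gatherWrite("HDR ", "payload")); // HDR payload
    }
}
```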
So, that seems a lot of effort just to pass the handles, ....
Well, I don't want to write any code except normal flow-of-control.
So, this same/same bit seems onerous, as long as different/same has a
ref-count and thus the memo-ized monad-fragment is maintained when all
sorts of requests fetch the same thing.
Yeah, maybe you're right. There's much to be gained by re-using monadic
pure idempotent functions yet only invoking them once. That gets into
value equality besides numeric equality, though, with regards to going
into re-routines and interning all Objects by value, so that inside and
through it's all "==" and System.identityHashCode, the memos, then about
the ref-counting in the memos.
So, I suppose you know HTTP, and about HTTP/2 and IMAP and NNTP here?
Yeah, it's a thing.
So, I think this needs a much cleaner and well-defined definition, to
fully explore its meaning.
Yeah, I suppose. There's something to be said for reading it again.
ReRoutines: monadic functional non-blocking asynchrony in the language


Implementing a sort of Internet protocol server, it sort of has three or
four kinds of machines.

flow-machine: select/epoll hardware driven I/O events

protocol-establishment: setting up and changing protocol (commands,
encryption/compression)

protocol-coding: block coding in encryption/compression and wire/object
commands/results

routine: inside the objects of the commands of the protocol,
commands/results

Then, it often looks sort of like

flow <-> protocol <-> routine <-> protocol <-> flow


On either outer side of the flow is a connection, it's a socket or the
receipt or sending of a datagram, according to the network interface and
select/epoll.

The establishment of a protocol looks like
connection/configuration/commencement/conclusion, or setup/teardown.
Protocols get involved in renegotiation within a protocol, and for example
upgrade among protocols. Then the protocol is setup and established.

The idea is that a protocol's coding is in three parts for
coding/decoding, compression/decompression, and (en)cryption/decryption,
or as it gets set up.

flow->decrypt->decomp->decod->routine->cod->comp->crypt->flow-v
flow<-crypt<-comp<-cod<-routine<-decod<-decomp<-decrypt<-flow<-
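As a sketch of that layering, here's a round trip where deflate stands in for the compression layer and a toy XOR stands in for the cipher (both stand-ins for illustration, not a real protocol's codecs), composing in exactly the order of the diagram:

```java
import java.io.*;
import java.util.zip.*;

public class CodecPipeline {
    // Toy "cipher": XOR with a key byte; a stand-in for a real crypt layer.
    static byte[] xor(byte[] in, byte key) {
        byte[] out = new byte[in.length];
        for (int i = 0; i < in.length; i++) out[i] = (byte) (in[i] ^ key);
        return out;
    }

    // Outbound: cod (UTF-8) -> comp (deflate) -> crypt (xor).
    static byte[] encode(String msg, byte key) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (DeflaterOutputStream def = new DeflaterOutputStream(bos)) {
            def.write(msg.getBytes("UTF-8"));
        }
        return xor(bos.toByteArray(), key);
    }

    // Inbound: decrypt (xor) -> decomp (inflate) -> decod (UTF-8).
    static String decode(byte[] wire, byte key) throws IOException {
        byte[] plain = xor(wire, key);
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (InflaterInputStream inf =
                 new InflaterInputStream(new ByteArrayInputStream(plain))) {
            byte[] buf = new byte[256];
            for (int n; (n = inf.read(buf)) != -1; ) bos.write(buf, 0, n);
        }
        return bos.toString("UTF-8");
    }

    public static void main(String[] args) throws IOException {
        byte[] wire = encode("LIST ACTIVE", (byte) 0x5A);
        System.out.println(decode(wire, (byte) 0x5A)); // LIST ACTIVE
    }
}
```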



Whenever data arrives, the idea goes, the flow is interpreted
according to the protocol, resulting in commands, then the routine derives
results from the commands, as by issuing others, in their protocols, to
the backend flow. Then, the results get sent back out through the
protocol, to the frontend, to the clients the server serves over the
protocol.

The idea is that there are about 10,000 connections at a time, or more
or less.

flow <-> protocol <-> routine <-> protocol <-> flow
flow <-> protocol <-> routine <-> protocol <-> flow
flow <-> protocol <-> routine <-> protocol <-> flow
...




Then, the routine in the middle, has that there's one processor, and on
the processor are a number of cores, each one independent. Then, the
operating system establishes that each of the cores, has any number of
threads-of-control or threads, and each thread has the state of where it
is in the callstack of routines, and the threads are preempted so that
multithreading, a core running multiple threads, gives each thread
some running from the entry to the exit of the thread, in any given
interval of time. Each thread-of-control is thusly independent, while it
must synchronize with any other thread-of-control, to establish common
or mutual state, and threads establish taking turns by mutual exclusion,
called "mutex".

Into and out of the protocol, coding, is either a byte-sequence or
block, or otherwise the flow is a byte-sequence, that being serial,
however the protocol multiplexes and demultiplexes messages, the
commands and their results, to and from the flow.

Then the idea is that what arrives to/from the routine, is objects in
the protocol, or handles to the transport of byte sequences, in the
protocol, to the flow.

A usual idea is that there's a thread that services the flow, where, how
it works is that a thread blocks waiting for there to be any I/O,
input/output, reading input from the flow, and writing output to the
flow. So, mostly the thread that blocks has that there's one thread that
blocks on input, and when there's any input, then it reads or transfers
the bytes from the input, into buffers. That's its only job, and only
one thread can block on a given select/epoll selector, which is any
given number of ports, the connections, the idea being that it just
blocks until select returns for its keys of interest, it services each
of the I/O's by copying from the network interface's buffers into the
program's buffers, then other threads do the rest.
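A minimal sketch of that one blocking I/O thread with java.nio's Selector (a Pipe stands in for a network connection so the sketch is self-contained; with sockets it's the same select-and-copy loop):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

// The single I/O thread: it blocks only in select(), wakes when the OS
// signals readiness, copies bytes into the program's buffers, and leaves
// the rest to other threads.
public class SelectLoopDemo {
    static String readOne() throws IOException {
        try (Selector selector = Selector.open()) {
            Pipe pipe = Pipe.open();
            pipe.source().configureBlocking(false);
            pipe.source().register(selector, SelectionKey.OP_READ);

            // Traffic arrives (normally from the network, here written inline).
            pipe.sink().write(ByteBuffer.wrap("hello".getBytes()));

            ByteBuffer buf = ByteBuffer.allocate(64);
            selector.select();                  // the only blocking call
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isReadable()) {         // copy from the interface's
                    ((Pipe.SourceChannel) key.channel()).read(buf); // buffers in
                }
            }
            selector.selectedKeys().clear();
            buf.flip();
            return new String(buf.array(), 0, buf.limit());
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(readOne()); // hello
    }
}
```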

So, if a thread ends up waiting at all for any other action to complete
or be ready, it's said to "block". While a thread is blocked, the CPU or
core just skips it in scheduling the preemptive multithreading, yet it
still takes some memory and other resources and is in the scheduler of
the threads.

The idea that the I/O thread ever blocks, is that it's a feature of
select/epoll that hardware events result in waking it up, with the idea
that that's the only thread that ever blocks.

So, for the other threads, in the decryption/decompression/decoding and
coding/compression/cryption, the idea is that a thread, runs through
those, then returns what it's doing, and joins back to a limited pool of
threads, with a usual idea of there being 1 core : 1 thread, so that
multithreading is sort of simplified, because as far as the system
process is concerned, it has a given number of cores and the system
preemptively multithreads it, and as far as the virtual machine is
concerned, it has a given number of cores and the virtual machine
preemptively multithreads its threads, about the thread-of-control, in
the flow-of-control, of the thing.

A usual way that the routine multiplexes and demultiplexes objects in the
protocol from a flow's input back to a flow's output, has that the
thread-per-connection model has that a single thread carries out the
entire task through the backend flow, blocking along the way, until it
results joining after writing back out to its connection. Yet, that has
a thread per each connection, and threads use scheduling and heap
resources. So, here thread-per-connection is being avoided.

Then, a usual idea of the tasks, is that as I/O is received and flows
into the decryption/decompression/decoding, then what's decoded, results
the specification of a task, the command, and the connection, where to
return its result. The specification is a data structure, so it's an
object or Object, then. This is added to a queue of tasks, where
"buffers" represent the ephemeral storage of the byte-sequences' content
in transport, while the queue is, as usual, a first-in/first-out
(FIFO) queue of tasks.
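A small sketch of the task specification and that FIFO task queue, the "TQ" (the Task type here is illustrative, not from any library): the decoded command plus the connection where its result is to be returned.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// The specification is just a data structure: the command, and where to
// return its result; the TQ is a plain FIFO queue of those.
public class TaskQueueDemo {
    static final class Task {
        final String command;   // the decoded command
        final int connectionId; // where to return its result
        Task(String command, int connectionId) {
            this.command = command;
            this.connectionId = connectionId;
        }
    }

    // Consumers drain the queue in arrival order (first-in/first-out).
    static String drain(Queue<Task> tq) {
        StringBuilder order = new StringBuilder();
        for (Task t; (t = tq.poll()) != null; ) {
            order.append(t.connectionId).append(':').append(t.command).append(' ');
        }
        return order.toString().trim();
    }

    public static void main(String[] args) {
        Queue<Task> tq = new ConcurrentLinkedQueue<>(); // FIFO, non-blocking
        tq.add(new Task("ARTICLE 100", 7));
        tq.add(new Task("GROUP sci.math", 9));
        System.out.println(drain(tq)); // 7:ARTICLE 100 9:GROUP sci.math
    }
}
```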

Then, the idea is that each of the cores consumes task specifications
from the task queue, performs them according to the task specification,
then the results are written out, as coded/compressed/crypted, in the
protocol.

So, to avoid the threads blocking at all, introduces the idea of
"asynchrony" or callbacks, where the idea is that the "blocking" and
"synchronous" has that anywhere in the threads' thread-of-control
flow-of-control, according to the program or the routine, it is current
and synchronous, the value that it has, then with regards to what it
returns or writes, as the result. So, "asynchrony" is the idea that
there's established a callback, or a place to pause and continue, then a
specification of the task in the protocol is put to an event queue and
executed, or from servicing the O/I's of the backend flow, that what
results from that, has the context of the callback and returns/writes to
the relevant connection, its result.

I -> flow -> protocol -> routine -> protocol -> flow -> O -v
O <- flow <- protocol <- routine <- protocol <- flow <- I <-


The idea of non-blocking then, is that a routine either provides a
result immediately available, and is non-blocking, or, queues a task
that results in a callback providing the result eventually, and is
non-blocking, and never invokes any other routine that blocks, so is
non-blocking.

This way a thread, executing tasks, always runs through a task, and thus
services the task queue or TQ, so that the cores' threads are always
running and never blocking. (Besides the I/O and O/I threads which block
when there's no traffic, and usually would be constantly woken up and
not waiting blocked.) This way, the TQ threads, only block when there's
nothing in the TQ, or are just deconstructed, and reconstructed, in a
"pool" of threads, the TQ's executor pool.
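A sketch of that executor pool, sized one thread per core, where each thread only ever runs a task to completion and comes back for the next (the helper method here is illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// The TQ's executor pool: one thread per core; threads run tasks to
// completion and never block mid-task.
public class CorePoolDemo {
    static int runAll(int tasks) throws InterruptedException {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService tq = Executors.newFixedThreadPool(cores);
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < tasks; i++) {
            tq.execute(done::incrementAndGet); // a task: runs, completes, next
        }
        tq.shutdown();
        tq.awaitTermination(5, TimeUnit.SECONDS);
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runAll(4) + " tasks completed"); // 4 tasks completed
    }
}
```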

Enter the ReRoutine

The idea of a ReRoutine, a re-routine, is that it is a usual procedural
implementation as if it were synchronous, and agnostic of callbacks.

It is named after "routine" and "co-routine". It is a sort of co-routine
that builds a monad and is aware of its originating caller, re-caller, and
callback, or, its re-routine caller, re-caller, and callback.

The idea is that there are callbacks implicitly at each method boundary,
and that nulls are reserved values to indicate the result or lack
thereof of re-routines, so that the code has neither callbacks nor any
nulls.

The originating caller has that the TQ, has a task specification, the
session+attachment of the client in the protocol where to write the
output, and the command, then the state of the monad of the task, that
lives on the heap with the task specification and task object. The TQ
consumers or executors or the executor, when a thread picks up the task,
it picks up or builds ("originates") the monad state, which is the
partial state of the re-routine and a memo of the partial state of the
re-routine, and installs this in the thread local storage or
ThreadLocal, for the duration of the invocation of the re-routine. Then
the thread enters the re-routine, which proceeds until it would block,
where instead it queues a command/task with a callback to re-call and
re-launch it, throws a NullPointerException, and quits/returns.

This happens recursively and iteratively in the re-routine implemented
as re-routines, each re-routine updates the partial state of the monad,
then that as a re-routine completes, it re-launches the calling
re-routine, until the original re-routine completes, and it calls the
original callback with the result.

This way the re-routine's method body, is written as plain declarative
procedural code, the flow-of-control, is exactly as if it were
synchronous code, and flow-of-control is exactly as if written in the
language with no callbacks and never nulls, and exception-handling as
exactly defined by the language.

As the re-routine accumulates the partial results, they live on the
heap, in the monad, as a member of the originating task's object, the
task in the task queue. This is always added back to the queue as one of
the pending results of a re-routine, so it stays referenced as an object
on the heap, then that as it is completed and the original re-routine
returns, then it's no longer referenced and the garbage-collector can
reclaim it from the heap or the allocator can delete it.
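The whole cycle above can be sketched in a few lines of runnable Java, with all names illustrative: the monad is a memo of partial results in a ThreadLocal, a pending sub-result is null, touching it throws NullPointerException (the "quit"), and the originator re-launches the same plain flow-of-control until every input is satisfied (here a second pass stands in for an asynchronous callback filling the memo):

```java
import java.util.HashMap;
import java.util.Map;

public class ReRoutineDemo {
    static final ThreadLocal<Map<String, Object>> MEMO = new ThreadLocal<>();

    // A sub-re-routine: on first call, "launch" the async work and return
    // null (pending); on a later pass, the memo has been filled.
    static String rr2(String a1) {
        Map<String, Object> memo = MEMO.get();
        String key = "rr2:" + a1;
        if (!memo.containsKey(key)) {   // not launched yet: launch, pend
            memo.put(key, null);
            return null;
        }
        if (memo.get(key) == null) memo.put(key, a1 + "-r2"); // callback fills memo
        return (String) memo.get(key);
    }

    // The re-routine body: plain synchronous flow-of-control, no callbacks,
    // no null-handling; touching a pending null throws and quits.
    static String rr1(String a1) {
        String r2 = rr2(a1);
        return "result(" + r2.length() + "," + r2 + ")";
    }

    // The originator: re-launch on each quit until the monad is complete.
    static String originate(String a1) {
        MEMO.set(new HashMap<>());
        try {
            while (true) {
                try {
                    return rr1(a1); // runs through its memo each pass
                } catch (NullPointerException pending) {
                    // an unsatisfied input: quit, "re-launch" next pass
                }
            }
        } finally {
            MEMO.remove(); // flush the memo when complete
        }
    }

    public static void main(String[] args) {
        System.out.println(originate("a1")); // result(5,a1-r2)
    }
}
```

In the real apparatus the re-launch comes off the TQ when the callback fires, rather than spinning in a loop, but the method body rr1 is identical either way.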







Well, for the re-routine, I sort of figure there's a Callstack and a
Callback type

class Callstack {
    Stack<Callback> callstack;
}

interface Callback {
    void callback() throws Exception;
}

and then a placeholder sort of type for Callflush

class Callflush {
    Callstack callstack;
}

with the idea that the presence in ThreadLocals is to be sorted out,
about a kind of ThreadLocal static pretty much.

With not returning null and for memoizing call-graph dependencies,
there's basically for an "unvoid" type.

class unvoid {

}

Then it's sort of figured that there's an interface with some defaults,
with the idea that some boilerplate gets involved in the Memoization.

interface Caller {}

interface Callee {}

interface Callmemo {
    void memoize(Caller caller, Object[] args);
    void flush(Caller caller);
}


Then it seems that the Callstack should instead be of a Callgraph, and
then what's maintained from call to call is a Callpath, and then what's
memoized is all kept with the Callgraph, then with regards to objects on
the heap and their distinctness, only being reachable from the
Callgraph, leaving less work for the garbage collector, to maintain the
heap.

The interning semantics would still be on the class level, or for
constructor semantics, as with regards to either interning Objects for
uniqueness, or that otherwise they'd be memoized, with the key being the
Callpath, and the initial arguments into the Callgraph.

Then the idea seems that the ThreaderCaller, establishes the Callgraph
with respect to the Callgraph of an object, installing it on the thread,
otherwise attached to the Callgraph, with regards to the ReRoutine.



About the ReRoutine, it's starting to come together as an idea, what is
the apparatus for invoking re-routines, that they build the monad of the
IOE's (inputs, outputs, exceptions) of the re-routines in their
call-graph, in terms of ThreadLocals of some ThreadLocals that callers
of the re-routines, maintain, with idea of the memoized monad along the
way, and each original re-routine.

class IOE<O, E extends Exception> {
    Object[] input;
    O output;
    E exception;
}

So the idea is that there are some ThreadLocal's in a static ThreadGlobal

public class ThreadGlobals {
    public static ThreadLocal<MonadMemo> monadMemo;
}

where callers or originators or ReRoutines, keep a map of the Runnables
or Callables they have, to the MonadMemo's,

class Originator {
    Map<? extends ReRoutineMapKey, MonadMemo> monadMemoMap;
}

then when it's about to invoke a Runnable, if it's a ReRoutine, then it
either retrieves the MonadMemo or makes a new one, and sets it on the
ThreadLocal, then invokes the Runnable, then clears the ThreadLocal.

Then a MonadMemo, pretty simply, is a List of IOE's, that when the
ReRoutine runs through the callgraph, the callstack is indicated by a
tree of integers, and the stack path in the ReRoutine, so that any
ReRoutine that calls ReRoutines A/B/C, points to an IOE that it finds in
the thing, then its default behavior is to return its memoized value,
that otherwise is making the callback that fills its memo and re-invokes
all the way back the Original routine, or just its own entry point.

This is basically that the Originator, when the ReRoutine quits out,
sort of has that any ReRoutine it originates, also gets filled up by the
Originator.

So, then the Originator sort of has a map to a ReRoutine, then for any
Path, the Monad, so that when it sets the ThreadLocal with the
MonadMemo, it also sets the Path for the callee, launches it again when
its callback returned to set its memo and relaunch it, then back up the
path stack to the original re-routine.

One of the issues here is "automatic parallelization". What I mean by
that is that the re-routine just goes along and when it gets nulls
meaning "pending" it just continues along, then expects
NullPointerExceptions as "UnsatisfiedInput", to quit, figuring it gets
relaunched when its input is satisfied.

This way then when routines serially don't depend on each others'
outputs, then they all get launched apiece, parallelizing.

Then, I wonder about usual library code, basically about Collections and
Streams, and the usual sorts of routines that are applied to the
arguments, and how to basically establish that the rule of re-routine
code is that anything that gets a null must throw a
NullPointerException, so the re-routine will quit until the arguments
are satisfied, the inputs to library code. Then with the Memo being
stored in the MonadMemo, it's figured that will work out regardless the
Objects' or primitives' value, with regards to Collections and Stream
code and after usual flow-of-control in Iterables for the for loops, or
whatever other application library code, that they will be run each time
the re-routine passes their section with satisfied arguments, then as
with regards to, that the Memo is just whatever serial order the
re-routine passes, not needing to lookup by Object identity which is
otherwise part of an interning pattern.

Map<String, String> rr1(String s1) {

    List<String> l1 = rr2.get(s1);

    Map<String, String> m1 = new LinkedHashMap<>();

    l1.stream().forEach(s -> m1.put(s, rr3.get(s)));

    return m1;
}

See what I figure is that the order of the invocations to rr3.get() is
serial, so it really only needs to memoize its OE, Output|Exception,
then about that putting null values in the Map, and having to check the
values in the Map for null values, and otherwise to make it so that the
semantics of null and NullPointerException, result that satisfying
inputs result calls, and unsatisfying inputs result quits, figuring
those unsatisfying inputs are results of unsatisfied outputs, that will
be satisfied when the callee gets populated its memo and makes the callback.

If the order of invocations is out-of-order, gets again into whether the
Object/primitive by value needs to be the same each time, IOE, about the
library code in Collections, Streams, parallelStream, and Iterables, and
basically otherwise that any kind of library code, should throw
NullPointerException if it gets an "unexpected" null or what doesn't
fulfill it.

The idea though that rr3 will get invoked say 1000 times with the rr2's
result, those each make their call, then re-launch 1000 times, has that
it's figured that the Executor, or Originator, when it looks up and
loads the "ReRoutineMapKey", is to have the count of those and whether
the count is fulfilled, then to no-op later re-launches of the
call-backs, after all the results are populated in the partial monad memo.

Then, there's perhaps instead as that each re-routine just checks its
input or checks its return value for nulls, those being unsatisfied.

(The exception handling thoroughly or what happens when rr3 throws and
this kind of thing is involved thoroughly in library code.)

The idea is it remains correct if the worst thing nulls do is throw
NullPointerException, because that's just a usual quit and means another
re-launch is coming up, and that it automatically queues for
asynchronous parallel invocation each of the derivations, while never
blocking.

It's figured that re-routines check their inputs for nulls, and throw
quit, and check their inputs for library container types, and checking
any member of a library container collection for null, to throw quit,
and then it will result that the automatic asynchronous parallelization
proceeds, while the re-routines are never blocking, there's only as much
memory on the heap of the monad as would be in the lifetime of the
original re-routine, and whatever re-calls or re-launches of the
re-routine established local state in local variables and library code,
would come in and out of scope according to plain stack unwinding.
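That input-checking convention can be sketched as a pair of helpers (the names unnull/unnullAll are illustrative): a null argument, or a null member inside a container argument, is an unsatisfied input, so throw quit.

```java
import java.util.Arrays;
import java.util.List;

// "Check inputs, throw quit": any null, including a null member of a
// container, is a pending input; throwing NullPointerException quits this
// pass of the re-routine until the input is satisfied.
public class Unnull {
    static <T> T unnull(T value) {
        if (value == null) throw new NullPointerException("unsatisfied input");
        return value;
    }

    static <T> List<T> unnullAll(List<T> values) {
        unnull(values);
        for (T v : values) unnull(v); // any pending member quits the pass
        return values;
    }

    public static void main(String[] args) {
        System.out.println(unnullAll(Arrays.asList("a", "b")).size()); // 2
        try {
            unnullAll(Arrays.asList("a", null));
        } catch (NullPointerException quit) {
            System.out.println("quit: " + quit.getMessage()); // quit: unsatisfied input
        }
    }
}
```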

Then there's still the perceived deficiency that the re-routine's method
body will be run many times, yet it's only run as many times as result
in a throw-quit, when it reaches where its argument to the re-routine or
result value isn't yet satisfied yet is pending.

It would re-run the library code any number of times, until it results
all non-nulls, then the resulting satisfied argument to the following
re-routines, would be memo-ized in the monad, and the return value of
the re-routine thus returning immediately its value on the partial monad.

This way each re-call of the re-routine, mostly encounters its own monad
results in constant time, and throws-quit or gets thrown-quit only when
it would be unsatisfying, with the expectation that whatever
throws-quit, either NullPointerException or extending
NullPointerException, will have a pending callback, that will queue on a
TQ, the task specification to re-launch and re-enter the original or
derived, re-routine.

The idea is sort of that it's sort of, Java with non-blocking I/O and
ThreadLocal (1.7+, not 17+), or you know, C/C++ with non-blocking I/O
and thread local storage, then for the abstract or interface of the
re-routines, how it works out that it's a usual sort of model of
co-operative multithreading, the re-routine, the routine "in the language".


Then it's great that the routine can be stubbed or implemented agnostic
of asynchrony, and declared in the language with standard libraries,
basically using the semantics of exception handling and convention of
re-launching callbacks to implement thread-of-control flow-of-control,
that can be implemented in the synchronous and blocking for unit tests
and modules of the routine, making a great abstraction of flow-of-control.


Basically anything that _does_ block then makes for having its own
thread, whose only job is to block and when it unblocks, throw-toss the
re-launch toward the origin of the re-routine, and consume the next
blocking-task off the TQ. Yet, the re-routines and their servicing the
TQ only need one thread and never block. (And scale in core count and
automatically parallelize asynchronous requests according to satisfied
inputs.)
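That blocking-adapter thread can be sketched as follows (the rendezvous queue here stands in for the TQ, and the names are illustrative): its only job is to block, then toss the re-launch back toward the origin.

```java
import java.util.concurrent.SynchronousQueue;

// Anything that does block gets its own thread: block, then when it
// unblocks, throw-toss the re-launch toward the re-routine's origin.
public class BlockingAdapterDemo {
    static String awaitRelaunch() throws InterruptedException {
        SynchronousQueue<String> toOrigin = new SynchronousQueue<>();
        Thread blocker = new Thread(() -> {
            try {
                Thread.sleep(50);              // stands in for a blocking call
                toOrigin.put("re-launch rr1"); // throw-toss toward the origin
            } catch (InterruptedException ignored) {}
        });
        blocker.start();
        return toOrigin.take(); // the TQ side picks up the re-launch
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(awaitRelaunch()); // re-launch rr1
    }
}
```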


Mostly the idea of the re-routine is "in the language, it's just plain,
ordinary, synchronous routine".
Ross Finlayson
2024-04-25 17:46:48 UTC
Permalink
Post by Ross Finlayson
Post by Ross Finlayson
Well I've been thinking about the re-routine as a model of cooperative
multithreading,
then thinking about the flow-machine of protocols
NNTP
IMAP <-> NNTP
HTTP <-> IMAP <-> NNTP
Both IMAP and NNTP are session-oriented on the connection, while,
HTTP, in terms of session, has various approaches in terms of HTTP 1.1
and connections, and the session ID shared client/server.
The re-routine idea is this, that each kind of method, is memoizable,
and, it memoizes, by object identity as the key, for the method, all
its callers, how this is like so.
interface Reroutine1 {
    Result1 rr1(String a1) {
        Result2 r2 = reroutine2.rr2(a1);
        Result3 r3 = reroutine3.rr3(r2);
        return result(r2, r3);
    }
}
The idea is that the executor, when it's submitted a reroutine,
when it runs the re-routine, in a thread, then it puts in a ThreadLocal,
the re-routine, so that when a re-routine it calls, returns null as it
starts an asynchronous computation for the input, then when
it completes, it submits to the executor the re-routine again.
Then rr1 runs through again, retrieving r2 which is memoized,
invokes rr3, which throws, after queuing to memoize and
resubmit rr1; when that calls back to resubmit rr1, then rr1
returns, signaling the original invoker.
Then it seems each re-routine basically has an instance part
and a memoized part, and that it's to flush the memo
after it finishes, in terms of memoizing the inputs.
Result1 rr1(String a1) {
    // if a1 is in the memo, return for it
    // else queue for it and carry on
}
What is a re-routine?
It's a pattern for cooperative multithreading.
It's sort of a functional approach to functions and flow.
It has a declarative syntax in the language with usual
flow-of-control.
So, it's cooperative multithreading so it yields?
No, it just quits, and expects to be called back.
So, if it quits, how does it complete?
The entry point to re-routine provides a callback.
Re-routines only return results to other re-routines,
It's the default callback. Otherwise they just callback.
So, it just quits?
If a re-routine gets called with a null, it throws.
If a re-routine gets a null, it just continues.
If a re-routine completes, it callbacks.
So, can a re-routine call any regular code?
Yeah, there are some issues, though.
So, it's got callbacks everywhere?
Well, it's just got callbacks implicitly everywhere.
So, how does it work?
Well, you build a re-routine with an input and a callback,
you call it, then when it completes, it calls the callback.
Then, re-routines call other re-routines with the argument,
and the callback's in a ThreadLocal, and the re-routine memoizes
all of its return values according to the object identity of the inputs,
then when a re-routine completes, it calls again with another ThreadLocal
indicating to delete the memos, following the exact same
flow-of-control
only deleting the memos going along, until it results all the memos in
the re-routines for the interned or ref-counted input are deleted,
then the state of the re-routine is de-allocated.
So, it's sort of like a monad and all in pure and idempotent functions?
Yeah, it's sort of like a monad and all in pure and idempotent functions.
So, it's a model of cooperative multithreading, though with no yield,
and callbacks implicitly everywhere?
Yeah, it's sort of figured that a called re-routine always has a
callback in the ThreadLocal, because the runtime has pre-emptive
multithreading anyways, that the thread runs through its re-routines in
their normal declarative flow-of-control with exception handling, and
whatever re-routines or other pure monadic idempotent functions it
calls, throw when they get null inputs.
Also it sort of doesn't have primitive types, Strings must always
be interned, all objects must have a distinct identity w.r.t. ==, and
null is never an argument or return value.
So, what does it look like?
interface Reroutine1 {
    Result1 rr1(String a1) {
        Result2 r2 = reroutine2.rr2(a1);
        Result3 r3 = reroutine3.rr3(r2);
        return result(r2, r3);
    }
}
So, I expect that to return "result(r2, r3)".
Well, that's synchronous, and maybe blocking, the idea is that it
calls rr2, gets a1, and rr2 constructs with the callback of rr1 and its
own callback, and a1, and makes a memo for a1, and invokes whatever is
its implementation, and returns null, then rr1 continues and invokes rr3
with r2, which is null, so that throws a NullPointerException, and rr1
quits.
So, ..., that's cooperative multithreading?
Well you see what happens is that rr2 invoked another re-routine or
end routine, and at some point it will get called back, and that will
happen over and over again until rr2 has an r2, then rr2 will memoize
(a1, r2), and then it will callback rr1.
Then rr1 had quit, it runs again, this time it gets r2 from the
(a1, r2) memo in the monad it's building, then it passes a non-null r2
to rr3, which proceeds in much the same way, while rr1 quits again until
rr3 calls it back.
So, ..., it's non-blocking, because it just quits all the time, then
happens to run through the same paces filling in?
That's the idea, that re-routines are responsible to build the
monad and call-back.
So, can I just implement rr2 and rr3 as synchronous and blocking?
Sure, they're interfaces, their implementation is separate. If
they don't know re-routine semantics then they're just synchronous and
blocking. They'll get called every time though when the re-routine gets
called back, and actually they need to know the semantics of returning
an Object or value by identity, because, calling equals() to implement
Memo usually would be too much, where the idea is to actually function
only monadically, and that given same Object or value input, must return
same Object or value output.
So, it's sort of an approach as a monadic pure idempotency?
Well, yeah, you can call it that.
So, what's the point of all this?
Well, the idea is that there are 10,000 connections, and any time
one of them demultiplexes off the connection an input command message,
then it builds one of these with the response input to the demultiplexer
on its protocol on its connection, on the multiplexer to all the
connections, with a callback to itself. Then the re-routine is launched
and when it returns, it calls-back to the originator by its
callback-number, then the output command response writes those back out.
The point is that there are only as many Theads as cores so the
goal is that they never block,
and that the memos make for interning Objects by value, then the goal is
mostly to receive command objects and handles to request bodies and
result objects and handles to response bodies, then to call-back with
those in whatever serial order is necessary, or not.
So, won't this run through each of these re-routines umpteen times?
Yeah, you figure that the runtime of the re-routine is on the order
of n^2 the order of statements in the re-routine.
So, isn't that terrible?
Well, it doesn't block.
So, it sounds like a big mess.
Yeah, it could be. That's why to avoid blocking and callback
semantics, is to make monadic idempotency semantics, so then the
re-routines are just written in normal synchronous flow-of-control, and
their well-defined behavior is exactly according to flow-of-control
including exception-handling.
There's that and there's basically it only needs one Thread, so,
less Thread x stack size, for a deep enough thread call-stack. Then the
idea is about one Thread per core, figuring for the thread to always be
running and never be blocking.
So, it's just normal flow-of-control.
Well yeah, you expect to write the routine in normal
flow-of-control, and to test it with synchronous and in-memory editions
that just run through synchronously, and that if you don't much care if
it blocks, then it's the same code and has no semantics about the
asynchronous or callbacks actually in it. It just returns when it's done.
So what's the requirements of one of these again?
Well, the idea is, that, for a given instance of a re-routine, it's
an Object, that implements an interface, and it has arguments, and it
has a return value. The expectation is that the re-routine gets called
with the same arguments, and must return the same return value. This
way later calls to re-routines can match the same expectation, same/same.
Also, if it gets different arguments, by Object identity or
primitive value, the re-routine must return a different return value,
those being same/same.
The re-routine memoizes its arguments by its argument list, Object
or primitive value, and a given argument list is same if the order and
types and values of those are same, and it must return the same return
value by type and value.
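So, as a sketch of that memo rule (the names here are just illustrative, not from any particular library), an argument list is same/same when every slot is the same Object by "==", never equals(), and a same key must yield the same memoized return value:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the memo rule: an argument list is same/same when every
// slot is the same Object by ==, never equals(), and a same key must
// return the same memoized value.
class ArgListMemo {
    static final class ArgKey {
        final Object[] args;
        ArgKey(Object... args) { this.args = args; }
        @Override public int hashCode() {
            int h = 1;
            for (Object a : args) h = 31 * h + System.identityHashCode(a);
            return h;
        }
        @Override public boolean equals(Object o) {
            if (!(o instanceof ArgKey)) return false;
            Object[] b = ((ArgKey) o).args;
            if (b.length != args.length) return false;
            for (int i = 0; i < args.length; i++)
                if (args[i] != b[i]) return false;   // identity, never equals()
            return true;
        }
    }

    final Map<ArgKey, Object> memo = new HashMap<>();

    // First call with a given argument list wins; later same/same calls
    // get the memoized value back.
    Object call(Object returnValue, Object... args) {
        return memo.computeIfAbsent(new ArgKey(args), k -> returnValue);
    }
}
```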
So, how is this cooperative multithreading unobtrusively in
flow-of-control again?
Here for example the idea would be, rr2 quits and rr1 continues, rr3
quits and rr1 continues, then reaching rr4, rr4 throws and rr1 quits.
When rr2's or rr3's memo-callback completes, then it calls-back rr1. As
those come in, at some point rr4 will be fulfilled, and thus rr4 will
quit and rr1 will quit. When rr4's callback completes, then it will
call-back rr1, which will finally complete, and then call-back whatever
called rr1. Then rr1 runs itself through one more time to
delete or decrement all its memos.
class Reroutine1 {
    Result1 rr1(String a1) {
        Result2 r2 = reroutine2.rr2(a1);
        Result3 r3 = reroutine3.rr3(a1);
        Result4 r4 = reroutine4.rr4(a1, r2, r3);
        return Result1.r4(a1, r4);
    }
}
The idea is that it doesn't block when it launches rr2 and rr3, until
such time as it just quits when it tries to invoke rr4 and gets a
resulting NullPointerException, then eventually rr4 will complete and be
memoized and call-back rr1, then rr1 will be called-back and then
complete, then run itself through to delete or decrement the ref-count
of all its memo-ized fragmented monad respectively.
Thusly it's cooperative multithreading by never blocking and always just
launching callbacks.
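So, a minimal sketch of that quit/re-launch convention, with hypothetical names: a pending sub-routine yields null, the first use of it throws the NullPointerException that unwinds rr1 (the quit), and once the memo is fulfilled, re-launching rr1 runs straight through:

```java
// Sketch of the quit/re-launch convention (hypothetical names): a
// pending sub-routine yields null; the first use of it throws
// NullPointerException, which unwinds rr1 (the quit); once the memo is
// fulfilled, re-launching rr1 runs straight through.
class QuitDemo {
    static String memo2 = null;            // rr2's memo slot, null while pending

    static String rr2(String a1) {
        return memo2;                      // pending: null; fulfilled: the value
    }

    static String rr1(String a1) {
        String r2 = rr2(a1);
        return "r4(" + r2.trim() + ")";    // trim() is where the quit throws
    }
}
```

Here the "callback" is just whatever fulfills memo2 and re-invokes rr1.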
There's this System.identityHashCode() method and then there's a notion
of Object pools and interning Objects then as for about this way that
it's about numeric identity instead of value identity, so that when
making memo's that it's always "==" and for a HashMap with
System.identityHashCode() instead of ever calling equals(), when calling
equals() is more expensive than calling == and the same/same
memo-ization is about Object numeric value or the primitive scalar
value, those being same/same.
https://docs.oracle.com/javase/8/docs/api/java/lang/System.html#identityHashCode-java.lang.Object-
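A sketch of that interning idea, assuming values get interned once at the boundary (paying equals()/hashCode() once) so that everything interior is "==" via IdentityHashMap, which hashes by System.identityHashCode and compares by ==:

```java
import java.util.HashMap;
import java.util.IdentityHashMap;
import java.util.Map;

// Sketch: pay equals()/hashCode() once at the boundary to intern, then
// every interior memo lookup is identity-only via IdentityHashMap,
// which hashes by System.identityHashCode and compares by ==.
class InternedMemo {
    private final Map<Object, Object> pool = new HashMap<>();         // by value, once
    private final Map<Object, Object> memo = new IdentityHashMap<>(); // by ==, after

    @SuppressWarnings("unchecked")
    <T> T intern(T value) {
        return (T) pool.computeIfAbsent(value, v -> v);
    }

    void memoize(Object internedArg, Object result) {
        memo.put(internedArg, result);
    }

    Object lookup(Object internedArg) {
        return memo.get(internedArg);     // identity lookup, no equals()
    }
}
```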
So, you figure to return Objects to these connections by their session
and connection and mux/demux in these callbacks and then write those out?
Well, the idea is to make it so that according to the protocol, the
back-end sort of knows what makes a handle to a datum of the sort, given
the protocol, and the callback is just
these handles, about what goes in the outer callbacks or outside the
re-routine, those can be different/same. Then the single writer thread
servicing the network I/O just wants to transfer those handles, or, as
necessary through the compression and encryption codecs, then write
those out, well making use of the java.nio for scatter/gather and vector
I/O in the non-blocking and asynchronous I/O as much as possible.
So, that seems a lot of effort to just passing the handles, ....
Well, I don't want to write any code except normal flow-of-control.
So, this same/same bit seems onerous, as long as different/same has a
ref-count and thus the memo-ized monad-fragment is maintained when all
sorts of requests fetch the same thing.
Yeah, maybe you're right. There's much to be gained by re-using monadic
pure idempotent functions yet only invoking them once. That gets into
value equality besides numeric equality, though, with regards to going
into re-routines and interning all Objects by value, so that inside and
through it's all "==" and System.identityHashCode, the memos, then about
the ref-counting in the memos.
So, I suppose you know HTTP, and about HTTP/2 and IMAP and NNTP here?
Yeah, it's a thing.
So, I think this needs a much cleaner and well-defined definition, to
fully explore its meaning.
Yeah, I suppose. There's something to be said for reading it again.
ReRoutines: monadic functional non-blocking asynchrony in the language
Implementing a sort of Internet protocol server, it sort of has three or
four kinds of machines.
flow-machine: select/epoll hardware driven I/O events
protocol-establishment: setting up and changing protocol (commands,
encryption/compression)
protocol-coding: block coding in encryption/compression and wire/object
commands/results
routine: inside the objects of the commands of the protocol,
commands/results
Then, it often looks sort of like
flow <-> protocol <-> routine <-> protocol <-> flow
On either outer side of the flow is a connection, it's a socket or the
receipt or sending of a datagram, according to the network interface and
select/epoll.
The establishment of a protocol looks like
connection/configuration/commencement/conclusion, or setup/teardown.
Protocols get involved renegotiation within a protocol, and for example
upgrade among protocols. Then the protocol is setup and established.
The idea is that a protocol's coding is in three parts for
coding/decoding, compression/decompression, and (en)cryption/decryption,
or as it gets set up.
flow->decrypt->decomp->decod->routine->cod->comp->crypt->flow-v
flow<-crypt<-comp<-cod<-routine<-decod<-decomp<-decrypt<-flow<-
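That pipeline can be sketched as composable layers, with stand-in codecs where TLS or Deflate would go (the names here are illustrative):

```java
import java.util.List;

// Sketch of the layer pipeline from the diagram: inbound runs
// decrypt -> decomp -> decod toward the routine, outbound runs the same
// layers reversed, cod -> comp -> crypt toward the flow. Real codecs
// (TLS, Deflate) would stand in for Layer implementations.
class LayerChain {
    interface Layer {
        byte[] in(byte[] wire);    // toward the routine
        byte[] out(byte[] data);   // toward the flow
    }

    private final List<Layer> layers;   // outermost layer (crypt) first
    LayerChain(List<Layer> layers) { this.layers = layers; }

    byte[] inbound(byte[] wire) {
        byte[] b = wire;
        for (Layer l : layers) b = l.in(b);           // decrypt, decomp, decod
        return b;
    }

    byte[] outbound(byte[] data) {
        byte[] b = data;
        for (int i = layers.size() - 1; i >= 0; i--)  // cod, comp, crypt
            b = layers.get(i).out(b);
        return b;
    }
}
```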
Whenever data arrives, the idea goes, is that the flow is interpreted
according to the protocol, resulting commands, then the routine derives
results from the commands, as by issuing others, in their protocols, to
the backend flow. Then, the results get sent back out through the
protocol, to the frontend, to the clients of the protocol that the
server serves.
The idea is that there are about 10,000 connections at a time, or more
or less.
flow <-> protocol <-> routine <-> protocol <-> flow
flow <-> protocol <-> routine <-> protocol <-> flow
flow <-> protocol <-> routine <-> protocol <-> flow
...
Then, the routine in the middle, has that there's one processor, and on
the processor are a number of cores, each one independent. Then, the
operating system establishes that each of the cores, has any number of
threads-of-control or threads, and each thread has the state of where it
is in the callstack of routines, and the threads are preempted so that
multithreading, that a core runs multiple threads, gives each thread
some running from the entry to the exit of the thread, in any given
interval of time. Each thread-of-control is thusly independent, while it
must synchronize with any other thread-of-control, to establish common
or mutual state, and threads establish taking turns by mutual exclusion,
called "mutex".
Into and out of the protocol, coding, is either a byte-sequence or
block, or otherwise the flow is a byte-sequence, that being serial,
however the protocol multiplexes and demultiplexes messages, the
commands and their results, to and from the flow.
Then the idea is that what arrives to/from the routine, is objects in
the protocol, or handles to the transport of byte sequences, in the
protocol, to the flow.
A usual idea is that there's a thread that services the flow, where, how
it works is that a thread blocks waiting for there to be any I/O,
input/output, reading input from the flow, and writing output to the
flow. So, mostly the thread that blocks has that there's one thread that
blocks on input, and when there's any input, then it reads or transfers
the bytes from the input, into buffers. That's its only job, and only
one thread can block on a given select/epoll selector, which is any
given number of ports, the connections, the idea being that it just
blocks until select returns for its keys of interest, it services each
of the I/O's by copying from the network interface's buffers into the
program's buffers, then other threads do the rest.
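In java.nio terms, one pass of that single blocking Reader thread looks roughly like this sketch (a real loop would also handle accepts and writes, and run forever):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ReadableByteChannel;
import java.nio.channels.SelectableChannel;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

// Sketch of the one Reader thread that blocks on select(): when select()
// wakes, it copies bytes from each readable channel into that
// connection's program buffer; everything else is other threads' work.
class ReaderLoop {
    // Register a channel with its own program buffer as attachment.
    static void register(Selector sel, SelectableChannel ch) throws IOException {
        ch.configureBlocking(false);
        ch.register(sel, SelectionKey.OP_READ, ByteBuffer.allocate(8192));
    }

    // One pass: block until something is readable, transfer the bytes,
    // return how many arrived. The real loop would run this forever.
    static int pumpOnce(Selector sel) throws IOException {
        int total = 0;
        sel.select();                                    // the only blocking call
        for (SelectionKey key : sel.selectedKeys()) {
            if (key.isReadable()) {
                ReadableByteChannel ch = (ReadableByteChannel) key.channel();
                ByteBuffer buf = (ByteBuffer) key.attachment();
                int n = ch.read(buf);                    // NIC buffers -> program buffer
                if (n < 0) key.cancel(); else total += n;
            }
        }
        sel.selectedKeys().clear();
        return total;
    }
}
```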
So, if a thread results waiting at all for any other action to complete
or be ready, it's said to "block". While a thread is blocked, the CPU or
core just skips it in scheduling the preemptive multithreading, yet it
still takes some memory and other resources and is in the scheduler of
the threads.
The idea that the I/O thread, ever blocks, is that it's a feature of
select/epoll that hardware results waking it up, with the idea that
that's the only thread that ever blocks.
So, for the other threads, in the decryption/decompression/decoding and
coding/compression/cryption, the idea is that a thread, runs through
those, then returns what it's doing, and joins back to a limited pool of
threads, with a usual idea of there being 1 core : 1 thread, so that
multithreading is sort of simplified, because as far as the system
process is concerned, it has a given number of cores and the system
preemptively multithreads it, and as far as the virtual machine is
concerned, it has a given number of cores and the virtual machine
preemptively multithreads its threads, about the thread-of-control, in
the flow-of-control, of the thing.
A usual way that the routine multiplexes and demultiplexes objects in the
protocol from a flow's input back to a flow's output, has that the
thread-per-connection model has that a single thread carries out the
entire task through the backend flow, blocking along the way, until it
results joining after writing back out to its connection. Yet, that has
a thread per each connection, and threads use scheduling and heap
resources. So, here thread-per-connection is being avoided.
Then, a usual idea of the tasks, is that as I/O is received and flows
into the decryption/decompression/decoding, then what's decoded, results
the specification of a task, the command, and the connection, where to
return its result. The specification is a data structure, so it's an
object or Object, then. This is added to a queue of tasks, where
"buffers" represent the ephemeral storage of content in transport the
byte-sequences, while, the queue is as usually a first-in/first-out
(FIFO) queue also, of tasks.
Then, the idea is that each of the cores consumes task specifications
from the task queue, performs them according to the task specification,
then the results are written out, as coded/compressed/crypted, in the
protocol.
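A sketch of that task specification and its FIFO queue, with illustrative names:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of the task specification: the decoded command plus where to
// return its result, queued first-in/first-out for the cores' consumer
// threads.
class TaskQueue {
    static final class TaskSpec {
        final Object command;        // the decoded command object
        final Object connection;     // session/connection to write back to
        TaskSpec(Object command, Object connection) {
            this.command = command;
            this.connection = connection;
        }
    }

    final BlockingQueue<TaskSpec> tq = new LinkedBlockingQueue<>();  // FIFO

    void submit(Object command, Object connection) {
        tq.add(new TaskSpec(command, connection));
    }

    TaskSpec take() throws InterruptedException {
        return tq.take();            // consumers block only when the TQ is empty
    }
}
```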
So, to avoid the threads blocking at all, introduces the idea of
"asynchrony" or callbacks, where the idea is that the "blocking" and
"synchronous" has that anywhere in the threads' thread-of-control
flow-of-control, according to the program or the routine, it is current
and synchronous, the value that it has, then with regards to what it
returns or writes, as the result. So, "asynchrony" is the idea that
there's established a callback, or a place to pause and continue, then a
specification of the task in the protocol is put to an event queue and
executed, or from servicing the O/I's of the backend flow, that what
results from that, has the context of the callback and returns/writes to
the relevant connection, its result.
I -> flow -> protocol -> routine -> protocol -> flow -> O -v
O <- flow <- protocol <- routine <- protocol <- flow <- I <-
The idea of non-blocking then, is that a routine either provides a
result immediately available, and is non-blocking, or, queues a task
what results a callback that provides the result eventually, and is
non-blocking, and never invokes any other routine that blocks, so is
non-blocking.
This way a thread, executing tasks, always runs through a task, and thus
services the task queue or TQ, so that the cores' threads are always
running and never blocking. (Besides the I/O and O/I threads which block
when there's no traffic, and usually would be constantly woken up and
not waiting blocked.) This way, the TQ threads, only block when there's
nothing in the TQ, or are just deconstructed, and reconstructed, in a
"pool" of threads, the TQ's executor pool.
Enter the ReRoutine
The idea of a ReRoutine, a re-routine, is that it is a usual procedural
implementation as if it were synchronous, and agnostic of callbacks.
It is named after "routine" and "co-routine". It is a sort of co-routine
that builds a monad and is aware its originating caller, re-caller, and
callback, or, its re-routine caller, re-caller, and callback.
The idea is that there are callbacks implicitly at each method boundary,
and that nulls are reserved values to indicate the result or lack
thereof of re-routines, so that the code has neither callbacks nor any
nulls.
The originating caller has that the TQ, has a task specification, the
session+attachment of the client in the protocol where to write the
output, and the command, then the state of the monad of the task, that
lives on the heap with the task specification and task object. The TQ
consumers or executors or the executor, when a thread picks up the task,
it picks up or builds ("originates") the monad state, which is the
partial state of the re-routine and a memo of the partial state of the
re-routine, and installs this in the thread local storage or
ThreadLocal, for the duration of the invocation of the re-routine. Then
the thread enters the re-routine, which proceeds until it would block,
where instead it queues a command/task with callback to re-call it to
re-launch it, and throw a NullPointerException and quits/returns.
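A sketch of that originator step, with hypothetical names (TaskOriginator and MONAD_MEMO are not from any library): install the monad state in the ThreadLocal, enter the re-routine, and treat NullPointerException as the quit that re-queues for a later re-launch:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of the originator step (hypothetical names): install the
// task's monad state in a ThreadLocal, enter the re-routine, and treat
// NullPointerException as the quit, re-queueing for a later re-launch.
class TaskOriginator {
    static final ThreadLocal<Object> MONAD_MEMO = new ThreadLocal<>();

    final Queue<Runnable> tq = new ArrayDeque<>();   // stand-in for the TQ

    void originate(Object monadMemo, Runnable reRoutine) {
        MONAD_MEMO.set(monadMemo);                   // partial state, on the heap
        try {
            reRoutine.run();                         // runs to completion, or ...
        } catch (NullPointerException quit) {
            tq.add(reRoutine);                       // ... quits: callback re-launches
        } finally {
            MONAD_MEMO.remove();                     // clear for the next task
        }
    }
}
```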
This happens recursively and iteratively in the re-routine implemented
as re-routines, each re-routine updates the partial state of the monad,
then that as a re-routine completes, it re-launches the calling
re-routine, until the original re-routine completes, and it calls the
original callback with the result.
This way the re-routine's method body, is written as plain declarative
procedural code, the flow-of-control, is exactly as if it were
synchronous code, and flow-of-control is exactly as if written in the
language with no callbacks and never nulls, and exception-handling as
exactly defined by the language.
As the re-routine accumulates the partial results, they live on the
heap, in the monad, as a member of the originating task's object the
task in the task queue. This is always added back to the queue as one of
the pending results of a re-routine, so it stays referenced as an object
on the heap, then that as it is completed and the original re-routine
returns, then it's no longer referenced and the garbage-collector can
reclaim it from the heap or the allocator can delete it.
Well, for the re-routine, I sort of figure there's a Callstack and a
Callback type
class Callstack {
    Stack<Callback> callstack;
}

interface Callback {
    void callback() throws Exception;
}
and then a placeholder sort of type for Callflush
class Callflush {
    Callstack callstack;
}
with the idea that the presence in ThreadLocals is to be sorted out,
about a kind of ThreadLocal static pretty much.
With not returning null and for memoizing call-graph dependencies,
there's basically for an "unvoid" type.
class unvoid {
}
Then it's sort of figure that there's an interface with some defaults,
with the idea that some boilerplate gets involved in the Memoization.
interface Caller {}
interface Callee {}
interface Callmemo {
    void memoize(Caller caller, Object[] args);
    void flush(Caller caller);
}
Then it seems that the Callstack should instead be of a Callgraph, and
then what's maintained from call to call is a Callpath, and then what's
memoized is all kept with the Callgraph, then with regards to objects on
the heap and their distinctness, only being reachable from the
Callgraph, leaving less work for the garbage collector, to maintain the
heap.
The interning semantics would still be on the class level, or for
constructor semantics, as with regards to either interning Objects for
uniqueness, or that otherwise they'd be memoized, with the key being the
Callpath, and the initial arguments into the Callgraph.
Then the idea seems that the ThreaderCaller, establishes the Callgraph
with respect to the Callgraph of an object, installing it on the thread,
otherwise attached to the Callgraph, with regards to the ReRoutine.
About the ReRoutine, it's starting to come together as an idea, what is
the apparatus for invoking re-routines, that they build the monad of the
IOE's (inputs, outputs, exceptions) of the re-routines in their
call-graph, in terms of ThreadLocals of some ThreadLocals that callers
of the re-routines, maintain, with idea of the memoized monad along the
way, and each original re-routine.
class IOE<O, E extends Exception> {
    Object[] input;
    O output;
    E exception;
}
So the idea is that there are some ThreadLocal's in a static ThreadGlobal
public class ThreadGlobals {
    public static ThreadLocal<MonadMemo> monadMemo;
}
where callers or originators or ReRoutines, keep a map of the Runnables
or Callables they have, to the MonadMemo's,
class Originator {
    Map<? extends ReRoutineMapKey, MonadMemo> monadMemoMap;
}
then when it's about to invoke a Runnable, if it's a ReRoutine, then it
either retrieves the MonadMemo or makes a new one, and sets it on the
ThreadLocal, then invokes the Runnable, then clears the ThreadLocal.
Then a MonadMemo, pretty simply, is a List of IOE's, that when the
ReRoutine runs through the callgraph, the callstack is indicated by a
tree of integers, and the stack path in the ReRoutine, so that any
ReRoutine that calls ReRoutines A/B/C, points to an IOE that it finds in
the thing, then its default behavior is to return its memo-ized value,
that otherwise is making the callback that fills its memo and re-invokes
all the way back the Original routine, or just its own entry point.
This is basically that the Originator, when the ReRoutine quits out,
sort of has that any ReRoutine it originates, also gets filled up by the
Originator.
So, then the Originator sort of has a map to a ReRoutine, then for any
Path, the Monad, so that when it sets the ThreadLocal with the
MonadMemo, it also sets the Path for the callee, launches it again when
its callback returned to set its memo and relaunch it, then back up the
path stack to the original re-routine.
One of the issues here is "automatic parallelization". What I mean by
that is that the re-routine just goes along and when it gets nulls
meaning "pending" it just continues along, then expects
NullPointerExceptions as "UnsatisfiedInput", to quit, figuring it gets
relaunched when its input is satisfied.
This way then when routines serially don't depend on each others'
outputs, then they all get launched apiece, parallelizing.
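A sketch of that, where null means pending: the one pass through rr1 launches both independent sub-tasks before the quit at the step that needs their outputs:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of automatic parallelization: null means pending, so one pass
// through rr1 launches both independent sub-tasks before quitting at
// the step that needs their outputs.
class ParallelDemo {
    static final List<String> launched = new ArrayList<>();

    // Stand-in for invoking a pending re-routine: record the launch,
    // return null meaning "not yet satisfied".
    static String pendingCall(String name) {
        launched.add(name);
        return null;
    }

    static String rr1(String a1) {
        String r2 = pendingCall("rr2");   // launched, pending
        String r3 = pendingCall("rr3");   // also launched: no dependency on rr2
        return r2.concat(r3);             // the rr4 step: NPE here is the quit
    }
}
```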
Then, I wonder about usual library code, basically about Collections and
Streams, and the usual sorts of routines that are applied to the
arguments, and how to basically establish that the rule of re-routine
code is that anything that gets a null must throw a
NullPointerException, so the re-routine will quit until the arguments
are satisfied, the inputs to library code. Then with the Memo being
stored in the MonadMemo, it's figured that will work out regardless the
Objects' or primitives' value, with regards to Collections and Stream
code and after usual flow-of-control in Iterables for the for loops, or
whatever other application library code, that they will be run each time
the re-routine passes their section with satisfied arguments, then as
with regards to, that the Memo is just whatever serial order the
re-routine passes, not needing to lookup by Object identity which is
otherwise part of an interning pattern.
Map<String, String> rr1(String s1) {
    List<String> l1 = rr2.get(s1);
    Map<String, String> m1 = new LinkedHashMap<>();
    l1.stream().forEach(s -> m1.put(s, rr3.get(s)));
    return m1;
}
See what I figure is that the order of the invocations to rr3.get() is
serial, so it really only needs to memoize its OE, Output|Exception,
then about that putting null values in the Map, and having to check the
values in the Map for null values, and otherwise to make it so that the
semantics of null and NullPointerException, result that satisfying
inputs result calls, and unsatisfying inputs result quits, figuring
those unsatisfying inputs are results of unsatisfied outputs, that will
be satisfied when the callee gets populated its memo and makes the callback.
If the order of invocations is out-of-order, gets again into whether the
Object/primitive by value needs to be the same each time, IOE, about the
library code in Collections, Streams, parallelStream, and Iterables, and
basically otherwise that any kind of library code, should throw
NullPointerException if it gets an "unexpected" null or what doesn't
fulfill it.
The idea though that rr3 will get invoked say 1000 times with the rr2's
result, those each make their call, then re-launch 1000 times, has that
it's figured that the Executor, or Originator, when it looks up and
loads the "ReRoutineMapKey", is to have the count of those and whether
the count is fulfilled, then to no-op later re-launches of the
call-backs, after all the results are populated in the partial monad memo.
Then, there's perhaps instead as that each re-routine just checks its
input or checks its return value for nulls, those being unsatisfied.
(The exception handling thoroughly or what happens when rr3 throws and
this kind of thing is involved thoroughly in library code.)
The idea is it remains correct if the worst thing nulls do is throw
NullPointerException, because that's just a usual quit and means another
re-launch is coming up, and that it automatically queues for
asynchronous parallel invocation each the derivations while resulting
never blocking.
It's figured that re-routines check their inputs for nulls, and throw
quit, and check their inputs for library container types, and checking
any member of a library container collection for null, to throw quit,
and then it will result that the automatic asynchronous parallelization
proceeds, while the re-routines are never blocking, there's only as much
memory on the heap of the monad as would be in the lifetime of the
original re-routine, and whatever re-calls or re-launches of the
re-routine established local state in local variables and library code,
would come in and out of scope according to plain stack unwinding.
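A sketch of those checks, as helpers a re-routine might call on entry (names illustrative): null input means unsatisfied, so throw the quit, and container arguments get their members checked too:

```java
import java.util.Collection;

// Sketch of the entry checks: null input means unsatisfied, so throw
// the quit; container arguments get their members checked too.
class Quits {
    static <T> T require(T input) {
        if (input == null) throw new NullPointerException("unsatisfied input");
        return input;
    }

    static <C extends Collection<?>> C requireAll(C inputs) {
        require(inputs);
        for (Object member : inputs) require(member);
        return inputs;
    }
}
```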
Then there's still the perceived deficiency that the re-routine's method
body will be run many times, yet it's only run as many times as result
throwing-quit, when it reaches where its argument to the re-routine or
result value isn't yet satisfied yet is pending.
It would re-run the library code any number of times, until it results
all non-nulls, then the resulting satisfied argument to the following
re-routines, would be memo-ized in the monad, and the return value of
the re-routine thus returning immediately its value on the partial monad.
This way each re-call of the re-routine, mostly encounters its own monad
results in constant time, and throws-quit or gets thrown-quit only when
it would be unsatisfying, with the expectation that whatever
throws-quit, either NullPointerException or extending
NullPointerException, will have a pending callback, that will queue on a
TQ, the task specification to re-launch and re-enter the original or
derived, re-routine.
The idea is sort of that it's sort of, Java with non-blocking I/O and
ThreadLocal (1.7+, not 17+), or you know, C/C++ with non-blocking I/O
and thread local storage, then for the abstract or interface of the
re-routines, how it works out that it's a usual sort of model of
co-operative multithreading, the re-routine, the routine "in the language".
Then it's great that the routine can be stubbed or implemented agnostic
of asynchrony, and declared in the language with standard libraries,
basically using the semantics of exception handling and convention of
re-launching callbacks to implement thread-of-control flow-of-control,
that can be implemented in the synchronous and blocking for unit tests
and modules of the routine, making a great abstraction of flow-of-control.
Basically anything that _does_ block then makes for having its own
thread, whose only job is to block and when it unblocks, throw-toss the
re-launch toward the origin of the re-routine, and consume the next
blocking-task off the TQ. Yet, the re-routines and their servicing the
TQ only need one thread and never block. (And scale in core count and
automatically parallelize asynchronous requests according to satisfied
inputs.)
Mostly the idea of the re-routine is "in the language, it's just plain,
ordinary, synchronous routine".
Protocol Establishment

Each of these protocols is a combined sort of protocol, then according
to different modes, there's established a protocol, then data flows in
the protocol (in time).

stream-based (connections)
sockets, TCP/IP
sctp SCTP
message-based (datagrams)
datagrams, UDP

The idea is that connections can have state and session state, while,
messages do not.

Abstractly then there's just that connections make for reading from the
connection, or writing to the connection, byte-by-byte,
while messages make for receiving a complete message, or writing a
complete message. SCTP is sort of both.

A bit more concretely, the non-blocking or asynchronous or vector I/O,
means that when some bytes arrive the connection is readable, and while
the output buffer is not full a connection is writeable.

For messages it's that when messages arrive messages are readable, and
while the output buffer is not full messages are writeable.

Otherwise bytes or messages that pile up while not readable/writeable
pile up and in cases of limited resources get lost.

So, the idea is that when bytes arrive, whatever's servicing the I/O's
has that the connection has data to read, and, data to write.
The usual idea is that an abstract Reader thread, will give any or all
of the connections something to read, in an arbitrary order,
at an arbitrary rate, then the role of the protocol, is to consume the
bytes to read, thus releasing the buffers that the Reader writes to.

Inputting/Reading
Writing/Outputting

The most usual idea of client-server is that
client writes to server then reads from server, while,
server reads from client then writes to client.

Yet, that is just a mode, reads and writes are peer-peer,
reads and writes in any order, while serial according to
that bytes in the octet stream arrive in an order.

There isn't much consideration of the out-of-band,
about sockets and the STREAMS protocol, for
that bytes can arrive out-of-band.

So, the layers of the protocol, result that some layers of the protocol
don't know anything about the protocol, all they know is sequences of
bytes, and, whatever session state is involved to implement the codec,
of the layers of the protocol. All they need to know is that given that
all previous bytes are read/written, that the connection's state is
synchronized, and everything after is read/written through the layer.
Mostly once encryption or compression is set up it's never torn down.

Encryption, TLS
Compression, LZ77 (Deflate, gzip)

The layers of the protocol, result that some layers of the protocol,
only indicate state or conditions of the session.

SASL, Login, AuthN/AuthZ

So, for NNTP, a connection, usually enough starts with no layers,
then in the various protocols and layers, gets negotiated to get
established, combinations of the protocols and layers. Other protocols
expect to start with layers, or not, it varies.

Layering, then, either is in the protocol, to synchronize the session
then establish the layer in the layer protocol then maintain the layer
in the main protocol. TLS makes a handshake to establish an
encryption key for all the data, then the TLS layer only needs to
encrypt and decrypt the data by that key, while for Deflate, it's
usually the only option, then after it's set up as a layer,
everything read or written either way gets compressed.

client -> REQUEST
RESPONSE <- server

In some protocols these interleave

client -> REQUEST1
client -> REQUEST2

RESPONSE1A <- server
RESPONSE2A <- server
RESPONSE1B <- server
RESPONSE2B <- server

This then is called multiplexing/demultiplexing, for protocols like IMAP
and HTTP/2, and another name for multiplexer/demultiplexer is mux/demux.

So, for TLS, the idea is that usually most or all of the connections
will be using the same algorithms with different keys, and each
connection will have its own key, so the idea is to completely separate
TLS establishment from TLS cryptec (crypt/decrypt), so, the layer need
only key up the bytes by the connection's key, in their TLS frames.

Then, most of the connections will use compression, then the idea is
that the data is stored at rest compressed already and in a form that it
can be concatenated, and that similarly as constants are a bunch of the
textual content of the text-based protocol, they have compressed and
concatenable constants, with the idea that the Deflate compec
(comp/decomp) just passes those along concatenating them, or actively
compresses/decompresses buffers of bytes or as of sequences of bytes.
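A sketch of those concatenable Deflate chunks: raw deflate (nowrap) ended with SYNC_FLUSH and never finish() leaves each segment on a byte boundary with no final block, so segments stored at rest concatenate into one stream that inflates as the concatenated plaintext:

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Sketch of concatenable Deflate chunks: raw deflate (nowrap) ended
// with SYNC_FLUSH and never finish(), so each chunk ends on a byte
// boundary with no final block, and chunks concatenate into one
// stream that inflates as the concatenated plaintext.
class ConcatDeflate {
    static byte[] chunk(byte[] data) {
        Deflater d = new Deflater(Deflater.DEFAULT_COMPRESSION, true); // raw
        d.setInput(data);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[256];
        int n;
        // SYNC_FLUSH (not finish) emits all input, flushed to a byte boundary
        while ((n = d.deflate(buf, 0, buf.length, Deflater.SYNC_FLUSH)) > 0)
            out.write(buf, 0, n);
        d.end();
        return out.toByteArray();
    }

    static byte[] inflate(byte[] concatenated, int plainLength)
            throws DataFormatException {
        Inflater inf = new Inflater(true);                             // raw
        inf.setInput(concatenated);
        byte[] out = new byte[plainLength];
        int off = 0, n;
        while (off < out.length
                && (n = inf.inflate(out, off, out.length - off)) > 0)
            off += n;
        inf.end();
        return out;
    }
}
```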

The idea is that Readers and Writers deal with bytes at a time,
arbitrarily many, then that what results being passed around as the
data, is as much as possible handles to the data. So, according to the
protocol and layers, indicates the types, that the command routines, get
and return, so that the command routines can get specialized, when the
data at rest, is already layerized, and otherwise to adapt to the more
concrete abstraction, of the non-blocking, asynchronous, and vector I/O,
of what results the flow-machine.

When the library of the runtime of the framework of the language
provides the cryptec or compec, then, there's issues, when, it doesn't
make it so for something like "I will read and write you the bytes as of
making a TLS handshake, then return the algorithm and the key and that
will implement the cryptec", or, "compec, here's either some data or
handles of various types, send them through", it's to be figured out.
The idea for the TLS handshake, is basically to sit in the middle, i.e.
to read and write bytes as of what the client and server send, then
figuring out what is the algorithm and key and then just using that as
the cryptec. Then after TLS algorithm and key is established the rest is
sort of discarded, though there's some idea about state and session, for
the session key feature in TLS. The TLS 1.2 also includes comp/decomp,
though, it's figured that instead it's a feature of the protocol whether
it supports compression, point being that's combining layers, and to be
implemented about these byte-sequences/handles.

mux/demux
crypt/decrypt
comp/decomp
cod/decod

codec

So, the idea is to implement toward the concrete abstraction of
nonblocking vector I/O, while, remaining agnostic of that, so that all
sorts of the usual test routines, yet particularly the composition of
layers and establishment and upgrade of protocols, is to happen.

Then, from the byte sequences or messages as byte sequences, or handles
of byte sequences, results that in the protocol, the protocol either way
in/out has a given expected set of alternatives that it can read, then
as of derivative of those what it will write.

So, after the layers, which are agnostic of anything but byte-sequences,
and their buffers and framing and chunking and so on, then is the
protocol, or protocols, of the command-set and request/response
semantics, and ordering/session statefulness, and lack thereof.

Then, a particular machine in the flow-machine is as of the "Recognizer"
and "Parser", then what results "Annunciators" and "Legibilizers", as it
were, of what's usually enough called "Deserialization", reading off
from a serial byte-sequence, and "Serialization", writing off to a serial
byte-sequence, first the text of the commands or the structures in these
text-based protocols, the commands and their headers/bodies/payloads,
then the Objects in the object types of the languages of the runtime,
where then the routines of the servicing of the protocol, are defined in
types according to the domain types of the protocol (and their
representations as byte-sequences and handles).

As packets and bytes arrive in the byte-sequence, the Recognizer/Parser
detects when there's a fully-formed command, and its payload, after the
Mux/Demux Demultiplexer, has that the Demultiplexer represents any given
number of separate byte-sequences, then according to the protocol
as to their statefulness/session or orderedness/unorderedness.

So, the Demultiplexer is to Recognize/Parse from the combined input
byte-stream its chunks, that now the connection, has any number of
ordered/unordered byte-sequences, then usually that those are ephemeral
or come and go, while the connection endures, with the most usual notion
that there's only one stream and it's ordered in requests and ordered in
responses, then whether commands get pipelined and requests need not
await their responses (they're ordered), and whether commands are
numbers and their responses get associated with their command sequence
numbers (they're unordered and the client has its own mux/demux to
relate them).

So, the Recognizer/Parser, theoretically only gets a byte at a time, or
even none, and may get an entire fully-formed message (command), or not,
and may get more bytes than a fully-formed message, or not, and the
bytes may be a well-formed message, or not, and valid, or not.

Then the job of the Recognizer/Parser, is from the beginning of the
byte-sequence, to Recognize a fully-formed message, then to create an
instance of the command object related to the handle back through the
mux/demux to the multiplexer, called the attachment to the connection,
or the return address according to the attachment representing any
routed response and usually meaning that the attachment is the user-data
and any session data attached to the connection and here of the
mux/demux of the connection, the job of the Recognizer/Parser is to work
any time input is received, then to recognize and parse any number of
fully-formed messages from the input, create those Commands according to
the protocol, that the attachment includes the return destination, and,
thusly release those buffers or advance the marker on the Input
byte-sequence, so that the resources are freed, and later
Recognizing/Parsing starts where it left off.

The idea is that bytes arrive, the Recognizer/Parser has to determine
when there's a fully-formed message, consume that and service the
buffers of the byte-sequence, having created the derived command.
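As a hedged sketch of just that job (the Recognizer name and the CRLF line discipline here are assumptions for illustration), a minimal recognizer that may get a byte at a time, or more than a fully-formed message, or less, emits only complete commands, and keeps the unconsumed tail so the next feed starts where it left off:

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: a line-oriented Recognizer over a byte-sequence.
// Fully-formed (CRLF-terminated) commands are emitted and their bytes
// released; partial input stays pending for the next arrival.
final class Recognizer {
    private final StringBuilder pending = new StringBuilder();

    List<String> feed(byte[] chunk) {
        pending.append(new String(chunk, StandardCharsets.US_ASCII));
        List<String> commands = new ArrayList<>();
        int eol;
        while ((eol = pending.indexOf("\r\n")) >= 0) {
            commands.add(pending.substring(0, eol)); // fully-formed message
            pending.delete(0, eol + 2);              // advance the marker
        }
        return commands;                             // partial input remains pending
    }
}
```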

Now, commands are small, or so few words, then the headers/body/payload,
basically get larger and later unboundedly large. Then, the idea is that
the protocol, has certain modes or sub-protocols, about "switching
protocols", or modes, when basically the service of the routine changes
from recognizing and servicing the beginning to ending of a command, to
recognizing and servicing an arbitrarily large payload, or, for example,
entering a mode where streamed data arrives or whatever sort, then that
according to the length or content of the sub-protocol format, the
Recognizer's job includes that the sub-protocol-streaming, modes, get
into that "sub-protocols" is a sort of "switching protocols", the only
idea though being going into the sub-protocol then back out to the main
protocol, while "switching protocols" is involved in basically any
establishment or upgrade of the protocol, with regards to the stateful
connection (and not stateless messages, which always are according to
their established or simply some fixed protocol).

This way unboundedly large inputs, don't actually live in the buffers of
the Recognizers that service the buffers of the Inputters/Readers and
Multiplexers/Demultiplexers, instead define modes where they will be
streaming through arbitrarily large payloads.

Here for NNTP and so on, the payloads are not considered arbitrarily
large, though, it's sort of a thing that sending or receiving the
payload of each message, can be defined this way so that in very, very
limited resources of buffers, that the flow-machine keeps flowing.
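A hedged sketch of such a streaming mode (the "DATA <n>" framing is a made-up sub-protocol here, only for illustration): in command mode the recognizer buffers a line, and in payload mode it just passes chunks through, counting them down, so the unboundedly large payload never lives in the buffers:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: a recognizer with a "sub-protocol" streaming mode,
// entering on "DATA <n>" and returning to the main protocol after n bytes.
// Payload chunks flow through as events instead of accumulating in buffers.
final class StreamingRecognizer {
    private final StringBuilder line = new StringBuilder();
    private long remaining = 0;            // > 0 means payload (sub-protocol) mode

    List<String> feed(byte[] chunk) {
        List<String> events = new ArrayList<>();
        int i = 0;
        while (i < chunk.length) {
            if (remaining > 0) {                        // streaming sub-protocol
                int n = (int) Math.min(remaining, chunk.length - i);
                events.add("chunk:" + n);               // pass through, don't buffer
                remaining -= n;
                i += n;
                if (remaining == 0) events.add("end-payload");
            } else {                                    // main protocol: line mode
                char c = (char) chunk[i++];
                if (c == '\n') {
                    String cmd = line.toString().trim();
                    line.setLength(0);
                    if (cmd.startsWith("DATA "))        // enter the sub-protocol
                        remaining = Long.parseLong(cmd.substring(5));
                    events.add("command:" + cmd);
                } else line.append(c);
            }
        }
        return events;
    }
}
```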


Then, here, the idea is that these commands and their payloads, have
their outputs that are derived as a function of the inputs. It's
abstractly however this so occurs is the way it is. The idea here is
that the attachment+command+payload makes a re-routine task, and is
pushed onto a task queue (TQ). Then it's figured that the TQ represents
abstractly the execution of all the commands. Then, however many Task
Workers or TW, or the TQ that runs itself, get the oldest task from the
queue (FIFO) and run it. When it's complete, then there's a response
ready in byte-sequences or handles, these are returned to the attachment.

(The "attachment" usually just means a user or private datum associated
with the connection to identify its session with the connection
according to non-blocking I/O, here it also means the mux/demux
"remultiplexer" attachment, it's the destination of any response
associated with a stream of commands over the connection.)
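A hedged sketch of the TQ and its TW's (the names TaskQueue/Task and the canned "OK" response are assumptions for illustration), with the attachment modeled as the consumer that receives the response for routing back out:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

// Illustrative sketch: attachment+command makes a task on a FIFO task queue
// (TQ); task workers (TW) take the oldest task, compute the response, and
// hand it to the attachment, the mux/demux return address of the connection.
final class TaskQueue {
    record Task(String command, Consumer<String> attachment) {}

    private final BlockingQueue<Task> tq = new LinkedBlockingQueue<>(); // FIFO
    private final ExecutorService tws = Executors.newFixedThreadPool(2);

    void submit(String command, Consumer<String> attachment) {
        tq.add(new Task(command, attachment));
        tws.execute(() -> {
            Task t = tq.poll();                         // oldest task first
            if (t != null) t.attachment().accept("OK " + t.command());
        });
    }
    void shutdown() throws InterruptedException {
        tws.shutdown();
        tws.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```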

So, here then the TQ basically has the idea of the re-routine, that is
non-blocking and involves the asynchronous fulfillment of the routine in
the domain types of the domain of object types that the protocol adapts
as an adapter, that the domain types fulfill as adapted. Then for NNTP
that's like groups and messages and summaries and such, the objects. For
IMAP it's mailboxes and messages to read, for SMTP it's emails to send,
with various protocols in SMTP being separate protocols like DKIM or
what, for all these sorts protocols. For HTTP and HTTP/2 it's usual HTTP
verbs, usually HTTP 1.1 serial and pipelined requests over a connection,
in HTTP/2 multiplexed requests over a connection. Then "session" means
broadly that it may be across connections, what gets into the attachment
and the establishment and upgrade of protocol, that sessions are
stateful thusly, yet granularly, as to connections yet as to each request.


Then, the same sort of thing is the same sort of thing to back-end,
whatever makes for adapters, to domain types, that have their protocols,
and what results the O/I side to the I/O side, that the I/O side is the
server's client-facing side, while the O/I side is the
server-as-a-client-to-the-backend's, side.

Then, the O/I side is just the same sort of idea that in the
flow-machine, the protocols get established in their layers, so that all
through the routine, then the domain types are to get specialized to when
byte-sequences and handles are known well-formed in compatible
protocols, that the domain and protocol come together in their
definition, basically so it results that from the back-end is retrieved
for messages by their message-ID that are stored compressed at rest, to
result passing back handles to those, for example a memory-map range
offset to an open handle of a zip file that has the concatenable entry
of the message-Id from the groups' day's messages, or a list of those
for a range of messages, then the re-routine results passing the handles
back out to the attachment, which sends them right out.

So, this way there's that besides the TQ and its TW's, that those are to
never block or be long-running, that anything that's long-running is on
the O/I side, and has its own resources, buffers, and so on, where of
course all the resources here of this flow-machine are shared by all the
flow-machines in the flow-machine, in the sense that they are not shared
yet come from a common resource altogether, and are exclusive. (This
gets into the definition of "share" as with regards to "free to share,
or copy" and "exclusive to share, a.k.a. taking turns, not cutting in
line, and not stealing nor hoarding".)


Then on the O/I side or the backend side, it's figured the backend is
any kind of adapters, like DB adapters or FS adapters or WS adapters,
database or filesystem or webservice, where object-stores are considered
filesystem adapters. What that gets into is "pools" like client pools,
connection pools, resource pools, that a pool is usually enough
according to a session and the establishment of protocol, then with
regards to servicing the adapter and according to the protocol and the
domain objects that thusly implement the protocol, the backend side has
its own dedicated routines and TW's, or threads of execution, with
regards to that the backend side basically gets a callback+request and
the job is to invoke the adapter with the request, and invoke the
callback with the response, then whether for example the callback is
actually the original attachment, or it involves "bridging the unbounded
sub-protocol", what it means for the adapter to service the command.

Then the adapter is usually either provided as with intermediate or
domain types, or, for example it's just another protocol flow machine
and according to the connections or messaging or mux/demux or
establishing and upgrading layers and protocols, it basically works the
same way as above in reverse.
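A hedged sketch of that O/I side (the Adapter/Backend names are assumptions for illustration): the backend gets a callback+request, its own dedicated workers invoke the adapter with the request and the callback with the response, so anything long-running lives here and not on the TQ's workers:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

// Illustrative sketch: a DB / FS / WS adapter behind the O/I side.
interface Adapter { String invoke(String request); }

final class Backend {
    private final Adapter adapter;
    private final ExecutorService workers = Executors.newFixedThreadPool(4);
    Backend(Adapter adapter) { this.adapter = adapter; }

    void service(String request, Consumer<String> callback) {
        // the possibly long-running call lives here, not on the front-end TWs
        workers.execute(() -> callback.accept(adapter.invoke(request)));
    }
    void shutdown() { workers.shutdown(); }
}
```

Whether the callback is the original attachment or something bridging a sub-protocol is then just which Consumer gets passed in.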

Here "to service" is the usual infinitive that for the noun means "this
machine provides a service" yet as a verb that service means to operate
according to the defined behavior of the machine in the resources of the
machine to meet the resource needs of the machine's actions in the
capabilities and limits of the resources of the machine, where this "I/O
flow-machine: a service" is basically one "node" or "process" in a usual
process model, allocated its own quota of resources according to the
process and its environment model in the runtime in the system, and
that's it. So, there's servicing as the main routine, then also what it
means the maintenance servicing or service of the extended routine.
Then, for protocols it's "implement this protocol according to its
standards according to the resources in routine".


You know, I don't know where they have one of these anywhere, ....
Ross Finlayson
2024-04-27 16:01:43 UTC
Permalink
Post by Ross Finlayson
Post by Ross Finlayson
Post by Ross Finlayson
Well I've been thinking about the re-routine as a model of cooperative
multithreading,
then thinking about the flow-machine of protocols
NNTP
IMAP <-> NNTP
HTTP <-> IMAP <-> NNTP
Both IMAP and NNTP are session-oriented on the connection, while,
HTTP, in terms of session, has various approaches in terms of HTTP 1.1
and connections, and the session ID shared client/server.
The re-routine idea is this, that each kind of method, is memoizable,
and, it memoizes, by object identity as the key, for the method, all
its callers, how this is like so.
interface Reroutine1 {
   Result1 rr1(String a1) {
      Result2 r2 = reroutine2.rr2(a1);
      Result3 r3 = reroutine3.rr3(r2);
      return result(r2, r3);
   }
}
The idea is that the executor, when it's submitted a reroutine,
when it runs the re-routine, in a thread, then it puts in a ThreadLocal,
the re-routine, so that when a re-routine it calls, returns null as it
starts an asynchronous computation for the input, then when
it completes, it submits to the executor the re-routine again.
Then rr1 runs through again, retrieving r2 which is memoized,
invokes rr3, which throws, after queuing to memoize and
resubmit rr1, when that calls back to resubmit r1, then rr1
routines, signaling the original invoker.
Then it seems each re-routine basically has an instance part
and a memoized part, and that it's to flush the memo
after it finishes, in terms of memoizing the inputs.
Result1 rr1(String a1) {
   // if a1 is in the memo, return for it
   // else queue for it and carry on
}
What is a re-routine?
It's a pattern for cooperative multithreading.
It's sort of a functional approach to functions and flow.
It has a declarative syntax in the language with usual
flow-of-control.
So, it's cooperative multithreading so it yields?
No, it just quits, and expects to be called back.
So, if it quits, how does it complete?
The entry point to re-routine provides a callback.
Re-routines only return results to other re-routines,
It's the default callback. Otherwise they just callback.
So, it just quits?
If a re-routine gets called with a null, it throws.
If a re-routine gets a null, it just continues.
If a re-routine completes, it callbacks.
So, can a re-routine call any regular code?
Yeah, there are some issues, though.
So, it's got callbacks everywhere?
Well, it's just got callbacks implicitly everywhere.
So, how does it work?
Well, you build a re-routine with an input and a callback,
you call it, then when it completes, it calls the callback.
Then, re-routines call other re-routines with the argument,
and the callback's in a ThreadLocal, and the re-routine memoizes
all of its return values according to the object identity of the inputs,
then when a re-routine completes, it calls again with another ThreadLocal
indicating to delete the memos, following the exact same flow-of-control
only deleting the memos going along, until it results all the memos in
the re-routines for the interned or ref-counted input are deleted,
then the state of the re-routine is de-allocated.
So, it's sort of like a monad and all in pure and idempotent functions?
Yeah, it's sort of like a monad and all in pure and idempotent functions.
So, it's a model of cooperative multithreading, though with no yield,
and callbacks implicitly everywhere?
Yeah, it's sort of figured that a called re-routine always has a
callback in the ThreadLocal, because the runtime has pre-emptive
multithreading anyways, that the thread runs through its re-routines in
their normal declarative flow-of-control with exception handling, and
whatever re-routines or other pure monadic idempotent functions it
calls, throw when they get null inputs.
Also it sort of doesn't have primitive types, Strings must always
be interned, all objects must have a distinct identity w.r.t. ==, and
null is never an argument or return value.
So, what does it look like?
interface Reroutine1 {
   Result1 rr1(String a1) {
      Result2 r2 = reroutine2.rr2(a1);
      Result3 r3 = reroutine3.rr3(r2);
      return result(r2, r3);
   }
}
So, I expect that to return "result(r2, r3)".
Well, that's synchronous, and maybe blocking, the idea is that it
calls rr2, gets a1, and rr2 constructs with the callback of rr1 and it's
own callback, and a1, and makes a memo for a1, and invokes whatever is
its implementation, and returns null, then rr1 continues and invokes rr3
with r2, which is null, so that throws a NullPointerException, and rr1
quits.
So, ..., that's cooperative multithreading?
Well you see what happens is that rr2 invoked another re-routine or
end routine, and at some point it will get called back, and that will
happen over and over again until rr2 has an r2, then rr2 will memoize
(a1, r2), and then it will callback rr1.
Then rr1 had quit, it runs again, this time it gets r2 from the
(a1, r2) memo in the monad it's building, then it passes a non-null r2
to rr3, which proceeds in much the same way, while rr1 quits again until
rr3 calls it back.
So, ..., it's non-blocking, because it just quits all the time, then
happens to run through the same paces filling in?
That's the idea, that re-routines are responsible to build the
monad and call-back.
So, can I just implement rr2 and rr3 as synchronous and blocking?
Sure, they're interfaces, their implementation is separate. If
they don't know re-routine semantics then they're just synchronous and
blocking. They'll get called every time though when the re-routine gets
called back, and actually they need to know the semantics of returning
an Object or value by identity, because, calling equals() to implement
Memo usually would be too much, where the idea is to actually function
only monadically, and that given same Object or value input, must return
same Object or value output.
So, it's sort of an approach as a monadic pure idempotency?
Well, yeah, you can call it that.
So, what's the point of all this?
Well, the idea is that there are 10,000 connections, and any time
one of them demultiplexes off the connection an input command message,
then it builds one of these with the response input to the demultiplexer
on its protocol on its connection, on the multiplexer to all the
connections, with a callback to itself. Then the re-routine is launched
and when it returns, it calls-back to the originator by its
callback-number, then the output command response writes those back out.
The point is that there are only as many Theads as cores so the
goal is that they never block,
and that the memos make for interning Objects by value, then the goal is
mostly to receive command objects and handles to request bodies and
result objects and handles to response bodies, then to call-back with
those in whatever serial order is necessary, or not.
So, won't this run through each of these re-routines umpteen times?
Yeah, you figure that the runtime of the re-routine is on the order
of n^2 the order of statements in the re-routine.
So, isn't that terrible?
Well, it doesn't block.
So, it sounds like a big mess.
Yeah, it could be. That's why to avoid blocking and callback
semantics, is to make monadic idempotency semantics, so then the
re-routines are just written in normal synchronous flow-of-control, and
their well-defined behavior is exactly according to flow-of-control
including exception-handling.
There's that and there's basically it only needs one Thread, so,
less Thread x stack size, for a deep enough thread call-stack. Then the
idea is about one Thread per core, figuring for the thread to always be
running and never be blocking.
So, it's just normal flow-of-control.
Well yeah, you expect to write the routine in normal
flow-of-control, and to test it with synchronous and in-memory editions
that just run through synchronously, and that if you don't much care if
it blocks, then it's the same code and has no semantics about the
asynchronous or callbacks actually in it. It just returns when it's done.
So what's the requirements of one of these again?
Well, the idea is, that, for a given instance of a re-routine, it's
an Object, that implements an interface, and it has arguments, and it
has a return value. The expectation is that the re-routine gets called
with the same arguments, and must return the same return value. This
way later calls to re-routines can match the same expectation, same/same.
Also, if it gets different arguments, by Object identity or
primitive value, the re-routine must return a different return value,
those being same/same.
The re-routine memoizes its arguments by its argument list, Object
or primitive value, and a given argument list is same if the order and
types and values of those are same, and it must return the same return
value by type and value.
So, how is this cooperative multithreading unobtrusively in
flow-of-control again?
Here for example the idea would be, rr2 quits and rr1 continues, rr3
quits and rr1 continues, then reaching rr4, rr4 throws and rr1 quits.
When rr2's or rr3's memo-callback completes, then it calls-back rr1; as
those come in, at some point rr4 will be fulfilled, and thus rr4 will
quit and rr1 will quit. When rr4's callback completes, then it will
call-back rr1, which will finally complete, and then call-back whatever
called r1. Then rr1 runs itself through one more time to
delete or decrement all its memos.
interface Reroutine1 {
   Result1 rr1(String a1) {
      Result2 r2 = reroutine2.rr2(a1);
      Result3 r3 = reroutine3.rr3(a1);
      Result4 r4 = reroutine4.rr4(a1, r2, r3);
      return Result1.r4(a1, r4);
   }
}
The idea is that it doesn't block when it launches rr2 and rr3, until
such time as it just quits when it tries to invoke rr4 and gets a
resulting NullPointerException, then eventually rr4 will complete and be
memoized and call-back rr1, then rr1 will be called-back and then
complete, then run itself through to delete or decrement the ref-count
of all its memo-ized fragmented monad respectively.
Thusly it's cooperative multithreading by never blocking and always just
launching callbacks.
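A compressed, hedged sketch of that discipline (class and method names are assumptions here, and the memo is keyed by equals() via ConcurrentHashMap for brevity where the posts key strictly by identity): a step either returns a memoized result, or returns null after launching its asynchronous fulfillment; the caller's null-dereference throws, the task quits, and completion re-submits the whole task to run through the same flow-of-control again:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;

// Illustrative sketch of the re-routine discipline: memoize, return null,
// NPE quits the task, completion re-submits it.
final class ReRoutineExecutor {
    private final ExecutorService pool = Executors.newSingleThreadExecutor();
    private final Map<Object, Object> memo = new ConcurrentHashMap<>();

    // One re-routine step: a memo hit returns the value; a miss starts the
    // async work, returns null now, and re-submits the task on completion.
    <A, R> R step(A arg, Function<A, R> slowWork, Runnable resubmit) {
        @SuppressWarnings("unchecked")
        R hit = (R) memo.get(arg);
        if (hit != null) return hit;
        CompletableFuture.supplyAsync(() -> slowWork.apply(arg))
            .thenAccept(r -> { memo.put(arg, r); resubmit.run(); });
        return null;                        // caller will throw NPE and quit
    }

    void run(Runnable task) {
        pool.execute(() -> {
            try { task.run(); }
            catch (NullPointerException quit) { /* quits; will be re-called */ }
        });
    }
    void shutdown() { pool.shutdown(); }
}
```

The task body itself stays plain declarative flow-of-control; the first pass through quits at the null, and the re-run finds the memo filled in.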
There's this System.identityHashCode() method and then there's a notion
of Object pools and interning Objects then as for about this way that
it's about numeric identity instead of value identity, so that when
making memo's that it's always "==" and for a HashMap with
System.identityHashCode() instead of ever calling equals(), when calling
equals() is more expensive than calling == and the same/same
memo-ization is about Object numeric value or the primitive scalar
value, those being same/same.
https://docs.oracle.com/javase/8/docs/api/java/lang/System.html#identityHashCode-java.lang.Object-
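A hedged sketch of that identity-keyed memo (the IdentityMemo name is an assumption for illustration), using java.util.IdentityHashMap, which hashes by System.identityHashCode and compares keys by "==" rather than ever calling equals(), so interned/same inputs hit the memo while equal-but-distinct inputs are computed again:

```java
import java.util.IdentityHashMap;
import java.util.Map;
import java.util.function.Function;

// Illustrative sketch: a memo keyed by Object identity ("=="), never equals().
final class IdentityMemo {
    private final Map<Object, Object> memo = new IdentityHashMap<>();
    int computations = 0;                  // counts actual invocations

    Object get(Object key, Function<Object, Object> compute) {
        return memo.computeIfAbsent(key, k -> {
            computations++;
            return compute.apply(k);
        });
    }
}
```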
So, you figure to return Objects to these connections by their session
and connection and mux/demux in these callbacks and then write those out?
Well, the idea is to make it so that according to the protocol, the
back-end sort of knows what makes a handle to a datum of the sort, given
the protocol and the protocol and the protocol, and the callback is just
these handles, about what goes in the outer callbacks or outside the
re-routine, those can be different/same. Then the single writer thread
servicing the network I/O just wants to transfer those handles, or, as
necessary through the compression and encryption codecs, then write
those out, well making use of the java.nio for scatter/gather and vector
I/O in the non-blocking and asynchronous I/O as much as possible.
So, that seems a lot of effort to just passing the handles, ....
Well, I don't want to write any code except normal flow-of-control.
So, this same/same bit seems onerous, as long as different/same has a
ref-count and thus the memo-ized monad-fragment is maintained when all
sorts of requests fetch the same thing.
Yeah, maybe you're right. There's much to be gained by re-using monadic
pure idempotent functions yet only invoking them once. That gets into
value equality besides numeric equality, though, with regards to going
into re-routines and interning all Objects by value, so that inside and
through it's all "==" and System.identityHashCode, the memos, then about
the ref-counting in the memos.
So, I suppose you know HTTP, and about HTTP/2 and IMAP and NNTP here?
Yeah, it's a thing.
So, I think this needs a much cleaner and well-defined definition, to
fully explore its meaning.
Yeah, I suppose. There's something to be said for reading it again.
ReRoutines: monadic functional non-blocking asynchrony in the language
Implementing a sort of Internet protocol server, it sort of has three or
four kinds of machines.
flow-machine: select/epoll hardware driven I/O events
protocol-establishment: setting up and changing protocol (commands,
encryption/compression)
protocol-coding: block coding in encryption/compression and wire/object
commands/results
routine: inside the objects of the commands of the protocol,
commands/results
Then, it often looks sort of like
flow <-> protocol <-> routine <-> protocol <-> flow
On either outer side of the flow is a connection, it's a socket or the
receipt or sending of a datagram, according to the network interface and
select/epoll.
The establishment of a protocol looks like
connection/configuration/commencement/conclusion, or setup/teardown.
Protocols get involved renegotiation within a protocol, and for example
upgrade among protocols. Then the protocol is setup and established.
The idea is that a protocol's coding is in three parts for
coding/decoding, compression/decompression, and (en)cryption/decryption,
or as it gets set up.
flow->decrypt->decomp->decod->routine->cod->comp->crypt->flow-v
flow<-crypt<-comp<-cod<-routine<-decod<-decomp<-decrypt<-flow<-
Whenever data arrives, the idea goes, is that the flow is interpreted
according to the protocol, resulting commands, then the routine derives
results from the commands, as by issuing others, in their protocols, to
the backend flow. Then, the results get sent back out through the
protocol, to the frontend, the clients of what it serves the protocol
the server.
The idea is that there are about 10,000 connections at a time, or more
or less.
flow <-> protocol <-> routine <-> protocol <-> flow
flow <-> protocol <-> routine <-> protocol <-> flow
flow <-> protocol <-> routine <-> protocol <-> flow
...
Then, the routine in the middle, has that there's one processor, and on
the processor are a number of cores, each one independent. Then, the
operating system establishes that each of the cores, has any number of
threads-of-control or threads, and each thread has the state of where it
is in the callstack of routines, and the threads are preempted so that
multithreading, that a core runs multiple threads, gives each thread
some running from the entry to the exit of the thread, in any given
interval of time. Each thread-of-control is thusly independent, while it
must synchronize with any other thread-of-control, to establish common
or mutual state, and threads establish taking turns by mutual exclusion,
called "mutex".
Into and out of the protocol, coding, is either a byte-sequence or
block, or otherwise the flow is a byte-sequence, that being serial,
however the protocol multiplexes and demultiplexes messages, the
commands and their results, to and from the flow.
Then the idea is that what arrives to/from the routine, is objects in
the protocol, or handles to the transport of byte sequences, in the
protocol, to the flow.
A usual idea is that there's a thread that services the flow, where, how
it works is that a thread blocks waiting for there to be any I/O,
input/output, reading input from the flow, and writing output to the
flow. So, mostly the thread that blocks has that there's one thread that
blocks on input, and when there's any input, then it reads or transfers
the bytes from the input, into buffers. That's its only job, and only
one thread can block on a given select/epoll selector, which is any
given number of ports, the connections, the idea being that it just
blocks until select returns for its keys of interest, it services each
of the I/O's by copying from the network interface's buffers into the
program's buffers, then other threads do the rest.
So, if a thread results waiting at all for any other action to complete
or be ready, it's said to "block". While a thread is blocked, the CPU or
core just skips it in scheduling the preemptive multithreading, yet it
still takes some memory and other resources and is in the scheduler of
the threads.
The idea that the I/O thread, ever blocks, is that it's a feature of
select/epoll that hardware results waking it up, with the idea that
that's the only thread that ever blocks.
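A hedged sketch of that one blocking thread's loop body, with java.nio (the IoLoop name and single-pass shape are assumptions for illustration; a real loop would run this forever and dispatch per-key): block in select(), copy the ready bytes into the program's buffer, hand them off, and that's its only job:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ReadableByteChannel;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.Iterator;

// Illustrative sketch: the one thread that ever blocks, in select(), woken
// by the hardware; it only transfers ready bytes into program buffers.
final class IoLoop {
    private final Selector selector;
    IoLoop(Selector selector) { this.selector = selector; }

    ByteBuffer selectOnce() throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(8192);
        selector.select();                              // the only blocking call
        Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
        while (keys.hasNext()) {
            SelectionKey key = keys.next();
            keys.remove();
            if (key.isReadable())
                ((ReadableByteChannel) key.channel()).read(buf);
        }
        buf.flip();
        return buf;                                     // hand off to the TQ side
    }
}
```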
So, for the other threads, in the decryption/decompression/decoding and
coding/compression/cryption, the idea is that a thread, runs through
those, then returns what it's doing, and joins back to a limited pool of
threads, with a usual idea of there being 1 core : 1 thread, so that
multithreading is sort of simplified, because as far as the system
process is concerned, it has a given number of cores and the system
preemptively multithreads it, and as far as the virtual machine is
concerned, it has a given number of cores and the virtual machine
preemptively multithreads its threads, about the thread-of-control, in
the flow-of-control, of the thing.
A usual way that the routine multiplexes and demultiplexes objects in the
protocol from a flow's input back to a flow's output, has that the
thread-per-connection model has that a single thread carries out the
entire task through the backend flow, blocking along the way, until it
results joining after writing back out to its connection. Yet, that has
a thread per each connection, and threads use scheduling and heap
resources. So, here thread-per-connection is being avoided.
Then, a usual idea of the tasks, is that as I/O is received and flows
into the decryption/decompression/decoding, then what's decoded, results
the specification of a task, the command, and the connection, where to
return its result. The specification is a data structure, so it's an
object or Object, then. This is added to a queue of tasks, where
"buffers" represent the ephemeral storage of content in transport the
byte-sequences, while, the queue is as usually a first-in/first-out
(FIFO) queue also, of tasks.
Then, the idea is that each of the cores consumes task specifications
from the task queue, performs them according to the task specification,
then the results are written out, as coded/compressed/crypted, in the
protocol.
So, to avoid the threads blocking at all, introduces the idea of
"asynchrony" or callbacks, where the idea is that the "blocking" and
"synchronous" has that anywhere in the threads' thread-of-control
flow-of-control, according to the program or the routine, it is current
and synchronous, the value that it has, then with regards to what it
returns or writes, as the result. So, "asynchrony" is the idea that
there's established a callback, or a place to pause and continue, then a
specification of the task in the protocol is put to an event queue and
executed, or from servicing the O/I's of the backend flow, that what
results from that, has the context of the callback and returns/writes to
the relevant connection, its result.
I -> flow -> protocol -> routine -> protocol -> flow -> O -v
O <- flow <- protocol <- routine <- protocol <- flow <- I <-
The idea of non-blocking then, is that a routine either provides a
result immediately available, and is non-blocking, or, queues a task
what results a callback that provides the result eventually, and is
non-blocking, and never invokes any other routine that blocks, so is
non-blocking.
This way a thread, executing tasks, always runs through a task, and thus
services the task queue or TQ, so that the cores' threads are always
running and never blocking. (Besides the I/O and O/I threads which block
when there's no traffic, and usually would be constantly woken up and
not waiting blocked.) This way, the TQ threads, only block when there's
nothing in the TQ, or are just deconstructed, and reconstructed, in a
"pool" of threads, the TQ's executor pool.
Enter the ReRoutine
The idea of a ReRoutine, a re-routine, is that it is a usual procedural
implementation as if it were synchronous, and agnostic of callbacks.
It is named after "routine" and "co-routine". It is a sort of co-routine
that builds a monad and is aware its originating caller, re-caller, and
callback, or, its re-routine caller, re-caller, and callback.
The idea is that there are callbacks implicitly at each method boundary,
and that nulls are reserved values to indicate the result or lack
thereof of re-routines, so that the code has neither callbacks nor any
nulls.
The originating caller has that the TQ has a task specification: the
session+attachment of the client in the protocol where to write the
output, and the command, then the state of the monad of the task, which
lives on the heap with the task specification and task object. When one
of the TQ's consumers or executors, a thread, picks up the task, it
picks up or builds ("originates") the monad state, which is the
partial state of the re-routine and a memo of the partial state of the
re-routine, and installs this in the thread local storage or
ThreadLocal, for the duration of the invocation of the re-routine. Then
the thread enters the re-routine, which proceeds until it would block,
where instead it queues a command/task with callback to re-call it and
re-launch it, and throws a NullPointerException and quits/returns.
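The quit-and-relaunch cycle can be sketched as below; the memo, fill, and step names are illustrative stand-ins, not a fixed API:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.*;

// Sketch of quit/re-launch: a re-routine step consults its memo; a null
// memo value means "pending", so it queues the asynchronous fill and
// throws NullPointerException to quit, expecting to be re-launched later.
public class ReLaunchDemo {
    static final AtomicReference<String> memo = new AtomicReference<>();
    static final BlockingQueue<Runnable> TQ = new LinkedBlockingQueue<>();

    static String step() {
        String v = memo.get();
        if (v == null) {                       // unsatisfied input
            TQ.add(() -> memo.set("result"));  // queue fill + re-launch
            throw new NullPointerException();  // quit without blocking
        }
        return v;                              // satisfied: plain return
    }

    public static String run() throws Exception {
        while (true) {
            try { return step(); }                                // re-enter
            catch (NullPointerException quit) { TQ.take().run(); } // re-launch
        }
    }
}
```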
This happens recursively and iteratively in the re-routine implemented
as re-routines, each re-routine updates the partial state of the monad,
then that as a re-routine completes, it re-launches the calling
re-routine, until the original re-routine completes, and it calls the
original callback with the result.
This way the re-routine's method body, is written as plain declarative
procedural code, the flow-of-control, is exactly as if it were
synchronous code, and flow-of-control is exactly as if written in the
language with no callbacks and never nulls, and exception-handling as
exactly defined by the language.
As the re-routine accumulates the partial results, they live on the
heap, in the monad, as a member of the originating task's object, the
task in the task queue. This is always added back to the queue as one of
the pending results of a re-routine, so it stays referenced as an object
on the heap, then that as it is completed and the original re-routine
returns, then it's no longer referenced and the garbage-collector can
reclaim it from the heap or the allocator can delete it.
Well, for the re-routine, I sort of figure there's a Callstack and a
Callback type
class Callstack {
    Stack<Callback> callstack;
}

interface Callback {
    void callback() throws Exception;
}

and then a placeholder sort of type for Callflush

class Callflush {
    Callstack callstack;
}
with the idea that the presence in ThreadLocals is to be sorted out,
about a kind of ThreadLocal static pretty much.
With not returning null and for memoizing call-graph dependencies,
there's basically for an "unvoid" type.
class unvoid {
}
Then it's sort of figure that there's an interface with some defaults,
with the idea that some boilerplate gets involved in the Memoization.
interface Caller {}
interface Callee {}
class Callmemo {
    void memoize(Caller caller, Object[] args);
    void flush(Caller caller);
}
Then it seems that the Callstack should instead be of a Callgraph, and
then what's maintained from call to call is a Callpath, and then what's
memoized is all kept with the Callgraph, then with regards to objects on
the heap and their distinctness, only being reachable from the
Callgraph, leaving less work for the garbage collector, to maintain the
heap.
The interning semantics would still be on the class level, or for
constructor semantics, as with regards to either interning Objects for
uniqueness, or that otherwise they'd be memoized, with the key being the
Callpath, and the initial arguments into the Callgraph.
Then the idea seems that the ThreaderCaller, establishes the Callgraph
with respect to the Callgraph of an object, installing it on the thread,
otherwise attached to the Callgraph, with regards to the ReRoutine.
About the ReRoutine, it's starting to come together as an idea, what is
the apparatus for invoking re-routines, that they build the monad of the
IOE's (inputs, outputs, exceptions) of the re-routines in their
call-graph, in terms of ThreadLocals of some ThreadLocals that callers
of the re-routines, maintain, with idea of the memoized monad along the
way, and each original re-routine.
class IOE <O, E extends Exception> {
    Object[] input;
    O output;
    E exception;
}
So the idea is that there are some ThreadLocal's in a static ThreadGlobal
public class ThreadGlobals {
    public static ThreadLocal<MonadMemo> monadMemo;
}
where callers or originators or ReRoutines, keep a map of the Runnables
or Callables they have, to the MonadMemo's,
class Originator {
    Map<? extends ReRoutineMapKey, MonadMemo> monadMemoMap;
}
then when it's about to invoke a Runnable, if it's a ReRoutine, then it
either retrieves the MonadMemo or makes a new one, and sets it on the
ThreadLocal, then invokes the Runnable, then clears the ThreadLocal.
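That set-invoke-clear pattern around the ThreadLocal can be sketched as below; MonadMemo here is a stand-in (a List of strings), and the names are illustrative:

```java
import java.util.*;

// Sketch of installing a memo on a ThreadLocal for the duration of an
// invocation: set it, invoke the Runnable, then clear it so the pooled
// thread can be reused without leaking state.
public class ThreadGlobalsDemo {
    public static final ThreadLocal<List<String>> monadMemo = new ThreadLocal<>();

    public static List<String> invokeWithMemo(Runnable r) {
        List<String> memo = new ArrayList<>();
        monadMemo.set(memo);                // install for this invocation
        try { r.run(); }                    // the routine reads the ThreadLocal
        finally { monadMemo.remove(); }     // clear, even on throw-quit
        return memo;
    }
}
```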
Then a MonadMemo, pretty simply, is a List of IOE's: when the
ReRoutine runs through the callgraph, the callstack is indicated by a
tree of integers, the stack path in the ReRoutine, so that any
ReRoutine that calls ReRoutines A/B/C points to an IOE that it finds in
the memo, then its default behavior is to return its memoized value,
and otherwise to make the callback that fills its memo and re-invokes
all the way back the Original routine, or just its own entry point.
This is basically that the Originator, when the ReRoutine quits out,
sort of has that any ReRoutine it originates, also gets filled up by the
Originator.
So, then the Originator sort of has a map to a ReRoutine, then for any
Path, the Monad, so that when it sets the ThreadLocal with the
MonadMemo, it also sets the Path for the callee, then launches it again
when its callback returns, to set its memo and relaunch it, then back
up the path stack to the original re-routine.
One of the issues here is "automatic parallelization". What I mean by
that is that the re-routine just goes along and when it gets nulls
meaning "pending" it just continues along, then expects
NullPointerExceptions as "UnsatisfiedInput", to quit, figuring it gets
relaunched when its input is satisfied.
This way then when routines serially don't depend on each others'
outputs, then they all get launched apiece, parallelizing.
Then, I wonder about usual library code, basically about Collections and
Streams, and the usual sorts of routines that are applied to the
arguments, and how to basically establish that the rule of re-routine
code is that anything that gets a null must throw a
NullPointerException, so the re-routine will quit until the arguments
are satisfied, the inputs to library code. Then with the Memo being
stored in the MonadMemo, it's figured that will work out regardless of the
Objects' or primitives' value, with regards to Collections and Stream
code and after usual flow-of-control in Iterables for the for loops, or
whatever other application library code, that they will be run each time
the re-routine passes their section with satisfied arguments, then as
with regards to, that the Memo is just whatever serial order the
re-routine passes, not needing to lookup by Object identity which is
otherwise part of an interning pattern.
Map<String, String> rr1(String s1) {
    List<String> l1 = rr2.get(s1);
    Map<String, String> m1 = new LinkedHashMap<>();
    l1.stream().forEach(s -> m1.put(s, rr3.get(s)));
    return m1;
}
See what I figure is that the order of the invocations to rr3.get() is
serial, so it really only needs to memoize its OE, Output|Exception,
then about that putting null values in the Map, and having to check the
values in the Map for null values, and otherwise to make it so that the
semantics of null and NullPointerException, result that satisfying
inputs result calls, and unsatisfying inputs result quits, figuring
those unsatisfying inputs are results of unsatisfied outputs, that will
be satisfied when the callee gets populated its memo and makes the callback.
If the order of invocations is out-of-order, gets again into whether the
Object/primitive by value needs to be the same each time, IOE, about the
library code in Collections, Streams, parallelStream, and Iterables, and
basically otherwise that any kind of library code, should throw
NullPointerException if it gets an "unexpected" null or what doesn't
fulfill it.
The idea though that rr3 will get invoked say 1000 times with the rr2's
result, those each make their call, then re-launch 1000 times, has that
it's figured that the Executor, or Originator, when it looks up and
loads the "ReRoutineMapKey", is to have the count of those and whether
the count is fulfilled, then to no-op later re-launches of the
call-backs, after all the results are populated in the partial monad memo.
Then, there's perhaps instead as that each re-routine just checks its
input or checks its return value for nulls, those being unsatisfied.
(The exception handling thoroughly or what happens when rr3 throws and
this kind of thing is involved thoroughly in library code.)
The idea is it remains correct if the worst thing nulls do is throw
NullPointerException, because that's just a usual quit and means another
re-launch is coming up, and it automatically queues each of the
derivations for asynchronous parallel invocation while never blocking.
It's figured that re-routines check their inputs for nulls, and throw
quit, and check their inputs for library container types, and checking
any member of a library container collection for null, to throw quit,
and then it will result that the automatic asynchronous parallelization
proceeds, while the re-routines are never blocking, there's only as much
memory on the heap of the monad as would be in the lifetime of the
original re-routine, and whatever re-calls or re-launches of the
re-routine established local state in local variables and library code,
would come in and out of scope according to plain stack unwinding.
Then there's still the perceived deficiency that the re-routine's method
body will be run many times, yet it's only run as many times as result
throwing-quit, when it reaches where its argument to the re-routine or
result value isn't yet satisfied yet is pending.
It would re-run the library code any number of times, until it results
all non-nulls, then the resulting satisfied argument to the following
re-routines, would be memo-ized in the monad, and the return value of
the re-routine thus returning immediately its value on the partial monad.
This way each re-call of the re-routine, mostly encounters its own monad
results in constant time, and throws-quit or gets thrown-quit only when
it would be unsatisfying, with the expectation that whatever
throws-quit, either NullPointerException or extending
NullPointerException, will have a pending callback, that will queue on a
TQ, the task specification to re-launch and re-enter the original or
derived, re-routine.
The idea is sort of that it's sort of, Java with non-blocking I/O and
ThreadLocal (1.7+, not 17+), or you know, C/C++ with non-blocking I/O
and thread local storage, then for the abstract or interface of the
re-routines, how it works out that it's a usual sort of model of
co-operative multithreading, the re-routine, the routine "in the language".
Then it's great that the routine can be stubbed or implemented agnostic
of asynchrony, and declared in the language with standard libraries,
basically using the semantics of exception handling and convention of
re-launching callbacks to implement thread-of-control flow-of-control,
that can be implemented synchronously and blocking for unit tests
and modules of the routine, making a great abstraction of
flow-of-control.
Basically anything that _does_ block then makes for having its own
thread, whose only job is to block and when it unblocks, throw-toss the
re-launch toward the origin of the re-routine, and consume the next
blocking-task off the TQ. Yet, the re-routines and their servicing the
TQ only need one thread and never block. (And scale in core count and
automatically parallelize asynchronous requests according to satisfied
inputs.)
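The "anything that does block gets its own thread" idea can be sketched like this; the blocking sleep stands in for blocking I/O, and the names are illustrative:

```java
import java.util.concurrent.*;

// Sketch of the blocking bridge: a dedicated thread's only job is to
// block, and when it unblocks it tosses the re-launch back onto the TQ,
// which the (never-blocking) worker services.
public class BlockingBridgeDemo {
    static final BlockingQueue<Runnable> TQ = new LinkedBlockingQueue<>();

    public static String fetch() throws Exception {
        CompletableFuture<String> out = new CompletableFuture<>();
        Thread blocker = new Thread(() -> {
            try { Thread.sleep(10); }           // stands in for blocking I/O
            catch (InterruptedException ignored) {}
            TQ.add(() -> out.complete("done")); // toss re-launch toward origin
        });
        blocker.start();
        TQ.take().run();                        // the TQ worker, inline here
        return out.get();
    }
}
```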
Mostly the idea of the re-routine is "in the language, it's just plain,
ordinary, synchronous routine".
Protocol Establishment
Each of these protocols is a combined sort of protocol, then according
to different modes, there's established a protocol, then data flows in
the protocol (in time).
stream-based (connections)
  sockets, TCP/IP
  SCTP
message-based (datagrams)
  datagrams, UDP
The idea is that connections can have state and session state, while,
messages do not.
Abstractly then there's just that connections make for reading from the
connection, or writing to the connection, byte-by-byte,
while messages make for receiving a complete message, or writing a
complete message. SCTP is sort of both.
A bit more concretely, the non-blocking or asynchronous or vector I/O,
means that when some bytes arrive the connection is readable, and while
the output buffer is not full a connection is writeable.
For messages it's that when messages arrive messages are readable, and
while the output buffer is not full messages are writeable.
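Readiness-based I/O as described can be sketched with Java NIO; a Pipe stands in for a socket here, and the class name is illustrative:

```java
import java.nio.ByteBuffer;
import java.nio.channels.*;

// Sketch of readiness: a channel becomes "readable" once bytes arrive,
// signalled by the Selector; the consumer then reads, freeing the buffer.
public class ReadinessDemo {
    public static int bytesReadyAfterWrite() throws Exception {
        Pipe pipe = Pipe.open();
        Selector selector = Selector.open();
        pipe.source().configureBlocking(false);
        pipe.source().register(selector, SelectionKey.OP_READ);

        pipe.sink().write(ByteBuffer.wrap("hi".getBytes())); // data arrives
        selector.select();                                   // now readable

        ByteBuffer buf = ByteBuffer.allocate(16);
        int n = pipe.source().read(buf);                     // consume bytes
        selector.close();
        return n;
    }
}
```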
Otherwise, bytes or messages that arrive while not readable/writeable
pile up, and in cases of limited resources get lost.
So, the idea is that when bytes arrive, whatever's servicing the I/O's
has that the connection has data to read, and, data to write.
The usual idea is that an abstract Reader thread, will give any or all
of the connections something to read, in an arbitrary order,
at an arbitrary rate, then the role of the protocol, is to consume the
bytes to read, thus releasing the buffers, that the Reader, writes to.
Inputting/Reading
Writing/Outputting
The most usual idea of client-server is that
client writes to server then reads from server, while,
server reads from client then writes to client.
Yet, that is just a mode, reads and writes are peer-peer,
reads and writes in any order, while serial according to
that bytes in the octet stream arrive in an order.
There isn't much consideration of the out-of-band,
about sockets and the STREAMS protocol, for
that bytes can arrive out-of-band.
So, the layers of the protocol, result that some layers of the protocol
don't know anything about the protocol, all they know is sequences of
bytes, and, whatever session state is involved to implement the codec,
of the layers of the protocol. All they need to know is that given that
all previous bytes are read/written, that the connection's state is
synchronized, and everything after is read/written through the layer.
Mostly once encryption or compression is set up it's never torn down.
Encryption, TLS
Compression, LZ77 (Deflate, gzip)
The layers of the protocol, result that some layers of the protocol,
only indicate state or conditions of the session.
SASL, Login, AuthN/AuthZ
So, for NNTP, a connection usually enough starts with no layers, then
in the various protocols and layers, combinations of the protocols and
layers get negotiated and established. Other protocols expect to
start with layers, or not, it varies.
Layering, then, either is in the protocol, to synchronize the session,
then establish the layer in the layer protocol, then maintain the layer
in the main protocol: TLS makes a handshake to establish an
encryption key for all the data, then the TLS layer only needs to
encrypt and decrypt the data by that key, while for Deflate, it's
usually the only option, then after it's set up as a layer,
everything either way reads/writes gets compressed.
client -> REQUEST
RESPONSE <- server
In some protocols these interleave
client -> REQUEST1
client -> REQUEST2
RESPONSE1A <- server
RESPONSE2A <- server
RESPONSE1B <- server
RESPONSE2B <- server
This then is called multiplexing/demultiplexing, for protocols like IMAP
and HTTP/2,
and another name for multiplexer/demultiplexer is mux/demux.
So, for TLS, the idea is that usually most or all of the connections
will be using the same algorithms with different keys, and each
connection will have its own key, so the idea is to completely separate
TLS establishment from TLS cryptec (crypt/decrypt), so, the layer need
only key up the bytes by the connection's key, in their TLS frames.
Then, most of the connections will use compression, then the idea is
that the data is stored at rest compressed already and in a form that it
can be concatenated, and that similarly as constants are a bunch of the
textual content of the text-based protocol, they have compressed and
concatenable constants, with the idea that the Deflate compec
(comp/decomp) just passes those along concatenating them, or actively
compresses/decompresses buffers of bytes or as of sequences of bytes.
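The "compressed at rest in a concatenable form" idea can be illustrated with gzip, whose members concatenate byte-wise so the compec can pass stored compressed constants along without recompressing; this sketch and its names are illustrative:

```java
import java.io.*;
import java.util.zip.*;

// Sketch: concatenating two gzip members yields a stream that
// decompresses to the concatenated plaintexts, so precompressed
// constants/messages can be emitted by simple concatenation.
public class ConcatGzipDemo {
    static byte[] gzip(String s) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream g = new GZIPOutputStream(bos)) {
            g.write(s.getBytes("UTF-8"));
        }
        return bos.toByteArray();
    }

    public static String gunzipConcat(byte[] a, byte[] b) throws IOException {
        byte[] both = new byte[a.length + b.length];
        System.arraycopy(a, 0, both, 0, a.length);
        System.arraycopy(b, 0, both, a.length, b.length);
        GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(both));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[256];
        for (int n; (n = in.read(buf)) > 0; ) out.write(buf, 0, n);
        return out.toString("UTF-8");
    }
}
```

(Java's GZIPInputStream reads multi-member gzip streams; raw Deflate blocks need a bit more care about the final-block bit.)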
The idea is that Readers and Writers deal with bytes at a time,
arbitrarily many, then that what results being passed around as the
data, is as much as possible handles to the data. So, according to the
protocol and layers, indicates the types, that the command routines, get
and return, so that the command routines can get specialized, when the
data at rest, is already layerized, and otherwise to adapt to the more
concrete abstraction, of the non-blocking, asynchronous, and vector I/O,
of what results the flow-machine.
When the library of the runtime of the framework of the language
provides the cryptec or compec, then, there's issues, when, it doesn't
make it so for something like "I will read and write you the bytes as of
making a TLS handshake, then return the algorithm and the key and that
will implement the cryptec", or, "compec, here's either some data or
handles of various types, send them through", it's to be figured out.
The idea for the TLS handshake, is basically to sit in the middle, i.e.
to read and write bytes as of what the client and server send, then
figuring out what is the algorithm and key and then just using that as
the cryptec. Then after TLS algorithm and key is established the rest is
sort of discarded, though there's some idea about state and session, for
the session key feature in TLS. The TLS 1.2 also includes comp/decomp,
though, it's figured that instead it's a feature of the protocol whether
it supports compression, point being that's combining layers, and to be
implemented about these byte-sequences/handles.
mux/demux
crypt/decrypt
comp/decomp
cod/decod
codec
So, the idea is to implement toward the concrete abstraction of
nonblocking vector I/O, while, remaining agnostic of that, so that all
sorts the usual test routines yet particularly the composition of layers
and establishment and upgrade of protocols, is to happen.
Then, from the byte sequences or messages as byte sequences, or handles
of byte sequences, results that in the protocol, the protocol either way
in/out has a given expected set of alternatives that it can read, then
as of derivative of those what it will write.
So, after the layers, which are agnostic of anything but byte-sequences,
and their buffers and framing and chunking and so on, then is the
protocol, or protocols, of the command-set and request/response
semantics, and ordering/session statefulness, and lack thereof.
Then, a particular machine in the flow-machine is as of the "Recognizer"
and "Parser", then what results "Annunciators" and "Legibilizers", as it
were, of what's usually enough called "Deserialization", reading off
from a serial byte-sequence, and "Serialization", writing off to a serial
byte-sequence, first the text of the commands or the structures in these
text-based protocols, the commands and their headers/bodies/payloads,
then the Objects in the object types of the languages of the runtime,
where then the routines of the servicing of the protocol, are defined in
types according to the domain types of the protocol (and their
representations as byte-sequences and handles).
As packets and bytes arrive in the byte-sequence, the Recognizer/Parser
detects when there's a fully-formed command, and its payload, after the
Mux/Demux Demultiplexer, has that the Demultiplexer represents any given
number of separate byte-sequences, then according to the protocol
anything of their statefulness/session or orderedness/unorderedness.
So, the Demultiplexer is to Recognize/Parse from the combined input
byte-stream its chunks, that now the connection, has any number of
ordered/unordered byte-sequences, then usually that those are ephemeral
or come and go, while the connection endures, with the most usual notion
that there's only one stream and it's ordered in requests and ordered in
responses, then whether commands gets pipelined and requests need not
await their responses (they're ordered), and whether commands are
numbers and their responses get associated with their command sequence
numbers (they're unordered and the client has its own mux/demux to
relate them).
So, the Recognizer/Parser, theoretically only gets a byte at a time, or
even none, and may get an entire fully-formed message (command), or not,
and may get more bytes than a fully-formed message, or not, and the
bytes may be a well-formed message, or not, and valid, or not.
Then the job of the Recognizer/Parser, is from the beginning of the
byte-sequence, to Recognize a fully-formed message, then to create an
instance of the command object related to the handle back through the
mux/demux to the multiplexer, called the attachment to the connection,
or the return address according to the attachment representing any
routed response and usually meaning that the attachment is the user-data
and any session data attached to the connection and here of the
mux/demux of the connection, the job of the Recognizer/Parser is to work
any time input is received, then to recognize and parse any number of
fully-formed messages from the input, create those Commands according to
the protocol, that the attachment includes the return destination, and,
thusly release those buffers or advance the marker on the Input
byte-sequence, so that the resources are freed, and later
Recognizings/Parsing starts where it left off.
The idea is that bytes arrive, the Recognizer/Parser has to determine
when there's a fully-formed message, consume that and service the
buffers the byte-sequence, having created the derived command.
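A Recognizer of that sort, fed arbitrary chunks and emitting only fully-formed CRLF-terminated commands while advancing its marker, can be sketched as below; the class is illustrative and not an NNTP parser:

```java
import java.util.*;

// Sketch of an incremental Recognizer/Parser: bytes arrive in arbitrary
// chunks; only fully-formed (CRLF-terminated) commands are emitted, the
// consumed prefix is released, and the remainder stays buffered for the
// next arrival.
public class LineRecognizer {
    private final StringBuilder buffer = new StringBuilder();

    public List<String> receive(String chunk) {
        buffer.append(chunk);                      // any amount, even none
        List<String> commands = new ArrayList<>();
        int at;
        while ((at = buffer.indexOf("\r\n")) >= 0) {
            commands.add(buffer.substring(0, at)); // fully-formed command
            buffer.delete(0, at + 2);              // advance the marker
        }
        return commands;                           // empty if nothing formed
    }
}
```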
Now, commands are small, or so few words, then the headers/body/payload,
basically get larger and later unboundedly large. Then, the idea is that
the protocol, has certain modes or sub-protocols, about "switching
protocols", or modes, when basically the service of the routine changes
from recognizing and servicing the beginning to ending of a command, to
recognizing and servicing an arbitrarily large payload, or, for example,
entering a mode where streamed data arrives or whatever sort, then that
according to the length or content of the sub-protocol format, the
Recognizer's job includes that the sub-protocol-streaming, modes, get
into that "sub-protocols" is a sort of "switching protocols", the only
idea though being going into the sub-protocol then back out to the main
protocol, while "switching protocols" is involved in basically any the
establishment or upgrade of the protocol, with regards to the stateful
connection (and not stateless messages, which always are according to
their established or simply some fixed protocol).
This way unboundedly large inputs, don't actually live in the buffers of
the Recognizers that service the buffers of the Inputters/Readers and
Multiplexers/Demultiplexers, instead define modes where they will be
streaming through arbitrarily large payloads.
Here for NNTP and so on, the payloads are not considered arbitrarily
large, though, it's sort of a thing that sending or receiving the
payload of each message, can be defined this way so that in very, very
limited resources of buffers, that the flow-machine keeps flowing.
Then, here, the idea is that these commands and their payloads, have
their outputs that are derived as a function of the inputs. It's
abstractly however this so occurs is the way it is. The idea here is
that the attachment+command+payload makes a re-routine task, and is
pushed onto a task queue (TQ). Then it's figured that the TQ represents
abstractly the execution of all the commands. Then, however many Task
Workers or TW, or the TQ that runs itself, get the oldest task from the
queue (FIFO) and run it. When it's complete, then there's a response
ready in byte-sequences or handles; these are returned to the attachment.
(The "attachment" usually just means a user or private datum associated
with the connection to identify its session with the connection
according to non-blocking I/O, here it also means the mux/demux
"remultiplexer" attachment, it's the destination of any response
associated with a stream of commands over the connection.)
So, here then the TQ basically has the idea of the re-routine, that is
non-blocking and involves the asynchronous fulfillment of the routine in
the domain types of the domain of object types that the protocol adapts
as an adapter, that the domain types fulfill as adapted. Then for NNTP
that's like groups and messages and summaries and such, the objects. For
IMAP its mailboxes and messages to read, for SMTP its emails to send,
with various protocols in SMTP being separate protocols like DKIM or
what, for all these sorts protocols. For HTTP and HTTP/2 it's usual HTTP
verbs, usually HTTP 1.1 serial and pipelined requests over a connection,
in HTTP/2 multiplexed requests over a connection. Then "session" means
broadly that it may be across connections, what gets into the attachment
and the establishment and upgrade of protocol, that sessions are
stateful thusly, yet granularly, as to connections yet as to each request.
Then, the same sort of thing is the same sort of thing to back-end,
whatever makes for adapters, to domain types, that have their protocols,
and what results the O/I side to the I/O side, that the I/O side is the
server's client-facing side, while the O/I side is the
server-as-a-client-to-the-backend's, side.
Then, the O/I side is just the same sort of idea that in the
flow-machine, the protocols get established in their layers, so that all
through the routine, then the domain type are to get specialized to when
byte-sequences and handles are known well-formed in compatible
protocols, that the domain and protocol come together in their
definition, basically so it results that from the back-end is retrieved
for messages by their message-ID that are stored compressed at rest, to
result passing back handles to those, for example a memory-map range
offset to an open handle of a zip file that has the concatenable entry
of the message-Id from the groups' day's messages, or a list of those
for a range of messages, then the re-routine results passing the handles
back out to the attachment, which sends them right out.
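Passing back a handle such as a memory-map range offset of an open file, rather than copying the data through intermediate objects, can be sketched as below; the temp file stands in for a day's archive of messages, and the names are illustrative:

```java
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.*;

// Sketch of "pass back a handle": a read-only memory map of a range
// offset in an open file, which the attachment could send right out.
public class HandleDemo {
    public static String mapRange(long offset, int length) throws Exception {
        Path p = Files.createTempFile("messages", ".dat");
        Files.write(p, "xxxxhello".getBytes("UTF-8")); // stand-in archive
        try (FileChannel ch = FileChannel.open(p, StandardOpenOption.READ)) {
            MappedByteBuffer mb =
                ch.map(FileChannel.MapMode.READ_ONLY, offset, length);
            byte[] out = new byte[length];
            mb.get(out);                 // read straight from the mapping
            return new String(out, "UTF-8");
        } finally {
            Files.deleteIfExists(p);
        }
    }
}
```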
So, this way there's that besides the TQ and its TW's, that those are to
never block or be long-running, that anything that's long-running is on
the O/I side, and has its own resources, buffers, and so on, where of
course all the resources here of this flow-machine are shared by all the
flow-machines in the flow-machine, in the sense that they are not shared
yet come from a common resource altogether, and are exclusive. (This
gets into the definition of "share" as with regards to "free to share,
or copy" and "exclusive to share, a.k.a. taking turns, not cutting in
line, and not stealing nor hoarding".)
Then on the O/I side or the backend side, it's figured the backend is
any kind of adapters, like DB adapters or FS adapters or WS adapters,
database or filesystem or webservice, where object-stores are considered
filesystem adapters. What that gets into is "pools" like client pools,
connection pools, resource pools, that a pool is usually enough
according to a session and the establishment of protocol, then with
regards to servicing the adapter and according to the protocol and the
domain objects that thusly implement the protocol, the backend side has
its own dedicated routines and TW's, or threads of execution, with
regards to that the backend side basically gets a callback+request and
the job is to invoke the adapter with the request, and invoke the
callback with the response, then whether for example the callback is
actually the original attachment, or it involves "bridging the unbounded
sub-protocol", what it means for the adapter to service the command.
Then the adapter is usually either provided as with intermediate or
domain types, or, for example it's just another protocol flow machine
and according to the connections or messaging or mux/demux or
establishing and upgrading layers and protocols, it basically works the
same way as above in reverse.
Here "to service" is the usual infinitive that for the noun means "this
machine provides a service" yet as a verb that service means to operate
according to the defined behavior of the machine in the resources of the
machine to meet the resource needs of the machine's actions in the
capabilities and limits of the resources of the machine, where this "I/O
flow-machine: a service" is basically one "node" or "process" in a usual
process model, allocated its own quota of resources according to the
process and its environment model in the runtime in the system, and
that's it. So, there's servicing as the main routine, then also what it
means the maintenance servicing or service of the extended routine.
Then, for protocols it's "implement this protocol according to its
standards according to the resources in routine".
You know, I don't know where they have one of these anywhere, ....
So, besides attachment+command+payload, also is for indicating the
protocol and layers, where it can be inferred for the response, when the
callback exists or as the streaming sub-protocol starts|continues|ends,
what the response can be, in terms of domain objects, or handles, or
byte sequences, in terms of domain objects that can result handles to
transfer or byte-sequences to read or write,
attachment+command+payload+protocols "ACPP" data structure.

Another idea that seems pretty usual, is when the payload is off to the
side, about picking up the payload when the request arrives, about when
the command, in the protocol, involves that the request payload, is off
to the side, to side-load the payload, where usually it means the
payload is large, or bigger than the limits of the request size limit in
the protocol, it sort of seems a good idea, to indicate for the
protocol, whether it can resolve resource references, "external", then
that accessing them as off to the side happens before ingesting the
command or as whether it's the intent to reference the external
resource, and when, when the external resource off to the side, "is",
part of the request payload, or otherwise that it's just part of the
routine.

That though would get into when the side effect of the routine, is to
result the external reference or call, that it's figured that would all
be part of the routine. It depends on the protocol, and whether the
payload "is" fully-formed, with or without the external reference.


Then HTTP/2 and Websockets have plenty going on about the multiplexer,
where it's figured that multiplexed attachments, or "remultiplexer
attachment", RMA, out from the demultiplexer and back through the
multiplexer, have then that's another sort of protocol machine, in terms
of the layers, and about whether there's a thread or not that
multiplexing requires any sort of state on otherwise the connections'
attachment, that all the state of the multiplexer is figured lives in a
data structure on the actual attachment, while the logic should be
re-entrant and just a usual module for the protocol(s).

It's figured then that the attachment is a key, with respect to a key
number for the attachment, then that in the multiplexing or muxing
protocols, there's a serial number of the request or command. There's a
usual idea to have serial numbers for commands besides, for each
connection, and then even serial numbers for commands for the lifetime
of the runtime. Then it's the usual metric of success or the error rate
how many of those are successes and how many are failures, that
otherwise the machine is pretty agnostic that being in the protocol.

Timeouts and cancels are sort of figured to be attached to the monad and
the re-routine. It's figured that for any command in the protocol, it
has a timeout. When a command is received, is when the timeout countdown
starts, abstractly wall-clock time or system time. So, the ACPP has also
the timeout time, so, the task T has an ACPP
attachment-command-payload-protocol and a routine or reroutine R or RR.
Then also it has some metrics M or MT, here start time and expiry time,
and the serial numbers. So, how timeouts work is that when T is to be
picked up by a TW, first TW checks whether M.time is past expiry, then
if so it cancels the monad and results returning howsoever in the
protocol the timeout. If not what's figured is that before the
re-routine runs through, it just tosses T back on the TQ anyway, so that
then whenever it comes up again, it's just checked again until such time
as the task T actually completed, or it expires, or it was canceled, or
otherwise concluded, according to the combination of the monad of the
R/RR, and M.time, and system time. Now, this seems bad, because an
otherwise empty queue, would constantly be thrashing, so it's bad. Then,
what's to be figured is some sort of parameter, "toss when", that then
though would have timeout priority queues, or buckets of sorts with
regards to tossing all the tasks T back on the TQ for no other reason
than to check their timeout.
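A minimal sketch of that pick-up-and-check logic, in Java; the Task and
TaskWorker names are hypothetical, and the re-routine step itself is
elided to a comment:

```java
import java.util.ArrayDeque;
import java.util.Queue;

class Task {
    final long expiryMillis;     // MT: absolute expiry time for the command
    volatile boolean concluded;  // set when the re-routine completes or cancels
    volatile boolean cancelled;

    Task(long expiryMillis) { this.expiryMillis = expiryMillis; }
}

class TaskWorker {
    final Queue<Task> tq = new ArrayDeque<>();  // the TQ

    // When a task is picked up: if past expiry, cancel the monad and let the
    // timeout result go out in the protocol; otherwise run a step of the
    // re-routine and, if not yet concluded, toss it back on the TQ.
    void step(long now) {
        Task t = tq.poll();
        if (t == null || t.concluded) return;   // concluded tasks are discarded
        if (now >= t.expiryMillis) {
            t.cancelled = true;                 // cancel the monad, free resources
            t.concluded = true;                 // timeout result per the protocol
            return;
        }
        // ... invoke one step of the re-routine here ...
        if (!t.concluded) tq.add(t);            // re-check whenever it comes up again
    }
}
```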

It's figured that the monad of the re-routine is all the heap objects
and references to handles of the outstanding command. So, when the
re-routine is completed/canceled/concluded, then all the resources of
the monad should be freed. Then it's figured that any routine to access
the monad is re-entrant, and so that it results that access to the monad
is atomic, to build the graph of memos in the monad, then that access to
each memo is atomic as after access to the monad itself, so that the
access to the monad is thread-safe (and to be non-blocking, where the
only thing that happens to the monad is adding re-routine paths, and
getting and setting values of object values and handles, then releasing
all of it [, after picking out otherwise the result]).

So it's figured that if there's a sort of sweeper or closer being the
usual idea of timeouts, then also in the case that for whatever reason
the asynchronous backend fails, to get a success or error result and
callback, so that the task T

T {
    RMA attachment; // return/remultiplexer attachment
    PCP command; // protocol command/payload
    RR routine; // routine / re-routine (monad)
    MT metrics; // metrics/time
}

has that timeouts, are of a sort of granularity. So, it's not so much
that timeouts need to be delivered at a given exact time, as delivered
within a given duration of time. The idea is that timeouts both call a
cancel on the routine and result an error in the protocol. (Connection
and socket timeouts or connection drops or closures and so on, should
also result cancels as some form of conclusion cleans up the monad's
resources.)

There's also that timeouts are irrelevant after conclusion; if there
were a task queue of timeouts, the idea would be not to do any work
fishing concluded ones out, just letting them expire. Yet, given that
timeouts are usually much longer than actual execution times, there's
no point keeping them around at all.

Then it's figured each routine and sub-routine has its timing, then
it's figured that the RR and MT both have the time, then, as with
regards to the RR and MT both having a monad, whether it's one and the
same monad is what's to be figured.

TASK {
    RMA attachment; // return/remultiplexer attachment
    PCP command; // protocol command/payload
    RRMT routine; // routine / re-routine, metrics / time (monad)
}

Then it's figured that any sub-routine checks the timeout overall, and
the timeouts up the re-routine, and the timeout of the task, resulting
a cancel on any timeout, then basically to push that onto the task
queue LIFO last-in-first-out, which seems a bad idea, though the point
is to expeditiously return an error, release the resources, and cancel
any outstanding requests.

So, any time a task is touched, there's checking the attachment whether
it's dropped, checking the routine whether it's canceled, with the goal
of that it's all cleaned up to free the resources, and to close any
handles opened in the course of building the monad of the routine's results.

Otherwise, while a command is outstanding there's not much to be done
about it: it's either outstanding and not started, or outstanding and
started, until it concludes and there's a return. The idea being that
the attachment can drop at any time, and that would be according to the
Inputter/Reader or Recognizer/Parser (an ill-formed command results
either an error or a drop); the routine can conclude at any time,
either completing or being canceled; then, as to whether any handles
are open in the payload, a drop in the attachment, a disconnect in the
[streaming] command, or a cancel in the routine, ends each of the
three, each of those two, or that one.

(This is that the command when 'streaming sub-protocol' results a bunch
of commands in a sub-protocol that's one command in the protocol.)

The idea is that the RMA is only enough detail to relate to the current
state in the attachment of the remultiplexing, the command is enough
state to describe its command and payload and with regards to what
protocol it is and what sub-protocols it entered and what protocol it
returns to, and the routine is the monad of the entire state of the
routine, either value objects or open handles, to keep track of all the
things according to these things.

So, still it's not quite clear how to have the timeout in the case that
the backend hangs, or drops, or otherwise that there's no response from
the adapter, what's a timeout. This sort of introduces re-try logic to
go along with time-out logic.

The re-try logic involves that anything can fail, and some things can
be re-tried when they fail. The re-try logic would be part of the
routine or re-routine, figuring that any re-tries still have to live in
the time of the command. Then re-tries are kind of like time-outs: it's
usual that it's not just hammering the re-tries, but a usual sort of
back-off and retry-count, or retry strategy, and then whether it
involves that it should be a new adapter handle from the pool. About
that, adapter handles from the pool should be round-robin, and when
there are retry-able errors that usually means the adapter connection
is un-usable, so getting a new adapter connection will get a new one,
and whether retry-able errors plainly enough indicate to recycle the
adapter pool.
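A minimal sketch of that back-off and retry-count strategy, in Java;
the names are hypothetical, recycling of the adapter handle is elided
to a comment, and the deadline keeps re-tries within the command's time:

```java
import java.util.function.Supplier;

class Retry {
    // Retries a failing call with doubling back-off, up to maxRetries
    // re-tries after the first attempt, never sleeping past the deadline.
    static <T> T withBackoff(Supplier<T> call, int maxRetries,
                             long firstDelayMillis, long deadlineMillis)
            throws InterruptedException {
        long delay = firstDelayMillis;
        RuntimeException last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return call.get();
            } catch (RuntimeException e) {
                last = e;   // assume retry-able; recycle the adapter handle here
                if (attempt == maxRetries
                        || System.currentTimeMillis() + delay > deadlineMillis) break;
                Thread.sleep(delay);
                delay *= 2; // exponential back-off, not hammering the re-tries
            }
        }
        throw last;         // re-tries exhausted or out of time
    }
}
```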

Then, retry-logic also involves resource-down, what's called
circuit-breaker when the resource is down that it's figured that it's
down until it's back up. [It's figured that errors by default are _not_
retry-able, and, then as about the resource-health or
backend-availability, what gets involved in a model of critical
resource-recycling and backend-health.]
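The circuit-breaker notion above, that the resource is figured down
until it's back up, can be sketched minimally; the class and the
cool-down policy here are assumptions, not a definitive implementation:

```java
// Minimal circuit-breaker sketch: a failure marks the resource down for a
// cool-down period; a success resets it; callers check allow() first.
class CircuitBreaker {
    private final long cooldownMillis;
    private volatile long downUntil = 0;  // 0 means the circuit is closed

    CircuitBreaker(long cooldownMillis) { this.cooldownMillis = cooldownMillis; }

    boolean allow(long now) { return now >= downUntil; }               // closed/half-open
    void recordFailure(long now) { downUntil = now + cooldownMillis; } // trip open
    void recordSuccess() { downUntil = 0; }                            // reset closed
}
```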


About server-push, there's an idea that it involves the remultiplexer,
and that the routine, according to the protocol, synthesizes tasks and
is involved with the remultiplexer, to result that it makes tasks that
then run like usual tasks. [This is part of the idea also of the mux or
remux, about 1:many commands/responses, and usually enough their
serials, and then, with regards to "opportunistic server push", how to
drop the commands that follow that would otherwise request the
resources. HTTP/2 server-push looks deprecated, while then there's
WebSocket, which basically makes for a different sort of use-case
peer-peer than client-server. For IMAP is the idea that when there are
multiple responses to single commands then that's basically in the
mux/remux. For pipelined commands and also for serial commands is the
mux/remux. The pipelined commands would result state building in the
mux/remux when they're returned disordered, with regards to results and
the handles, and 'TCB' or 'TW' driving response results.]


So, how to implement timeout or the sweeper/closer, has for example that
a connection drop, should cancel all the outstanding tasks for that
connection. For example, undefined behavior of whatever sort that
results a missed callback should eventually time out and cancel the
task, or all the task instances in the TQ for that task. (It's fair enough to just
mark the monads of the attachment or routine as canceled, then they'll
just get immediately discarded when they come up in the TQ.) There's no
point having timeouts in the task queue because they'd either get
invoked for nothing or get added to the task queue long after the task
usually completes. (It's figured that most timeouts are loose timeouts
and most tasks complete in much under their timeout, yet here it's
automatic that timeouts are granular to each step of the re-routine, in
terms of the re-routine erroring-out if a sub-routine times-out.)


The Recognizer/Parser (Commander) is otherwise stateless; the
Inputter/Reader and its Remultiplexer Attachment don't know what
results Tasks; the Task Queue will run (and here non-blockingly) any
Task's associated routine/re-routine, and catch timeouts in the
execution of the re-routine. The idea is that the sweeper/closer
basically would only have anything to do when there's undefined
behavior in the re-routine, or bugs, or backend timeouts, then whether
calls to the adapter would have the timeout-task-lessors or "TTL's" in
its task queue. The point being that when there's nothing going on, the
entire thing is essentially _idle_: the Inputter/Reader blocked on
select on the I/O side, the Outputter/Writer or Backend Adapter sent on
the O/I side, the Inputter/Reader blocked on the O/I side, the TQ's
empty (of, the protocol, and, the backend adapters), and it's all just
pending input from the I/O or O/I side, to cascade the callbacks back
to idle, again.

I.e. there shouldn't be timeout tasks in the TQ, because, at low load,
they would just thrash and waste cycles, and at high load, would arrive
late. Yet, it is so that there is formal un-reliability of the routines,
and, formal un-reliability of the O/I side or backend, [and formal
un-reliability of connections or drops,] so some sweeper/closer checks
outstanding commands, which should result canceling the command and its
routines, then as with regards to the backend adapter, recycling or
tearing down the backend adapter, to set it up again.

Then the idea is that Tasks well enough represent the outstanding
commands, yet there's not to be maintaining a task set next to the task
queue, because it would use more space and maintenance in time than the
queue itself, while multiple instances of the same Task can be in the
Task queue, as each points to the state of the monad in the re-routine.
Then it gets into whether it's so that there is a task-set next to the
task-queue, then that concluding the task removes it from the set,
while the sweeper/closer just is scheduled to run periodically through
the entire task-set and cancel those expired, or dropped.

Then, having both a task-set TS and task-queue TQ, maybe seems the thing
to do, where, it should be sort of rotating, because, the task-queue is
FIFO, while the task-set is just a set (a concurrent set, though as with
regards to that the tasks can only be marked canceled, and resubmitted
to the task queue, with regards to that the only action that removes
tasks from the task-set is for the task-queue to result them being
concluded, then that whatever task gets tossed on the task queue is to
be inserted into the task-set).

Then the task-set TS would be on the order of outstanding tasks, while,
the task-queue TQ would be on the order of outstanding tasks' re-routines.

Then the usual idea of sweeper/closer is to iterate through a view of
the TS, check each task whether its attachment dropped or command or
routine timed-out or canceled, then if dropped or canceled, to toss it
on the TQ, which would eventually result canceling if not already
canceled and dropping if dropped.

(Canceling/Cancelling.)

Most of the memory would be in the monads, also the open or live handles
would be in the routine's monads, with the idea being that when the task
concludes, then the results, that go out through the remultiplexer,
should be part of the task.

TASK {
    RMA attachment; // return/remultiplexer attachment
    PCP command; // protocol command/payload
    RRMT routine; // routine / re-routine, metrics / time (monad)
    RSLT result; // result (monad)
}

It's figured that the routine _returns_ a result, which is either a
serializable value or otherwise it's according to the protocol, or it's
a live handle or specification of handle, or it has an error/exception
that is expected to be according to the protocol, or that there was an
error then whether it results a drop according to the protocol. So, when
the routine and task concludes, then the routine and metrics monads can
be released, or de-allocated or deleted, while what live handles they
have, are to be passed back as expeditiously as possible to the
remultiplexer to be written to the output as on the wire the protocol,
so that the live handles can be closed or their reference counts
decremented or otherwise released to the handle pool, of a sort, which
is yet sort of undefined.

The result RSLT isn't really part of the task, once the task is
concluding, the RRMT goes right to the RMA according to the PCP, that
being the atomic operation of concluding the task, and deleting it from
the task-set. (It's figured that outstanding callbacks unaware their
cancel, of the re-routines, basically don't toss the task back onto the
TQ if they're canceled, that if they do, it would just sort of
spuriously add it back to the task-set, which would result it being
swept out eventually.)

TASK {
    RMA attachment; // return/remultiplexer attachment
    PCP command; // protocol command/payload
    RRMT routine; // routine / re-routine, metrics / time (monad, live handles)
}

TQ // task queue
TS // task set

TW // task-queue worker thread, latch on TQ
TZ // task-set cleanup thread, scheduled about timeouts
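The TS/TQ pairing above can be sketched minimally in Java; the names
are hypothetical, the queue intentionally may hold a task more than
once (once per re-routine step) while the set holds each outstanding
task once, and concluding is the only action that removes from the set:

```java
import java.util.Queue;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

class TaskBook<T> {
    final Queue<T> tq = new ConcurrentLinkedQueue<>();   // TQ: FIFO of steps
    final Set<T> ts = ConcurrentHashMap.newKeySet();     // TS: outstanding tasks

    // Every toss onto the queue also registers the task in the set,
    // so the sweeper/closer (TZ) can scan the set for expired/dropped tasks.
    void toss(T task) {
        ts.add(task);
        tq.add(task);
    }

    // Concluding removes from the set; stale queue entries are discarded
    // whenever they come up.
    void conclude(T task) {
        ts.remove(task);
    }
}
```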

Then, about what threads run the callbacks, is to get figured out.

TCF // thread call forward
TCB // thread call back

It's sort of figured that calling forward, is into the adapters and
backend, and calling back, is out of the result to the remultiplexer and
running the remultiplexer also. This is that the task-worker thread
invokes the re-routines, and the re-routine callbacks, are pretty much
called by the backend or TCF, because all they do is toss back onto the
TQ, so that the TW runs the re-routines, the TCF is involved in the O/I
side and the backend adapter, and what reserves live handles, while the
TCB returns the results through the I/O side, and what recycles live
handles.

Then it's sort of figured that the TCF result thread groups or whatever
otherwise results whatever blocks and so on howsoever it is that the
backend adapter is implemented, while TCB is pretty much a single
thread, because it's driving I/O back out through all the open
connections, or that it describes thread groups back out the I/O side.
("TCB" not to be confused with "thread control block".)


Nonblocking I/O, and, Asynchronous I/O

One thing I'm not too sure about is the limits of the read and write of
the non-blocking I/O. What I figure is that mostly buffers throughout
are 4KiB buffers from a free-list, which is the usual idea of reserving
buffers and getting them off a free-list and returning them when done.
Then, I sort of figure that the reader gets about a 1MiB buffer for
itself, with the idea being that the Inputter, when there is data off
the wire, reads it into the 1MiB buffer, then copies that off to 4KiB
buffers.

BFL // buffer free-list, 1
BIR // buffer of the inputter/reader, 1
B4K // buffer of 4KiB size, many

What I figure that BIR is "direct memory" as much as possible, for DMA
where native, while, figuring that pretty much it's buffers on the heap,
fixed-size buffers of small enough size to usually not be mostly sparse,
while not so small that usual larger messages aren't a ton of them, then
with regards to the semantics of offsets and extents in the buffers and
buffer lists, and atomic consumption of the front of the list and atomic
concatenation to the back of the list, or queue, and about the
"monohydra" or "slique" data structure defined way above in this thread.
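The free-list idea, reserving fixed-size buffers and returning them
when done, can be sketched as follows; the class name is hypothetical
and the 4KiB size follows the text:

```java
import java.nio.ByteBuffer;
import java.util.concurrent.ConcurrentLinkedQueue;

class BufferFreeList {
    static final int BUF_SIZE = 4 * 1024;   // B4K: 4KiB buffers, many
    private final ConcurrentLinkedQueue<ByteBuffer> free = new ConcurrentLinkedQueue<>();

    // Reserve a buffer off the free-list, allocating one only when empty.
    ByteBuffer reserve() {
        ByteBuffer b = free.poll();
        return (b != null) ? b : ByteBuffer.allocate(BUF_SIZE);
    }

    // Return a buffer when done: reset position/limit, then re-enlist it.
    void release(ByteBuffer b) {
        b.clear();
        free.add(b);
    }
}
```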

Then about writing is another thing, I figure that a given number of
4KiB buffers will write out, then no longer be non-blocking while
draining, about the non-blocking I/O, that read is usually non-blocking
because if nothing is available then nothing gets copied, while write
may be blocking because the UART or what it is remains to drain to write
more in.

I'm not even sure about O_NONBLOCK, aio_read/aio_write, and overlapped I/O.

Then it looks like O_NONBLOCK with select, and the asynchronous I/O,
aio or overlapped I/O, sort of have different approaches.

I figure to use non-blocking select, then, the selector for the channel
at least in Java, has both read and write interest, or all interest,
with regards to there only being one selector key per channel (socket).
The issue with this is that there's basically that the Inputter/Reader
and Outputter/Writer are all one thread. So, it's figured that reads
would read about a megabyte at a time, then round-robin all the ready
reads and writes: for each non-blocking read, it reads as much as a
megabyte into the one buffer there, copies the read bytes appending
them into the buffer array in front of the remux Input for the
attachment, tries to write as many as possible from the buffer array
for the write output in front of the remux Output for the attachment,
then proceeds round-robin through the selector keys. (Each of those is
non-blocking on the read/write a.k.a. recv/send, then copying from the
read buffer into application buffers is as fast as it can fill a
free-list-given list of buffers, though any might get nothing done.)
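The copy-off step can be sketched as follows; the 1MiB/4KiB sizes
follow the text, the class and method names are hypothetical, and the
actual non-blocking read() into the big buffer is elided (in practice
the 4KiB buffers would come from the free-list):

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

class CopyOff {
    static final int CHUNK = 4 * 1024;

    // Copies whatever a read() deposited in the inputter's big buffer into
    // 4KiB buffers (each flipped, ready to drain), then resets the big
    // buffer for the next read.
    static List<ByteBuffer> copyOff(ByteBuffer big) {
        big.flip();                                  // switch from fill to drain
        List<ByteBuffer> chunks = new ArrayList<>();
        while (big.hasRemaining()) {
            ByteBuffer b = ByteBuffer.allocate(CHUNK);
            int n = Math.min(CHUNK, big.remaining());
            ByteBuffer slice = big.slice();          // view from current position
            slice.limit(n);
            b.put(slice);                            // copy n bytes
            big.position(big.position() + n);        // consume from the big buffer
            b.flip();
            chunks.add(b);
        }
        big.clear();                                 // ready for the next read
        return chunks;
    }
}
```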

One of the issues is that the selector keys get waked up for read, when
there is any input, and for write, when the output has any writeable
space, yet, there's no reason to service the write keys when there is
nothing to write, and nothing to read from the read keys when nothing to
read.

So, it's figured the read keys are always of interest, yet if the write
keys are of interest, mostly it's only one or the other. So I'd figure
to have separate read and write selectors, yet it's suggested the
operations of interest must go together with the channel, then whether
the idea is "round-robin write then round-robin read", because
otherwise all the selector keys would always be waking up for writing
nothing when the way is clear, for nothing.

Then besides non-blocking I/O is asynchronous I/O, where mostly the
idea is that the completion handler results about the same, ..., where
the completion handler is usually enough "copy the data out to read,
repeat", or just "atomic append more to write, repeat", with though
whether that results that each connection needs its own read buffers,
in terms of asynchronous I/O, not saying in what order the completion
handlers, completion ports or completion handlers, run, or whether for
reading each needs its own buffer. I.e., to scale to unboundedly many
connections, the idea is to use constant-size resources, because
anything linear would grow unbounded. That what's to write is still all
these buffers of data, and how to "deduplicate the backend" still has
that the heap fills up with tasks; the other great hope is that the
resulting runtime naturally rate-limits itself, by what resources it
has: heap.

About "live handles" is the sort of hope that "well, when it gets to
the writing the I/O, figuring to transfer an entire file, pass it an
open handle", which is starting to seem a bad idea, mostly for not
keeping handles open while not actively reading and writing from them.
Mostly, for the usual backend that does have a file-system or
object-store representation, how to result that, results a sort of
streaming sub-protocol routine, about fetching ranges of the objects;
or otherwise the idea is that the backend file is a zip file, with that
the results are buffers of data ready to write, or handles, to
concatenate the compressed sections that happen to be just ranges in
the file, compressed, with concatenating them together about the
internals of zip file format, the data at rest. I.e. the idea is that
handles are sides of a pipe, then to transfer the handle as readable to
the output side of the pipe as writeable.

It seems though for various runtimes, that both a sort of "classic
O_NONBLOCK" and "async I/O in callbacks" organizations can be about
the same, figuring that whenever there's a read it drives the Layers,
then the Recognizer/Parser (the remux if any and then the
command/payload parser), and the Layers, and if there's anything to
write then the usual routine is to send it and release to recycle any
buffers, or close the handles, as their contents are sent.

It's figured to marshal whatever there is to write as buffers, while,
the idea of handles results being more on the asynchronous I/O on the
backend when it's filesystem. Otherwise it would get involved partially
written handles, though there's definitely something to be said for an
open handle to an unbounded file, and writing that out without breaking
it into a streaming-sub-protocol or not having it on the heap.

"Use nonblocking mode for this operation; that is, this call to preadv2
will fail and set errno to EAGAIN if the operation would block. "

The goal is mostly being entirely non-blocking, then with that the
atomic consume/concatenate of buffers makes for "don't touch the buffers
while their I/O is outstanding or imminent", then that what services I/O
only consumes and concatenates, while getting from the free-list or
returning to the free-list, what it concatenates or consumes. [It's
figured to have buffers of 4KiB or 512KiB size, the inputter gets a 1MiB
direct buffer, that RAM is a very scarce resource.]

So, for the non-blocking I/O, I'm trying to figure out how to service
the ready reads, while, only servicing ready writes that also have
something to write. Then I don't much worry about it because ready
writes with nothing to write would result a no-op. Then, about the
asynchronous I/O, is that there would always be an outstanding or
imminent completion result for the ready read, or that, I'm not sure how
to make it so that reads are not making busy-work, while, it seems clear
that writes are driven by there being something to write, then though
not wanting those to hammer when the output buffer is full. In this
sense the non-blocking vector I/O with select/epoll/kqueue or what, uses
less resources for services that have various levels of load, day-over-day.


https://hackage.haskell.org/package/machines
https://clojure.org/reference/transducers
https://chamibuddhika.wordpress.com/2012/08/11/io-demystified/


With non-blocking I/O, or at least in Java, the attachment, is attached
to the selection key, so, they're just round-robin'ed. In asynchronous
(aio on POSIX or overlapped I/O on Windows respectively), in Java the
completion event gets the attachment, but doesn't really say how to
invoke the async send/recv again, and I don't want to maintain a map of
attachments and connections, though it would be alright if that's the
way of things.

Then it sort of seems like "non-blocking for read, or drops, async I/O
for writes". Yet, for example in Java, a SocketChannel is a
SelectableChannel, while an AsynchronousSocketChannel is not a
SelectableChannel.

Then, it seems pretty clear that while on Windows, one might want to
employ the aio model, because it's built into Windows, then as for the
sort of followup guarantees, or best when on Windows, that otherwise
the most usual approach is O_NONBLOCK for the socket fd and the fd_set.

Then, what select seems to guarantee, is, that, operations of interest,
_going to ready_, get updated, it doesn't say anything about going to
un-ready. Reads start un-ready and writes start ready, then that the
idea is that select results updating readiness, but not unreadiness.
Then the usual selector implementation, for the selection keys, and the
registered keys and the selected keys, for the interest ops (here only
read and write yet also connect when drops fall out of it) and ready ops.

Yet, it doesn't seem to really claim to guarantee that, while working
with a view of the selection keys, if selection keys are removed
because they're read-unready (nothing to do) or nothing-to-write
(nothing to do), the next select round has to have marked any
read-ready, while it's figured that any something-to-write should add
the corresponding key back to the selection keys. (If the write buffer
is full, a write would just return 0 I suppose, yet not wanting to
hammer/thrash/churn, instead just write when ready.)

So I want to establish that there can be more than one selector,
because, otherwise I suppose that the Inputter/Reader (now also
Outputter/Writer) wants read keys that update to ready, and write keys
that update to ready, yet not write keys that have nothing-to-do, when
they're all ready when they have nothing-to-do. Yet, it seems pretty
much that they all go through one function, like WSPSelect on Windows.

I suppose there's setting the interest ops of the key, according to
whether there's something to write, figuring there's always something to
read, yet when there is something to write, would involve finding the
key and setting its write-interest again. I don't figure that any kind
of changing the selector keys themselves is any kind of good idea at
all, but I only want to deal with the keys that get activity.

Also there's an idea that read() or write() might return -1 and set
EAGAIN in the POSIX thread-local error number, yet for example in the
Java implementation it's to be avoided altogether calling when unready,
as they only return >0 or throw an otherwise ambiguous exception.

So, I'm pretty much of a mind to just invoke select according to 60
seconds timeout, then just have the I/O thread service all the selection
keys, what way it can sort of discover drops as it goes through then
read if readable and write if write-able and timeout according to the
protocol if the protocol has a timeout.

Yet, it seems instead that when a read() or write() returns, until
read() or write() returns 0, there is a bit of initialization to figure
out, must be. What it seems is that selection is on all the interest
ops, then to unset interest on OP_WRITE until there is something to
write, then to set interest on OP_WRITE on the selector's keys before
entering select, wherein it will populate what's writable, as where
it's writable. Yet, there's not removing the key, as it will show up
for OP_READ presumably anyway.
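That OP_WRITE toggling can be sketched as a minimal helper against
Java NIO's SelectionKey; read interest is permanent, write interest is
set only while there is pending output, so an idle connection never
wakes the selector for a clear write path (the helper name is an
assumption, not an API):

```java
import java.nio.channels.SelectionKey;

class InterestOps {
    // Call before entering select(): always OP_READ, plus OP_WRITE only
    // when the attachment's output buffers are non-empty.
    static void updateInterest(SelectionKey key, boolean somethingToWrite) {
        int ops = SelectionKey.OP_READ;
        if (somethingToWrite) ops |= SelectionKey.OP_WRITE;
        key.interestOps(ops);
    }
}
```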

Anyways it seems that it's alright to have multiple selectors anyways,
so having separate read and write selectors seems fine. Then though
there's two threads, so both can block in select() at the same time.
Then it's figured that the write selector is initialized by deleting the
selected-key as it starts by default write-able, and then it's only of
interest when it's ever full on writing, so it comes up, there's writes
until done and its' deleted, then that continues until there's nothing
to do. The reads are pretty simple then and when the selected-keys come
up they're read until nothing-to-do, then deleted from selected-keys.
[So, the writer thread is mostly only around to finish unfulfilled writes.]


Remux: Multiplexer/Demultiplexer, Remultiplexer, mux/demux

A command might have multiple responses, where it's figured it will
result multiple tasks, or a single task, that return to a single
attachment's connection. The multiplexer mostly accepts that requests
are multiplexed over the connection, so it results that those are
ephemeral, and that the remux creates remux attachments to the original
attachment, involved in any sort of frames/chunks. The compression
layer is variously before or after that, then encryption is after that,
while some protocols also have encryption of a sort within that.

The remux then results that the Recognizer/Parser just gets input,
recognizes the creation of frames/chunks, then assembles their contents
into commands/payloads. Then it's figured that the commands are
independent and just work their way through as tasks, and then get
chunked/framed according to the remux, then also as with regards to
"streaming sub-protocols with respect to the remux".

Pipelined commands basically result a remux, establishing that the
responses are written in serial order as were received.
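A minimal sketch of that serial-order flushing, with hypothetical
names: responses may conclude out of order, but are held in the remux
and written out strictly in the serial order their commands arrived:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class PipelineRemux {
    private long nextToWrite = 1;                 // next serial due on the wire
    private final Map<Long, String> done = new HashMap<>();

    // Called as each task concludes; returns whatever is now writable,
    // in serial order (possibly empty if an earlier response is pending).
    List<String> conclude(long serial, String response) {
        done.put(serial, response);
        List<String> out = new ArrayList<>();
        while (done.containsKey(nextToWrite)) {
            out.add(done.remove(nextToWrite));
            nextToWrite++;
        }
        return out;
    }
}
```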

It's basically figured that 63 bit or 31 bit serial numbers would be
plenty to identify unique requests per connection, and connections and
so on, about the lifetime of the routine and a serial number for each thing.



IO <-> Selectors <-> Rec/Par <-> Remux <-> Rec/Par <-> TQ/TS <-> backend