Commit dfee1e24 (Verified), authored Feb 02, 2021 by 4lDO2 🖖

Remove the now-unused "cycle state".

Parent: 38aa5e6e
Changes: 1 file, text/0000-io_uring.md
...
...
@@ -193,34 +193,32 @@ elements when that occurs. Any invalid bit pattern in the `sts` field will
result in the `BROKEN` flag being set, marking the ring unrecoverable both for
the producer and the consumer.
-After that, the head and tail indices are fetched to later compare them. The
-most-significant bit of the respective indices represents the _partial cycle
-state_. Namely, when incrementing either index results in a new index that is
-greater than or equal to the fixed number of entries in the ring, the index
-part of the index field is set back to zero, with the cycle flag toggled. If
-the index part of either of the index fields is equal to or greater than the
-entry count _when fetched_, the ring will transition into the `BROKEN` state.
-
-Once both indices are fetched, with their partial cycle bits extracted from
-them, the cycle bits are XORed, resulting in the _cycle state_.
-
-The indices represent regular array indices in the entries array. Therefore,
-the byte offset of an entry is simply calculated by multiplying the size of
-the entry by the index.
-
-TODO: Ownership of the ring parts. The sender is allowed to do anything with
-the entries that are not available for popping, and the receiver is also
-allowed to manipulate the entries that are not yet popped. The current
-algorithm for pushing and popping simply pushes or pops a single item, but
-this can be made more efficient this way.
+After that, the raw head and tail indices are fetched to later compare them.
+They are converted to the actual indices by calculating the bitwise AND
+between the raw indices and the index mask, which itself is calculated by
+subtracting one from the entry count. Effectively, this also constrains the
+possible entry counts to powers of two; however, it comes with various
+benefits, such as simplifying push and pop operations, as well as eliminating
+relatively expensive division instructions. Furthermore, it makes handling of
+integer overflow trivial, as it only needs to rely on natural wrapping. Hence,
+the entry count must also not exceed half of the largest number representable
+by the word size, even though this limit is very unlikely to be met in
+practice.
+
+The computed indices represent regular array indices in the entries array.
+Therefore, the byte offset of an entry is simply calculated by multiplying
+the size of the entry by the index.
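The masking scheme described above can be sketched as follows. This is an illustrative example, not code from the RFC; the constant and function names are hypothetical:

```rust
// Hypothetical sketch of the index-masking scheme: a raw, freely wrapping
// index is reduced to an array index with a bitwise AND, and the byte
// offset is the entry size times that index.
const ENTRY_COUNT: usize = 8; // must be a power of two
const INDEX_MASK: usize = ENTRY_COUNT - 1; // mask = entry count - 1

fn entry_offset(raw_index: usize, entry_size: usize) -> usize {
    // The raw index wraps naturally; AND-ing with the mask yields the
    // actual array index, avoiding a division or modulo instruction.
    let index = raw_index & INDEX_MASK;
    index * entry_size
}

fn main() {
    // With 8 entries of 64 bytes each, raw index 10 maps to slot 2,
    // i.e. byte offset 128.
    assert_eq!(entry_offset(10, 64), 128);
    assert_eq!(entry_offset(7, 64), 448);
}
```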
+TODO: Document ownership of the ring parts. The sender is (or should, in
+theory, be) allowed to do anything with the entries that are not available
+for popping, and the receiver is also allowed to manipulate the entries that
+are not yet popped. The current algorithm for pushing and popping simply
+pushes or pops a single item, but it can be made more efficient this way.
#### Calculating the number of available and empty entry slots
-The number of entries that can be popped by the receiver of a ring is
-determined as follows:
-
-* _cycle state = false_: _available count = tail index - head index_.
-* _cycle state = true_: _available count = total entry count - (head index - tail index)_.
+The number of entries that can be popped by the receiver of a ring is
+determined by subtracting the raw head index from the raw tail index. Note
+that this may underflow, since those indices can wrap.
+
+Similarly, the number of entry slots that can be pushed into is calculated as
+the total number of entries minus the available count.
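The wrapping subtraction can be sketched as below, assuming a 32-bit raw index; the names are illustrative, not the RFC's actual API:

```rust
// Hypothetical sketch: available and free slot counts computed from the raw
// head and tail indices using natural wrapping arithmetic.
const ENTRY_COUNT: u32 = 8; // power of two, at most half the index range

fn available(head_raw: u32, tail_raw: u32) -> u32 {
    // The raw indices wrap freely; wrapping_sub yields the correct count
    // even after the tail has wrapped past the head numerically.
    tail_raw.wrapping_sub(head_raw)
}

fn free(head_raw: u32, tail_raw: u32) -> u32 {
    // Slots that can still be pushed into: total minus available.
    ENTRY_COUNT - available(head_raw, tail_raw)
}

fn main() {
    // The raw tail has wrapped around zero while the head has not yet.
    let head = u32::MAX - 1;
    let tail = 2u32;
    assert_eq!(available(head, tail), 4);
    assert_eq!(free(head, tail), 4);
}
```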
...
...
@@ -238,12 +236,10 @@ receiver can handle.
#### Pushing
When the ring is not full, the sender may write arbitrary values to the region
-between the tail and head index when the cycle state is false, or between the
-head and tail index when the cycle state is true. Once the desired entries
-are written to this region, the sender may then increment the tail index,
-wrapping around the total count, as well as setting the partial cycle state
-accordingly. After that, the sender is no longer allowed to write to that
-entry, until the ring has cycled and it has become available again.
+between the tail and head index. Once the desired entries are written to this
+region, the sender may then increment the tail index, possibly wrapping
+around to zero. After that, the sender is no longer allowed to write to that
+range of entries until the ring has cycled back and it has become available
+again.
Since the ring is strictly SPSC, the producer can assume exclusive access to
the sender-owned region.
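A single-item push under these rules might look like the sketch below. The struct and method names are hypothetical, and the head and tail are shown as plain integers rather than the atomics a shared-memory ring would need:

```rust
// Hypothetical single-producer push sketch for a power-of-two SPSC ring
// with freely wrapping raw indices; not the RFC's actual API.
struct SpscRing<T> {
    entries: Vec<Option<T>>, // backing storage; length is a power of two
    head: u32,               // raw head index, advanced by the consumer
    tail: u32,               // raw tail index, advanced by the producer
}

impl<T> SpscRing<T> {
    fn mask(&self) -> u32 {
        self.entries.len() as u32 - 1
    }

    /// Push a single entry, or hand the value back if the ring is full.
    fn push(&mut self, value: T) -> Result<(), T> {
        // Full when every slot lies in the receiver-owned region.
        if self.tail.wrapping_sub(self.head) == self.entries.len() as u32 {
            return Err(value);
        }
        // Masking converts the raw tail into the actual slot index.
        let slot = (self.tail & self.mask()) as usize;
        self.entries[slot] = Some(value);
        // Incrementing the raw tail may wrap around to zero; that is fine,
        // since all comparisons use wrapping arithmetic.
        self.tail = self.tail.wrapping_add(1);
        Ok(())
    }
}

fn main() {
    let mut ring = SpscRing { entries: vec![None; 2], head: 0, tail: 0 };
    assert!(ring.push(1u8).is_ok());
    assert!(ring.push(2u8).is_ok());
    assert!(ring.push(3u8).is_err()); // ring is full
}
```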
...
...
@@ -272,6 +268,12 @@ in order to wake the executor from external wakers. If no kernel-mode polling
has been set up, then the ring must be entered for the epoch updates to take
effect.
+TODO: Given that the ring indices now use natural wrapping and are masked,
+could this instead be replaced by a flag that other threads can set to wake
+the process which has entered the ring in polling mode? Otherwise, if a
+system call is needed anyway when not polling, this could be replaced by
+triggering an interrupting signal.
## Interface layer

The Redox `io_uring` implementation comes with a new scheme, `io_uring:`. The
...
...