redox-os / website · Commit 91519ba8 (Verified)
Authored 4 years ago by Jacob Lorentzon
Fix io_uring-1 typos.
Parent: b78726f9
1 merge request: !245 "Second io_uring blog post"
Showing 1 changed file: content/news/io_uring-1.md (+28 −27)
 +++
 title = "RSoC: improving drivers and kernel - part 1 (largely io_uring)"
 author = "4lDO2"
-date = "2020-07-01T12:44:00+02:00"
+date = "2020-07-02T19:26:00+02:00"
 +++
 # Introduction
 This week has been quite productive for the most part. I continued updating
 [the RFC](https://gitlab.redox-os.org/redox-os/rfcs/-/merge_requests/15), with
-some newer ideas that I came up while working on the implementation, which
-obviously requires figuring out more details on the design, that I had not
-really thought about previously, for example when the kernel is going to
-interfere.
+some newer ideas that I came up with while working on the implementation, most
+importantly how the kernel is going to be involved in `io_uring` operation.
 I also came up with a set of standard opcodes, that schemes are meant to use
-unless in some special scenarios (like general-purpose IPC between processes),
-which can be found [here](0.0.0.0).
+when using `io_uring`, unless in some special scenarios (like general-purpose
+IPC between processes).<!-- The opcodes at this point in time, can be found
+[here](0.0.0.0).-->
 ## The three attachment modes
 The most notable change that I made, is that instead of always attaching an
 `io_uring` between two userspace processes, there can be attachments directly
 from the userspace to the kernel (and vice versa), which is much more similar
-to how Linux works, except that Redox has two additional "attachment modes".
+to how `io_uring` on Linux works, except that Redox has two additional
+"attachment modes".
 The three of them are:
 * userspace-to-kernel, where the userspace is the producer and the kernel is
   the consumer. In this mode, the ring can (or rather, is supposed to be able
   to) either be polled by the kernel at the end of scheduling, for e.g. certain
   ultra-low-latency drivers, or the default: the kernel only processes the
-  entries using the `SYS_ENTER_IORING` syscall. If the `io_uring` interface is
-  going to be used more by the Redox userspace, it may not be that efficient to
-  have one ring per consumer process per producer process; with this mode,
-  there only has to be one ring (or more) from the userspace to kernel, and
-  then the kernel can designate syscalls directed to other schemes, when those
-  are used by the file descriptors. Then, there will be only one ring from the
-  kernel to that producer scheme.
+  entries during the `SYS_ENTER_IORING` syscall. The reason behind this mode
+  is that if the `io_uring` interface is going to be used more by the Redox
+  userspace, it may not be that efficient to have one ring per consumer process
+  per producer process; with this mode, there only has to be one ring (or more)
+  from the userspace to the kernel, and then the kernel can designate syscalls
+  directed to other schemes, when those are used by the file descriptors. Then,
+  there will be only one ring from the kernel to that producer scheme.
 * kernel-to-userspace, which is nothing but the opposite of the
   userspace-to-kernel mode. Schemes can be attached by other userspace
-  processes, or the kernel (as mentioned above);
+  processes, or the kernel (as mentioned above), and when attached, they
+  function the exact same way as with regular scheme handles; they
 * userspace-to-userspace, where both the producer and consumer of an `io_uring`
   are regular userspace processes. Just as with the userspace-to-kernel mode,
   these are attached with the `SYS_ATTACH_IORING` syscall, and except for
...
@@ -50,14 +50,14 @@ This was probably the least fun part of this week. Not that it is required for
 `io_uring`s to function properly, but async/await could really help in some
 situations, for example when storing pending submissions to handle. While
 async/await has been there since stable 1.39, only recently has it worked in
-`#![no_std]`. I saw that the nightly version that Redox used for everything
-was nightly-2019-11-25, and so I decided to use the latest version (also for
-the newer `asm!` macro). It turned out that the mainline branch from the
-official rust repository was capable of compiling all of Redox (there may be
-some parts that require patching anyways, but I could run the system as with
-the older compiler). I hope that it won't be too hard to correctly submit the
-patches to every repo with the `llvm_asm` change, and get it to integrate with
-the cookbook. Anyways, hooray!
+`#![no_std]`. It turned out that the nightly version that Redox used for
+everything was nightly-2019-11-25, and so I decided to use the latest version
+(also for the newer `asm!` macro). Somehow the master branch from the
+official rust repository was capable of compiling all of Redox (there may be
+some parts that require patching anyways, but I could run the system
+out-of-the-box, just like with the older compiler). I hope that it won't be
+too hard to correctly submit the patches to every repo with the `llvm_asm`
+change, and get it to integrate with the cookbook. Anyways, hooray!
## TODO
Currently only a few opcodes are implemented by the kernel, and my next goal is
...
...
@@ -69,11 +69,12 @@ nvmed and xhcid already use async, but it'd be nicer not having to write your
 own executor for every driver). With this executor, I'm going to try getting
 `usbscsid` to be completely async
-and talk to `xhcid` using `io_uring`, and let `xhcid` to mask MSI interrupts by
+and talk to `xhcid` using `io_uring`, and let `xhcid` mask MSI interrupts by
 talking to `pcid` with `io_uring` as well.
 I'll also see whether at some point in the future, it could be possible to be
 compatible with the Linux `io_uring` API; perhaps it won't have to be syscall
-compatible, but porting `liburing` would certainly benefit.
+compatible (even if that would work), but porting `liburing` would certainly
+benefit.
I'd really appreciate any kind of feedback if possible.