  1. 09 Jul, 2021 1 commit
  2. 01 Jul, 2021 1 commit
  3. 30 Jun, 2021 1 commit
    • Fix producer wakeup. · 5e7a9683
      4lDO2 authored
      I have now done a minor refactor of the state of io_uring handles.
      Most importantly, a handle no longer caches indices it is not
      supposed to cache, for example in userspace-to-userspace rings,
      where the kernel does not have an exclusive right to either side of
      the ring; in fact, it is not supposed to have any (direct) access at
      all.
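      As a minimal sketch of the hazard (illustrative names, not the real
      handle state): caching an index is only sound for a side of the ring
      that the handle exclusively owns, since nothing else can then
      advance the shared value behind its back. On a
      userspace-to-userspace ring the kernel owns neither side, so its
      handle must always reload the shared index:

      ```rust
      use std::sync::atomic::{AtomicUsize, Ordering};

      // Illustrative handle state, not the actual kernel structures.
      struct RingSide<'ring> {
          // The index shared through the ring header, advanced by its owner.
          shared: &'ring AtomicUsize,
          // A locally cached copy. Only sound when this handle exclusively
          // owns the side; otherwise it can go stale at any moment.
          cached: Option<usize>,
      }

      impl RingSide<'_> {
          fn current(&self) -> usize {
              match self.cached {
                  Some(index) => index,                        // exclusive owner
                  None => self.shared.load(Ordering::Acquire), // shared observer
              }
          }
      }

      fn main() {
          let shared = AtomicUsize::new(3);
          let stale = RingSide { shared: &shared, cached: Some(3) };
          let honest = RingSide { shared: &shared, cached: None };
          shared.store(4, Ordering::Release); // the other side makes progress
          assert_eq!(stale.current(), 3);  // a wrongly cached index goes stale
          assert_eq!(honest.current(), 4); // reloading always sees the progress
      }
      ```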
  4. 29 Jun, 2021 5 commits
  5. 23 Jun, 2021 2 commits
  6. 21 Jun, 2021 2 commits
  7. 18 Jun, 2021 2 commits
  8. 17 Jun, 2021 6 commits
  9. 15 Jun, 2021 3 commits
    • Always push double CQEs atomically. · c4ca645b
      4lDO2 authored
      Now that CQEs have shrunk to the size Linux uses, we sometimes need
      to push two CQEs. This commit makes sure that they are pushed
      atomically; in fact, redox-iou panics immediately if it sees the
      first one but not the second (when coming from the kernel).
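      A rough single-producer sketch of the idea (assumed field names and
      layout, not redox_syscall's actual types): write both entries first,
      then publish them with a single release store of the tail, so a
      consumer can never observe the first CQE without the second.

      ```rust
      use std::sync::atomic::{AtomicU32, Ordering};

      #[derive(Clone, Copy, Default)]
      struct Cqe {
          user_data: u64,
          status: u32,
      }

      // In reality the ring lives in shared memory and each side maps its
      // own view; `&mut self` here stands in for "we are the sole producer".
      struct CompletionRing {
          entries: Box<[Cqe]>, // capacity is a power of two
          head: AtomicU32,     // advanced by the consumer
          tail: AtomicU32,     // advanced by us, the producer
      }

      impl CompletionRing {
          fn push_pair(&mut self, first: Cqe, second: Cqe) -> Result<(), ()> {
              let head = self.head.load(Ordering::Acquire);
              let tail = self.tail.load(Ordering::Relaxed);
              let cap = self.entries.len() as u32;
              if cap - tail.wrapping_sub(head) < 2 {
                  return Err(()); // room for one entry is not enough: all or nothing
              }
              let mask = cap - 1;
              self.entries[(tail & mask) as usize] = first;
              self.entries[(tail.wrapping_add(1) & mask) as usize] = second;
              // One release store publishes both entries at once; the ring
              // is never visible with only the first half pushed.
              self.tail.store(tail.wrapping_add(2), Ordering::Release);
              Ok(())
          }
      }

      fn main() {
          let mut ring = CompletionRing {
              entries: vec![Cqe::default(); 8].into_boxed_slice(),
              head: AtomicU32::new(0),
              tail: AtomicU32::new(0),
          };
          let half = Cqe { user_data: 1, status: 0 };
          assert!(ring.push_pair(half, half).is_ok());
          assert_eq!(ring.tail.load(Ordering::Relaxed), 2); // both at once
      }
      ```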
    • Update syscall. · c6f39627
      4lDO2 authored
    • Refactor io_uring, simplifying a lot. · 3a4cd4ca
      4lDO2 authored
      Most importantly, this commit makes use of the newer syscall
      io_uring API, which removes the epochs that never really served a
      purpose, and which also supports fetching many entries from the
      rings in bulk.
      
      Therefore, it also reduces the number of possible SQE and CQE types
      to one each, eliminating the need for overly generic code and
      helping with kernel code size. If a CQE needs full 64-bit values,
      we simply send two CQEs, with a special flag indicating that the
      completion is larger. The entry types are now exactly as large as
      on Linux.
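
      One plausible encoding of that two-CQE scheme is sketched below; the
      flag name and the low/high split are assumptions for illustration,
      not necessarily what the commit uses, but the entry is exactly 16
      bytes, as on Linux:

      ```rust
      /// A CQE with the same size (16 bytes) as Linux's `io_uring_cqe`.
      #[repr(C)]
      #[derive(Clone, Copy)]
      struct Cqe {
          user_data: u64,
          status: i32,
          flags: u32,
      }

      // Hypothetical flag marking the first half of a two-CQE completion.
      const CQE_FLAG_EXTENDED: u32 = 1 << 0;

      // Split a full 64-bit value across two entries; the consumer treats
      // them as one logical completion (and, per the commit, redox-iou
      // panics if the kernel delivers the first half without the second).
      fn split(user_data: u64, value: u64) -> [Cqe; 2] {
          [
              Cqe { user_data, status: value as u32 as i32, flags: CQE_FLAG_EXTENDED },
              Cqe { user_data, status: (value >> 32) as u32 as i32, flags: 0 },
          ]
      }

      fn main() {
          assert_eq!(std::mem::size_of::<Cqe>(), 16);
          let [lo, hi] = split(42, 0xDEAD_BEEF_0000_1234);
          assert!(lo.flags & CQE_FLAG_EXTENDED != 0);
          let joined = ((hi.status as u32 as u64) << 32) | (lo.status as u32 as u64);
          assert_eq!(joined, 0xDEAD_BEEF_0000_1234);
      }
      ```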
      
      Soon, I will probably also implement the SQEs array, making the
      actual ring buffer used for the SQ __much__ smaller, as it will then
      only submit indices rather than entire entries. This will probably
      only be done for userspace-to-kernel rings, or _maybe_
      userspace-to-userspace rings; it makes no sense at all for
      kernel-to-userspace rings (which I will also implement).
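
      A sketch of what that indirection could look like (modeled on
      Linux's separate SQE array; the names and layout are guesses, not
      the planned implementation): the ring carries only 4-byte indices,
      while the full-size SQEs live in a flat side table.

      ```rust
      // Full-size submission entry; only its index travels through the ring.
      #[derive(Clone, Copy, Default)]
      struct Sqe {
          opcode: u8,
          user_data: u64,
      }

      struct SubmissionQueue {
          sqes: Vec<Sqe>,  // flat table of entries, reused slot by slot
          ring: Vec<u32>,  // the actual ring buffer: 4-byte indices only
      }

      impl SubmissionQueue {
          // Place the entry in a free table slot and submit just its index,
          // so the shared ring stays a fraction of the size of the entries.
          fn submit(&mut self, slot: u32, sqe: Sqe) {
              self.sqes[slot as usize] = sqe;
              self.ring.push(slot); // stand-in for a real head/tail ring push
          }
      }

      fn main() {
          let mut sq = SubmissionQueue {
              sqes: vec![Sqe::default(); 64],
              ring: Vec::new(),
          };
          sq.submit(7, Sqe { opcode: 1, user_data: 0xABCD });
          assert_eq!(sq.ring, vec![7]);
          assert_eq!(sq.sqes[7].user_data, 0xABCD);
      }
      ```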
  10. 14 Jun, 2021 1 commit
  11. 12 Jun, 2021 2 commits
  12. 13 May, 2021 1 commit
  13. 10 May, 2021 1 commit
  14. 07 May, 2021 3 commits
  15. 06 May, 2021 9 commits