The `size` argument to `FileDesc::read()` is not checked against the
length of the buffer, so `libc::read()` could end up writing past the
buffer if we passed a size that's too large. However, we always pass
exactly the size of the buffer, so that doesn't happen. Let's just
remove the argument since it's not currently needed, thereby removing
the risk of bugs if the function is used incorrectly by future
callers.
This came up in review of `unsafe` Rust code at my company.
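For illustration, a minimal sketch (not the exact crossterm source, and assuming the `libc` crate) of a read that derives the size from the buffer itself, so the unsafe call can never write past the end of it:

```rust
use std::io;
use std::os::unix::io::RawFd;

/// Reads from `fd` into `buffer`. The length passed to `libc::read` is
/// always `buffer.len()`, so callers cannot request an out-of-bounds write.
fn read_fd(fd: RawFd, buffer: &mut [u8]) -> io::Result<usize> {
    let result = unsafe {
        libc::read(
            fd,
            buffer.as_mut_ptr() as *mut libc::c_void,
            buffer.len(),
        )
    };
    if result < 0 {
        Err(io::Error::last_os_error())
    } else {
        Ok(result as usize)
    }
}
```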
The SetColors command was executing SetForegroundColor and then
SetBackgroundColor, which writes 2 extra characters per cell compared to
writing both colors in one command. This resulted in about 15-25% more
FPS (19->24 fps) on a fullscreen (171x51) app that writes every cell
with a different foreground and background color, compared to separately
using the SetForegroundColor and SetBackgroundColor commands (iTerm2, M2
MacBook Pro).
The app is the colors_rgb example in Ratatui, which writes every cell
with a different foreground and background color in a loop. The
CrosstermBackend was changed to use SetColors instead of
SetForegroundColor and SetBackgroundColor.
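A usage sketch of the single-command form, using crossterm's `SetColors` and `Colors` types (the RGB values here are arbitrary):

```rust
use std::io::{stdout, Write};

use crossterm::{
    queue,
    style::{Color, Colors, Print, SetColors},
};

fn main() -> std::io::Result<()> {
    let mut out = stdout();
    // One SetColors command emits a single escape sequence for both colors,
    // instead of a SetForegroundColor sequence followed by a
    // SetBackgroundColor sequence.
    queue!(
        out,
        SetColors(Colors::new(
            Color::Rgb { r: 255, g: 128, b: 0 },
            Color::Rgb { r: 0, g: 64, b: 128 },
        )),
        Print("█")
    )?;
    out.flush()
}
```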
Previously, the only ways to construct new `Attributes` values were
`Attributes::default()` or `Attributes::from(_)`, neither of which can be
made a `const fn` due to limitations of those standard library traits.
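A hedged sketch of the motivation (the struct layout and constructor name here are illustrative, not necessarily the exact crossterm API): trait methods such as `Default::default()` and `From::from()` cannot be `const fn`, so a dedicated inherent constructor is needed to build a value in const contexts.

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct Attributes(u32);

impl Attributes {
    /// An empty attribute set, usable in `const` and `static` items.
    pub const fn none() -> Self {
        Attributes(0)
    }
}

// Now possible at compile time, which Default/From could not provide:
pub const NO_ATTRIBUTES: Attributes = Attributes::none();
```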
When double clicking on Windows, crossterm_winapi emits the first
click with `EventFlags::PressOrRelease` and the second click with
`EventFlags::DoubleClick`. Previously this code explicitly ignored mouse
events with `EventFlags::DoubleClick` because "double click not
supported by unix terminals." This change captures double clicks and
surfaces them as normal click events.
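A simplified sketch of the idea, assuming crossterm_winapi's `EventFlags` enum (the helper function itself is illustrative): treat a `DoubleClick` record the same as an ordinary press instead of discarding it.

```rust
use crossterm_winapi::EventFlags;

/// Returns true for mouse records that should surface as click events.
fn is_click(flags: &EventFlags) -> bool {
    matches!(
        flags,
        // The second click of a double click arrives flagged as DoubleClick;
        // report it as a normal press rather than ignoring it.
        EventFlags::PressOrRelease | EventFlags::DoubleClick
    )
}
```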
This change does two things:
- add the serial_test crate to run selected tests serially rather than in
  parallel. This is done because they use global state, so running them in
  parallel leads to race conditions and flaky results (sometimes they pass,
  sometimes they fail). Running them serially avoids this flakiness.
- create a screen buffer within the test. This avoids changing the terminal
  (screen buffer) that is running the test. For example, a test that changes
  the terminal size to 20 x 20 can leave the developer running the test with
  a resized terminal. Creating a separate screen buffer for the test avoids
  this (see the sketch after this list).
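A hedged sketch of the combination; serial_test's `#[serial]` attribute is real, while the screen-buffer handling is described in comments only, since the exact helper names vary:

```rust
#[cfg(test)]
mod tests {
    use serial_test::serial;

    // Both tests mutate shared terminal state, so they must not run at the
    // same time; #[serial] forces them to run one after the other.
    #[test]
    #[serial]
    fn resize_runs_in_its_own_screen_buffer() {
        // Sketch: create a dedicated screen buffer (e.g. via crossterm_winapi),
        // make it active, run the 20 x 20 resize assertions against it, then
        // restore the original buffer so the developer's terminal keeps its
        // original size.
    }

    #[test]
    #[serial]
    fn set_size_is_not_flaky_when_serialized() {
        // Another global-state test; it starts only after the test above has
        // finished, which removes the race condition that caused flakiness.
    }
}
```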
It is possible to render images in terminals with protocols such as Sixel,
iTerm2's, or Kitty's. For basic Sixel or iTerm2 image printing, it is
sufficient to print an escape sequence containing the data; e.g. `cat image`
just works: the image is displayed and enough lines are scrolled.
But for more sophisticated usage of images, such as in TUIs, it is necessary
to know exactly what area the image will cover, in terms of columns/rows of
characters. Then it is possible to e.g. resize the image to a size that fits
a col/row area precisely, avoid overdrawing the image area, accommodate
layouts, etc.
Thus, provide the window size in pixel width/height, in addition to cols/rows.
The Windows implementation always returns a "not implemented" error. The
Windows API exposes a font size, but in logical units, not pixels.
This could be further extended to expose either a "logical window size",
or a "pixel font size" and "logical font size".
The "report alternate keys" part of the Kitty keyboard protocol will
send an additional codepoint containing the "shifted" version of a
key based on the keyboard layout. This is useful for downstream
applications which set up keybindings based on symbols instead of
exact keys being pressed.
For example, underscore (_) with the Alt modifier is sent as minus (-)
with Alt and Shift modifiers. A terminal will send the underscore
codepoint as an alternate though, and we can use that information and
the presence of the Shift modifier to resolve the symbol. Other
examples are 'A-(' (sent as 'A-S-9') and 'A-)' (sent as 'A-S-0').
This change allows pushing the "report alternate keys" flag and
overwrites the keycode and modifiers for any shifted keys sent by the
terminal.
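A usage sketch with crossterm's keyboard-enhancement API (`PushKeyboardEnhancementFlags` with `REPORT_ALTERNATE_KEYS`); event reading is shown only minimally:

```rust
use std::io::stdout;

use crossterm::{
    event::{
        read, Event, KeyboardEnhancementFlags, PopKeyboardEnhancementFlags,
        PushKeyboardEnhancementFlags,
    },
    execute,
    terminal::{disable_raw_mode, enable_raw_mode},
};

fn main() -> std::io::Result<()> {
    enable_raw_mode()?;
    // Ask the terminal (if it supports the Kitty keyboard protocol) to also
    // report the shifted/alternate codepoint, so e.g. Alt+_ is resolved to
    // '_' with Alt instead of '-' with Alt and Shift.
    execute!(
        stdout(),
        PushKeyboardEnhancementFlags(KeyboardEnhancementFlags::REPORT_ALTERNATE_KEYS)
    )?;

    // Read a few key events; code and modifiers already reflect the
    // resolved symbol when the terminal sends an alternate key.
    for _ in 0..5 {
        if let Event::Key(key) = read()? {
            println!("{:?} {:?}\r", key.code, key.modifiers);
        }
    }

    execute!(stdout(), PopKeyboardEnhancementFlags)?;
    disable_raw_mode()
}
```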