Optimising Raspberry Pi 5’s software environment

In this latest addition to our Raspberry Pi 5 development diary, Principal Software Engineer Tim Gover joins Gordon Hollingworth and Eben to discuss how we developed Raspberry Pi 5’s software environment.

It’s most recognisably an evolution of the Raspberry Pi 4 platform, but elements of the Raspberry Pi 5 software environment have been in development since Raspberry Pi 1 and 2.

“It only took us eleven years and seven hardware generations, but we got there in the end”: good stuff doesn’t come easy.

Below is a transcript of Tim, Gordon, and Eben’s conversation about developing Raspberry Pi 5’s software environment.

Once more, the below is a machine transcription that a human has reviewed. If we’ve missed any bothersome mistakes, let us know in the comments.

Eben 0:08: Raspberry Pi 5 hasn’t just been a hardware effort. We always say every hardware company is a software company; we probably have two software engineers working on all these things for every hardware engineer. It’s kind of, I guess it’s an evolution, right, of the Pi 4, it’s an evolution of the previous Pi platforms, particularly the Pi 4 platform. Perhaps we could talk a little bit about the evolution of the Pi 4 software environment from launch, because we grew a lot of features over the course of the Pi 4 programme, right?

Tim 0:41: So when we did the Pi 4 launch, we kind of used the Pi 3 software as a baseline. But between then and Pi 5, we’ve been pushing lots of software out of the VPU firmware, and into Linux. Biggest example of that is probably the display driver, the KMS driver stack. So Pi 4 has, I think, at least three display driver implementations there.

Eben 1:09: It’s got classic, so, dispmanx; it’s got fake KMS?

Tim 1:15: Yes.

Eben 1:16: Firmware — FKMS, which is either firmware or fake, depending on your view; and then real KMS.

Tim 1:21: Yeah.

Gordon 1:21: Which is called full KMS, which I always get confused with FKMS!

Tim 1:28: So: Pi 5, we think — well, KMS was pretty mature, so we decided, okay, let’s just have that as the display driver. It’s a little bit of a challenge whilst bringing up the chip because we didn’t have the driver. So how do you verify a chip without a driver? So you end up writing test things and evolving that. So we kind of had very cut-down display drivers to verify the hardware, whilst we were simultaneously bringing up the real software thing. Similar, the ISP. So the ISP for various historical reasons had to all run on the VideoCore; Pi 5, full Linux —

Eben 2:08: And that was ejected, that’s been ejected over the course of Pi 4’s lifetime. We slowly ejected it.

Gordon 2:17: So there’s like a bit on Pi 4 which is still in the firmware, which is actually — if you’re, like, programming the ISP, but there’s then — the rest of libcamera talks a consistent interface across there. And that was, like, this has been planned. Like we say it’s, oh, since Pi 4; actually, we’ve planned this from quite a long time ago, you know, starting with Pi 1 ish, 2 ish when we started with the GPU stuff. We wanted to start to make an open source GPU so that we can do all of our GPU stuff from the Arm, not, you know —

Eben 2:23: But it’s this gradual whittling away of the much-loathed blob. You know, people talk about start.elf, the firmware that runs on the VPU, which in 2012 ran almost everything in the system, you know: ran your HDMI, would negotiate with HDMI for you, and it would render your triangles for you, and decode your video, and process your camera images. And things have gradually been carved away from this, to the point — so I guess in Pi 4 you’re in this sort of liminal space where GPU’s gone, so 3D is gone, that’s in, that’s Mesa; display scanout and ISP have gone over the course of the Pi 4 generation; encode is still in start.elf. And then Pi 5, we’ve turfed out all of those things, including of course, encode, because there is no encode hardware on the platform.

Tim 3:47: Pi 4 had H.264 decode, not in Pi 5, and HEVC has always been on the Arm for Pi 4. So that gets us to an interesting point, where actually all of the multimedia stuff is now running on the Arm. So for Pi 5, why do we need start.elf? And we came to the conclusion: we don’t need it. So we still have some firmware, but that firmware is essentially just power, clocks, DDR init; it’s pretty much as small as we can make it. So it’s completely self-contained.

Eben 4:18: And all of that lives in the, does all of that live in the SPI flash?

Tim 4:22: Yep. The SPI flash has everything that it needs to load the Linux kernel.

Eben 4:26: So no remaining — you run on a Pi 5, no remaining stuff in the SD card, in the SD card file system. Yeah. Yeah, super cool! Right — we’ve made it! It only took us eleven —

Gordon 4:41: … Winding out! Yeah, no —

Eben 4:43: Eleven years and seven hardware generations, but we got there in the end.

Gordon 4:45: Now the other interesting thing I think we’re — one of the things I really pushed for right when we first started doing — with Project Y was — is making sure that when we test Project Y, what we do is actually write Linux drivers. Because what our tendency is, is whilst you’re testing in silicon, what you do is, you’ve just, somebody writes a really simple driver just — punches some registers, check some things, does some stuff. And of course what you end up with then is, you get your chip back and you check it and you go, yeah, it still works. And they’re, like, great! How do I run this on Linux? Oh, no, you now need to write a thousand lines of code, and you’re like, oh. Right? So, doing that meant that as soon as we got it back, we were actually able to run code, and —

Eben 5:30: Do all-up tests.

Gordon 5:32: Yeah, absolutely.

Eben 5:33: So on the multimedia side, multimedia has largely migrated from being the responsibility of the VPU, the responsibility of the blob, to being the responsibility of the Arms — the big Arms on the application processor — through a variety of standard interfaces: KMS, Mesa, libcamera… V4L2 for HEVC decode?

Gordon 5:58: For HEVC decode, yes, V4L2 stateless…

Eben 6:03: Stateless. So these are all Linux standard — these are all Linux standards. So that’s really standards-based. As you said, the other big change in the platform is that all of the interfacing — so multimedia’s gone one way; all the interfacing has kind of gone in another direction, right, in that it’s gone off the core chip, onto RP1.
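
An aside for readers who want to poke at these standard interfaces themselves: the sketch below shows how a userspace program identifies a V4L2 memory-to-memory device, which is how a stateless decoder appears to Linux. It is a hedged illustration, not part of the actual driver stack being discussed; the /dev/video19 node is an assumed example, and in practice you would enumerate /dev/video* and query each node.

/* Minimal V4L2 capability query (illustrative sketch). */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
    const char *node = "/dev/video19";   /* assumed example node */
    int fd = open(node, O_RDWR);
    if (fd < 0) {
        perror(node);
        return 1;
    }

    struct v4l2_capability cap;
    memset(&cap, 0, sizeof(cap));
    if (ioctl(fd, VIDIOC_QUERYCAP, &cap) == 0) {
        printf("driver: %s, card: %s\n", (char *)cap.driver, (char *)cap.card);
        if (cap.device_caps & V4L2_CAP_VIDEO_M2M_MPLANE)
            printf("%s is a memory-to-memory (codec-style) device\n", node);
    } else {
        perror("VIDIOC_QUERYCAP");
    }

    close(fd);
    return 0;
}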

Gordon 6:21: Yeah. Yeah, so — that’s really good, because it means that our drivers become completely standard. And, you know — a lot of our drivers are standard anyway, because they use standard interfaces, they use standard modules, right. So your USB and your SDIO and that kind of stuff, they’re using standard drivers with some, maybe, little modifications here and there. But the great thing is that you’ve got, yes, you’ve set up that PCI connection, and then all your devices there on the other side, you have a driver running. And it just talks to those as if they are on the same chip; but obviously they’re not, it talks down a PCI link. So that changes some things, but also makes it much easier. And it also means, of course, you can then use that wherever you want. And you could — as you know, we could use Project Y standalone if we wanted to. Not that we would, but you can plug it into any —

Eben 7:17: RP1. RP1 now. We’ve launched the chip, it’s got a name.

Gordon 7:23: There’s a chip, it’s RP1, yes okay. Yeah, yeah. Project is — yeah, fair enough.

Eben 7:26: And in fact, when we were developing those drivers, we even had PCs, cause we had PCs — We had a few development platforms: we had PCs connected to the FPGA prototyping platform; I think we had PCs with actual RP1 silicon?

Gordon 7:44: Yeah. Yes, that’s right. We’ve got that card that you can plug into the PC, yeah.

Eben 7:47: We had 2711, we had CM4, you know, for a more Arm-like environment, I think we had a CM4 connected to the —

Gordon 7:54: And with a PC, it was like — the drivers are slightly more difficult because they don’t have, because x86 doesn’t have device tree. So it’s slightly different in terms of configuration and that kind of stuff. So it’s actually, it’s not as — people are like, well, could you do that? Can you make up an x86 Raspberry Pi? It’s like, oh, God, it’d be a lot of work. You could, but it —

Eben 8:15: You could put RP1 down with a, you know, an AMD APU or something, and you’d end up with a thing that felt, in terms of — in interfacing land, felt very Raspberry Pi-like, but it wouldn’t feel quite like a Raspberry Pi.

Gordon 8:27: That’s it, yeah. So, but that’s actually really, really great, because it does mean that we’ve got this one bit that does, then, the — our interfacing. And all the drivers are there, you know, we’ve tested them all, tested them all in simulation with those drivers as well. So, you know, yeah, we’ve had a couple of years now of those drivers. So, yeah, it’s been good.
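
A small aside on the device tree point above: on a Raspberry Pi the firmware hands Linux a device tree describing the platform, including RP1 and the peripherals behind it, and that is exactly the configuration mechanism a stock x86 PC lacks. The toy program below just reads the model string the kernel exposes under /proc/device-tree; it illustrates the mechanism and is not any of the driver code being discussed.

/* Print the device tree model string (illustrative sketch). */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/device-tree/model", "r");
    if (!f) {
        perror("/proc/device-tree/model");
        return 1;
    }

    char model[128] = { 0 };
    fread(model, 1, sizeof(model) - 1, f);   /* model string is NUL-terminated */
    fclose(f);

    printf("Device tree model: %s\n", model);
    return 0;
}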

Eben 8:53: And again, mostly standards-based interfaces, so, USB is xHCI.

Gordon 9:00: Yep.

Eben 9:00: Ethernet is the Cadence core, but there are other SoCs that have that — it’s not standards-based, but there are other SoCs that have that core in, so we’re able to use drivers for that core. I guess the MIPI stuff is rather more — that’s rather more custom. So the interface to the CSI and DSI is custom to us.

Gordon 9:21: Now that was just — the great thing about that, as well, is just how easy it seemed to be. I mean, I say easy; I’m sure if I spoke to someone like Nick and said, how easy was that? I’m sure he wouldn’t — but it just, everything just worked.

Tim 9:32: Everything seemed to bring up, didn’t seem to be that difficult, where we were kind of waiting for the horrendous thing, cause — didn’t happen —

Gordon 9:32: Yeah: when’s it gonna get really hard? And it just didn’t happen, and things just kinda worked.

Tim 9:40: I mean, there was a lot of it! Yeah.

Gordon 9:41: How many times did you ask me, you know, does DSI work? Like, has anybody checked DSI? I’m like, yeah, no, they’ve checked! They’ve tested it all!

Eben 9:49: And when we started to see these all-up tests, I guess in the springtime, when we started to see these all-up tests where you’d have a dual 4Kp60 desktop, and a 1080p DPI monitor, and a DSI display, and the camera —

Gordon 10:08: And the camera, yeah!

Eben 10:09: And hammering the network, and hammering two USB 3 SSDs at the same time —

Gordon 10:14: And you’re like: that’s looking good!

Eben 10:14: You’re thinking: it’s not falling over! It’s a testament to the amount of all-up testing, to the amount of testing that went in, and in particular this kind of all-up testing that happened at the FPGA stage, right? Yeah, that works really nicely.

Gordon 10:28: I mean, we’ve always, like — You know, testing at silicon level, trying to — unit-testing blocks is one thing, but actually once they — because they interact in a system, and if you do not do that in simulation, that’s where your problems are, they’re always in those interactions. So trying to do as much as you can; you can’t always, I mean, you’re not gonna be able to run Linux in a simulation, but trying to do as much as you can to kind of reproduce that scenario. And you know, that’s, you know, it does work. Great.

Eben 11:05: What about boot modes? Do we have the full set of boot modes at launch?

Gordon 11:08: [Laughs]

Eben 11:10: We’re several weeks away from — we’re filming this in advance. So: do you believe we will have the full set of boot modes at launch?

Tim 11:17: Well I’m not writing them all!

Eben 11:20: So yes, then!

Tim 11:21: So yes, definitely, yes. No, so, SD boot works, obviously, because we’ve been using that for ages. USB: that was fun; we have two xHCI controllers on RP1, that makes enumerating USB devices slightly more entertaining, but, yep, that works — I mean, I’ve been using that for over six months. NVMe. That just worked. It’s the same code. And the nice thing about your standards-based interfaces is that there was a little bit of plumbing to talk to RP1 from the VPU, which has slightly harder access to PCIe than the Arms, but once we did that plumbing, the existing code just worked. There’s a little bit of work left to do for the Ethernet, so… but I think it’s pretty close.

Gordon 12:06: And then the only thing — it’s mostly just getting packets across that Ethernet interface —

Tim 12:11: Oh, it’s only the Ethernet MAC driver, and testing it, obviously.

Gordon 12:14: But all the rest of it’s already done.

Tim 12:16: Yeah, most of the code. I mean, it’s a big project, it was very complicated — what we couldn’t really afford to do is just go and do all these things in isolation, and then do a big integration — So basically, we had to do everything on the main code line all the time, and try not to break stuff. And we’ve been reasonably successful, because we haven’t had an integration nightmare.

Eben 12:39: That’s good.
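
Relating to the boot-mode discussion above: on Raspberry Pi 5 the bootloader and its configuration, including the BOOT_ORDER setting that selects SD, USB, NVMe or network boot, live in the SPI flash, and the running configuration can be dumped with vcgencmd bootloader_config (or edited with the rpi-eeprom-config tool). The sketch below simply shells out to the former and prints what it returns; the exact keys and their values depend on the firmware release.

/* Dump the bootloader configuration via vcgencmd (illustrative sketch). */
#include <stdio.h>

int main(void)
{
    FILE *p = popen("vcgencmd bootloader_config", "r");
    if (!p) {
        perror("popen");
        return 1;
    }

    char line[256];
    while (fgets(line, sizeof(line), p))
        fputs(line, stdout);   /* e.g. BOOT_ORDER=0x... among other settings */

    return pclose(p) == 0 ? 0 : 1;
}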

Gordon 12:40: Yeah. How many functions does the button have?

Eben 12:45: The most controversial element!

Tim 12:46: It’s the most complicated button! That’s probably the most complicated bit for software actually.

Gordon 12:51: How do you make a button do lots and lots of things?

Eben 12:53: If I can’t play Tetris, just by pressing the power button, then you’ve not done it right. Tetris in the bootloader.

Gordon 12:58: You know, I have been kicking — every now and again, I’d say to Tim, so you know, like, that — you know the game that you could put into the boot ROM? It’s like: No! We don’t have the space for it!

Eben 13:07: We could at least have the little dinosaur that runs along and jumps over things.

Gordon 13:11: That’s what I said! Like, literally… The first time, he’s like: you can’t do it. There’s no — we haven’t got time, it won’t fit. The second time, it was kind of like: well, you could do it, but like, we haven’t got the… The third time, it was kinda like: well, I might think about certain — they’re, like: Yeah. I don’t know, I’m just not asking any more. I’m just like, I’m assuming one day…

Eben 13:30: The dinosaur appears.

Tim 13:30: There are many places —

Gordon 13:33: There’s a lot of other work to do.

Tim 13:34: There are many, many things that we can do there. I mean, a big change actually to the bootloader was the new power supply. We have a very sophisticated power supply; we can do lots of stuff with that.

Eben 13:43: We can, so we go out and we can interrogate — so we now have PD, we have a PD PHY on the CC pins, so we can go and interrogate the power supply and say, Are you a five amp? Are you a five amp power supply? And then we turn off — we have quite an aggressive USB current limit, by default, which you can turn off if you want, but we have quite an aggressive USB current limit that we then turn up. So that’s a chunk more complexity.

Gordon 14:17: It’s gonna be fun getting used to, but —

Tim 14:19: Well also, as I say, the PMIC itself has a built-in ADC, so you can actually look at the core voltage and the core current, and you can use that to profile your usage. And, yeah, we’ve just got a little VideoCore gen command [vcgencmd]. So, yeah.

Gordon 14:20: We can draw a graph.

Tim 14:35: We can draw graphs, we can look at benchmarks, you can look at the 3.3 voltage and current and see all of that stuff, and —
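
A hedged sketch of the telemetry Tim describes: on Raspberry Pi 5, vcgencmd pmic_read_adc reports per-rail voltage and current readings from the PMIC’s built-in ADC. The program below runs the command and, purely as an illustration, picks out a core voltage and current to estimate power. The rail names VDD_CORE_V and VDD_CORE_A, and the name=value line format, are assumptions based on current firmware output and may change between releases.

/* Estimate core power from PMIC ADC readings (illustrative sketch). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    FILE *p = popen("vcgencmd pmic_read_adc", "r");
    if (!p) {
        perror("popen");
        return 1;
    }

    double core_v = 0.0, core_a = 0.0;
    char line[256];
    while (fgets(line, sizeof(line), p)) {
        char *eq = strchr(line, '=');
        if (!eq)
            continue;
        double value = strtod(eq + 1, NULL);
        if (strstr(line, "VDD_CORE_V"))
            core_v = value;   /* assumed core voltage rail, volts */
        else if (strstr(line, "VDD_CORE_A"))
            core_a = value;   /* assumed core current rail, amps */
    }
    pclose(p);

    printf("core: %.3f V, %.3f A, ~%.2f W\n", core_v, core_a, core_v * core_a);
    return 0;
}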

Eben 14:42: Should we maybe talk a little bit about testing, actually, that — particularly the stress vectors, like, finding the stress vector’s always a challenge for us, right?

Gordon 14:52: Well cause you never know — one of the problems is you never know, have you actually found the stress vector, right? Because that’s what we’ve done in the past, right, is thinking, we found a stress — the correct stress test, and then finding that, you know, we release it and then somebody finds something harder.

Eben 15:07: When we say the stress test, we mean the thing you run in production. The thing you run in production testing, yeah. Which is Linpack again?

Tim 15:15: Which is Linpack again, yes. Although we moved — it’s 64-bit Linpack. We’ve kind of said that was… yeah.

Eben 15:23: Is it NEON Linpack or —

Tim 15:27: There is some NEON, but —

Eben 15:29: You’ve got those — each CPU’s now got eight — it’s got two full-width NEONs, right, so it’s got eight FPUs, and then — so you can light up 32 FPUs constantly. That’s kind of a, that’s a lot of power.

Tim 15:40: That’s a lot of power, yeah, and actually trying to manage the code, so — you don’t want it to be stalled, waiting for SDRAM, so you’re trying to optimise your test set to fit into the caches, and then at some point — is this really realistic? So just keep iterating on that and measuring it, and we’re pretty happy with that.
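
This is not the production Linpack test, just a toy illustration of the sizing idea Tim describes: keep the floating-point working set small enough to stay resident in cache, so the FPUs are never left stalled waiting for SDRAM. The 1 MiB buffers and the pass count are arbitrary assumptions rather than the real test’s parameters; built with optimisation, the inner loop is the kind of thing a compiler can vectorise with NEON.

/* Cache-resident floating-point stress loop (illustrative sketch). */
#include <stdio.h>
#include <stdlib.h>

#define N (1024 * 1024 / sizeof(double))   /* ~1 MiB working set per buffer */

int main(void)
{
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    if (!a || !b)
        return 1;

    for (size_t i = 0; i < N; i++) {
        a[i] = 1.000001;
        b[i] = 0.999999;
    }

    double acc = 0.0;
    /* Sweep the same small buffers repeatedly so the data stays in cache. */
    for (int pass = 0; pass < 1000; pass++)
        for (size_t i = 0; i < N; i++)
            acc += a[i] * b[i];

    printf("checksum: %f\n", acc);   /* stops the work being optimised away */
    free(a);
    free(b);
    return 0;
}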

Gordon 15:59: Yeah. And that stress — because it, it’s what tells us whether or not a device — to trim out variations in process that you get from silicon, natural silicon has variations in process, and a lot — other companies will bin chips, they’ll have fast ones and slow ones, and we don’t do that, we try to make everything run at the same speed. So fast devices run —

Eben 16:24: This is AVS, adaptive voltage scaling. It feels like a much more, it feels like the software environment’s much more mature, right? Than Pi 4 was.

Tim 16:37: Yeah, actually, the — so we kind of started with the port of the Pi 4 software, which had quite a complicated AVS implementation, and then actually going — as the project matured, we started realising, actually we don’t need to do all of this stuff. So we — there was kind of a satisfying thing of deleting lots of this code at the end, and we’ve actually got fairly simple voltage —

Eben 16:58: Simple rules. Run fast and then throttle back.

Tim 17:02: And, yeah, even at the kind of idle or maximum-throttle thing, it’s still dual 4Kp60, and probably getting on for twice as fast as a Pi 4.
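
For readers who want to watch the throttling behaviour Tim mentions from userspace: vcgencmd get_throttled returns a bitmask of live and latched under-voltage and throttling conditions. The sketch below decodes the commonly documented bits; it is a diagnostic illustration, not anything the firmware’s AVS code does internally.

/* Decode the vcgencmd get_throttled bitmask (illustrative sketch). */
#include <stdio.h>

int main(void)
{
    FILE *p = popen("vcgencmd get_throttled", "r");
    if (!p) {
        perror("popen");
        return 1;
    }

    unsigned int flags = 0;
    if (fscanf(p, "throttled=%x", &flags) != 1)   /* output looks like "throttled=0x0" */
        fprintf(stderr, "unexpected output format\n");
    pclose(p);

    printf("under-voltage now:       %u\n", (flags >> 0) & 1);
    printf("frequency capped now:    %u\n", (flags >> 1) & 1);
    printf("currently throttled:     %u\n", (flags >> 2) & 1);
    printf("soft temp limit active:  %u\n", (flags >> 3) & 1);
    printf("under-voltage occurred:  %u\n", (flags >> 16) & 1);
    printf("throttling has occurred: %u\n", (flags >> 18) & 1);
    return 0;
}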

Eben 17:13: It’s kind of fun. And all this is coming along, so this product is also coming along at an interesting time for us in software world, right, which is Bookworm. Debian… how many?

Gordon 17:24: Sorry?

Eben 17:24: Number 12? 11, 12?

[They talk over each other]

Gordon 17:32: So yeah, so that’s… I think… we tried — we wanted to get — obviously, one of the things that we wanted to do is to have Pi 5 — we wanted to start there. So that’s where we’ve been, trying to make sure that Bookworm and Pi 5 are there together, that’s the thing. We weren’t going to use Bullseye, I didn’t want to port Bullseye and make that work —

Eben 17:56: Cause Bullseye’s very old now, right? It’s two years since freeze, and so in practice three or four years since packages started to be selected.

Gordon 18:03: But also one of the things, the great things that happens with Bookworm is that a lot of stuff has changed in such a way that it allows us, number one, to use more recent Mesa versions, right, which then allows us to use much more recently released compositor implementations. So previously, when we went into Bullseye, what we did is we changed our compositor — we were using, er, compman. No, compman? Xcompman [xcompmgr]? — that we were using for doing that. And then we changed that to Mutter. And that changed a lot of stuff. It was kind of like a — and the idea is that we were trying to get —

Eben 18:46: When you say compositor, you mean the thing that assembles the screen. That puts the windows together to make a desktop.

Gordon 18:51: That draws — puts the graphics, draws the 2D graphics and puts it into a single window to then display on that screen. Separately from the compositor is also a window manager; that’s the thing that actually handles the fact that there are many windows —

Eben 19:07: Are you really gonna try and describe the difference between compositors and window managers?

Gordon 19:10: Well, no, what I was just gonna say is that — is that Mutter is both of those things. So Mutter actually contains — is like one thing that contains all of those. And because it contains all of those things, they’ve made choices that weren’t quite our choices. They were things that they’d made, which wouldn’t be — they’re not perfect for us. But it would give us the — it’s like a pathway that we could see to get from X Windows, from a completely X Windows display manager, through to Weston — oh sorry — Wayland, which — Wayland is, if you like, the future. That’s where people have been, you know, we’ve been working for a very long time to get a window manager which is more secure but also has the ability to use more of the hardware. Especially on a Raspberry Pi, we’ve had special hardware on Raspberry Pi, ever since Raspberry Pi 1, that can actually do most of that composition work in hardware, and that would allow you to accelerate it much more. So I’ve always wanted to be able to do that, to use that hardware, but there are many, many complexities. The idea is that —

Eben 20:19: And too many complexities in X world, right? In legacy world of X, too many complexities to ever really fight your way through it.

Gordon 20:26: Because they’re so intertwined. There’s so much, like, with all your multimedia systems, and your — everything from the format of the pixels all the way through to getting it through a GPU onto a screen; all that stuff is much more complex. Because they then separated, because at least the — what we’ve done is we’ve chosen a new window manager called Wayfire, and because that separated — really, the window management part and the composition part quite considerably separated — that allows us to look at the composition, on its own; it’s much simpler, allows us to kind of understand it ourselves and then to insert and edit and change it in such a way that we can use that hardware. And that’s what we’re trying to do, is use the hardware that we have available on Raspberry Pi and always have, to do that acceleration, that hardware acceleration task. And for example, what I would like to be able to do one day is to be able to play a video, an HDR video, and then see that on the screen and integrated into a full window. So you can have a windowed — a window on your screen that has HDR video, whilst the rest of your GUI looks completely normal, not all with different colours and things. And that is what the hardware should be able to do.

Eben 21:51: For the first time — we’re not there, but we’re within reach of it.

Gordon 21:54: Not there; we’re within reach, we have the —

Eben 21:55: Do we have benefits from it already though?

Gordon 21:56: Oh, absolutely. Yeah, it makes — the difference, the difference is,

Eben 22:02: On a Pi 4?

Gordon 22:03: Well, on anything. I mean, yeah. So yeah, any of the — definitely makes a big difference. We — but — we have a lot further to go, but, yes, it already looks —

Eben 22:15: With Pi 5, you’re stacking together two things: you’re stacking together hardware which is two to three times faster, with a display subsystem which gives you a path to memory which is twice as fast — I’m sorry, a path to pixels to the screen which is roughly twice as fast. So you’re kind of seeing, all of this is arriving at the same time.

Gordon 22:31: Yeah, absolutely. So, yeah. I really want to see not just Pi 5 being awesome, I also want to see improvements in Pi 1; because that’s what I’m expecting is to get there, to the point where Pi 1 suddenly, you’re playing fullscreen video at 1080p. And, you know, that’s what you should be looking for.

Eben 22:51: Which you know you can do in a dispmanx layer using the classic scanout, you can absolutely play Big Buck Bunny as a sort of a detached full-screen overlay over the rest of everything on a Pi 1 at 1080p. The question is, why can’t I do that in VLC?

Gordon 23:05: Yes. Yeah, absolutely. And that’s always been difficult. It’s always been hard to get those pixels onto the screen.

Eben 23:11: If you touch the pixels, you die, is the rule. And even on the new modern fast devices at 1080 — at 4K, if you touch the pixels, you die. Even if you touch them with the GPU, you die. Or you’re at risk of death.

Gordon 23:23: Very difficult to — yeah, because so much data, so much data. And yeah, the — processors are very good at doing it but they are not as good as just pure hardware. So much faster. So yeah, so that’s kind of like the end of that, trying to get that in place. So that’s what we’ve been working on significantly. It has a lot of changes. There are a lot of differences, much as there were in previous releases. It’s got a whole bunch more differences. But we think actually, it’s looking pretty good. We ran a beta test of the software, so that we actually have a lot of people having a little play, so we get more feedback than we have done in previous years. So yeah, but that’s looking really good.
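
One practical consequence of the move from X to Wayland that Gordon describes, and something that also comes up in the comments below about VNC, is that applications and support scripts sometimes need to know which session type they are running under. A minimal sketch, assuming a normal desktop login, using the standard environment variables:

/* Report whether the desktop session is Wayland or X11 (illustrative sketch). */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *session = getenv("XDG_SESSION_TYPE");   /* typically "wayland" or "x11" */
    const char *wayland = getenv("WAYLAND_DISPLAY");
    const char *x11     = getenv("DISPLAY");

    printf("XDG_SESSION_TYPE: %s\n", session ? session : "(unset)");
    printf("WAYLAND_DISPLAY:  %s\n", wayland ? wayland : "(unset)");
    printf("DISPLAY:          %s\n", x11 ? x11 : "(unset)");
    return 0;
}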

Eben 24:12: Excellent! Does feel like a desktop computer when you sit in front of it, right?

Tim 24:16: It was surprising how fast dual 4Kp60 was, cause even when we were doing the hardware bring-up: is it really going to be usable? But when we put Wayland on at the same time, then it’s — I’m dragging windows across screens like — yes, this shouldn’t be this fast!

Eben 24:31: My laptop, my MacBook struggles. When I plug it into a 4K television, it chugs, right. And it’s not — it’s a fairly recent piece of hardware. It’s remarkable how well this works with two 4K displays.

Gordon 24:44: My laptop, yeah, side by side, you know, Facebook or YouTube or whatever on there, you know, so it’s pretty kind of JavaScript-heavy website: I can’t really tell the difference between that and my PC. It is really good. Yeah. Not that computers are only meant to be doing browsers, but let’s be honest, that’s kind of what most people are —

Eben 25:02: It’s a really important use case!

12 comments

Jack Chaney:

Still some rough edges… On my first attempt I selected Firefox, and got a surprise: the webcam was not recognized. I added Chrome to test if it was something in the OS or just Firefox, and the camera worked with Chrome. Previous articles hinted the Firefox implementation was still a little experimental. Otherwise I am really liking the upgrade.

InterestedIndividual:

The compositing is really nice… according to the docs it has support for window snapping, but this isn’t enabled by default on the Pi. Personally I’d love to see this enabled by default.

Ushitra Boyes:

It is a great improvement to the Raspberry Pi Desktop OS – from a Debian 11 base to a Debian 12 base. After moving to the Debian 12 base, I found some incompatibilities – these applications are no longer executable:
1) GNOME Screenshot
2) XScreenSaver and its modules
Please advise me which replacement applications are best for screenshots and screensavers.
Thanks in advance.

Eduardo Elpidio:

The VNC server cannot be activated. I have tried so many times and it does not work.

Malcolm Harding:

From the RealVNC website:
https://help.realvnc.com/hc/en-us/articles/14110635000221-Raspberry-Pi-5-Bookworm-and-RealVNC-Connect#statement-0-0
If you run the raspi-config utility in the terminal and select the ‘Advanced’ section, you can change from Wayland to X11 (no, I don’t know either – but it works). VNC is then fine.
I wasted many happy hours finding this.

jh21:

tigervnc works

Robert M. Koretsky:

ZFS is easily installed on the Bookworm-based Raspberry Pi OS, on a Pi 4B. A big improvement over Bullseye, where ZFS installation was at first supported, then got axed for some reason.

ardecaple:

First of all, congrats on the enormous amount of work that you have done! But (you guessed..) ..
I’m having a few issues on Pi 5 that seem to be related to large monitors (4K) and/or the video driver.
1. the mouse is a bit laggy even when turned up to max.
2. It’s difficult to set up a reasonable set of display defaults. The large screen default is OK, but applications (e.g. Chromium) tend to use a smaller font for the menu bar.
3. Apps like Chromium can be set to ‘zoom’ to 175%, but this does not affect the menu bar, as noted above.
4. The mouse is _very_ laggy when inside some apps. The worst offender here is Visual Code, but Chromium is also affected. Following tips on the web, I turned off hardware acceleration on these apps, and this improves things a lot, but it is still slightly laggy.
5. Mathematica asks for activation on startup. This might be a video permissions issue; some people report that if Mathematica does not have access to some low-level video files, this occurs. However, the video driver implementation has changed, and one of these files is no longer present. The other possibility might be because I installed the OS with a different default user (not pi)
6. With the large-screen defaults for the main display, a VNC session then becomes unusable. Any chance of different settings for VNC?

Liz Upton:

Hmm – that mouse issue you’re seeing really doesn’t sound right. Can you let us know more about your setup please? If you can run htop in the terminal and tell us what you see, that’d be helpful.

Ardencaple:

It’s a Pi 5 8GB, with the new PSU and case.
The monitor is an LG 3840×2160 (ahem, so 8K not 4K… sorry).
Keyboard and mouse are no-name.
I’m not at my desk right now, but htop didn’t show any big peaks, and I think the temperature is less than 60°C. I’ll double-check tomorrow.

Tony King:

In Bookworm, does anyone know why WayVNC shuts down when you turn off the monitor on the server Pi? This makes it impossible to access it remotely without its monitor on.

Dick Bacon:

Getting my two screens set up as I like is proving a little problematic. “Appearance settings” and “Panel preferences” both allow positioning the Taskbar/Panel, but they do not cooperate. The result is that if I have HDMI1 on the left and HDMI2 on the right, and choose the Taskbar location as desktop 2, then on reboot it is correctly placed, but on sleep/wakeup it is wrong. How can I stop this happening, please?
