Linus Torvalds explodes at Intel for its recent patches

Below is the raw mail from Linus Torvalds commenting on Intel's recent patches intended to mitigate the Spectre and Meltdown vulnerabilities.
Have fun reading this...
From: Linus Torvalds <>

Date: Sun, 21 Jan 2018 13:35:59 -0800

Subject: Re: [RFC 09/10] x86/enter: Create macros to restrict/unrestrict Indirect Branch Speculation

On Sun, Jan 21, 2018 at 12:28 PM, David Woodhouse <dwmw2@infradead.org> wrote:
> On Sun, 2018-01-21 at 11:34 -0800, Linus Torvalds wrote:
>> All of this is pure garbage.
>>
>> Is Intel really planning on making this shit architectural? Has
>> anybody talked to them and told them they are f*cking insane?
>>
>> Please, any Intel engineers here - talk to your managers.
>
> If the alternative was a two-decade product recall and giving everyone
> free CPUs, I'm not sure it was entirely insane.

You seem to have bought into the Kool-Aid. Please add a healthy dose
of critical thinking. Because this isn't the kind of Kool-Aid that
makes for a fun trip with pretty pictures. This is the kind that melts
your brain.

> Certainly it's a nasty hack, but hey — the world was on fire and in the
> end we didn't have to just turn the datacentres off and go back to goat
> farming, so it's not all bad.

It's not that it's a nasty hack. It's much worse than that.

> As a hack for existing CPUs, it's just about tolerable — as long as it
> can die entirely by the next generation.

That's part of the big problem here. The speculation control cpuid
stuff shows that Intel actually seems to plan on doing the right thing
for meltdown (the main question being _when_). Which is not a huge
surprise, since it should be easy to fix, and it's a really honking
big hole to drive through. Not doing the right thing for meltdown
would be completely unacceptable.

So the IBRS garbage implies that Intel is _not_ planning on doing the
right thing for the indirect branch speculation.

Honestly, that's completely unacceptable too.
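
For anyone who wants to poke at the bits being argued about here, below is a rough userspace sketch of probing them (illustrative only, not kernel code; the CPUID and MSR bit positions follow Intel's documentation, and reading the MSR needs root plus "modprobe msr"):

```c
/*
 * Sketch: probing the speculation-control bits discussed above.
 * CPUID leaf 7 advertises IBRS/IBPB, STIBP, and the presence of the
 * IA32_ARCH_CAPABILITIES MSR; that MSR carries RDCL_NO (bit 0,
 * "Meltdown already fixed in hardware") and IBRS_ALL (bit 1).
 */
#include <cpuid.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define MSR_IA32_ARCH_CAPABILITIES 0x10a

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
        return 1;

    printf("IBRS/IBPB (SPEC_CTRL):  %s\n", (edx & (1u << 26)) ? "yes" : "no");
    printf("STIBP:                  %s\n", (edx & (1u << 27)) ? "yes" : "no");
    printf("ARCH_CAPABILITIES MSR:  %s\n", (edx & (1u << 29)) ? "yes" : "no");

    if (edx & (1u << 29)) {
        uint64_t caps;
        int fd = open("/dev/cpu/0/msr", O_RDONLY);

        if (fd >= 0 && pread(fd, &caps, sizeof(caps),
                             MSR_IA32_ARCH_CAPABILITIES) == sizeof(caps)) {
            printf("RDCL_NO  (no Meltdown): %s\n", (caps & 1) ? "yes" : "no");
            printf("IBRS_ALL:               %s\n", (caps & 2) ? "yes" : "no");
        }
        if (fd >= 0)
            close(fd);
    }
    return 0;
}
```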

> So the part I think is odd is the IBRS_ALL feature, where a future
> CPU will advertise "I am able to be not broken" and then you have to
> set the IBRS bit once at boot time to *ask* it not to be broken. That
> part is weird, because it ought to have been treated like the RDCL_NO
> bit — just "you don't have to worry any more, it got better".

It's not "weird" at all. It's very much part of the whole "this is
complete garbage" issue.

The whole IBRS_ALL feature to me very clearly says "Intel is not
serious about this, we'll have an ugly hack that will be so expensive
that we don't want to enable it by default, because that would look
bad in benchmarks".
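
To make the complaint concrete: on existing parts the IBRS bit in IA32_SPEC_CTRL only restricts speculation for code that runs after the write, so the kernel would pay an MSR write on every entry and exit, while an IBRS_ALL part lets you set it once at boot. A conceptual ring-0 sketch (illustrative function names, not actual kernel code):

```c
/*
 * Conceptual ring-0 sketch (illustrative names, NOT the actual kernel
 * code) of why legacy IBRS is so expensive and why IBRS_ALL is still
 * an opt-in.  IA32_SPEC_CTRL is MSR 0x48; bit 0 is IBRS.
 */
#include <stdint.h>

#define MSR_IA32_SPEC_CTRL 0x48
#define SPEC_CTRL_IBRS     (1ULL << 0)

static inline void wrmsr(uint32_t msr, uint64_t val)
{
    /* privileged instruction: this only works in ring 0 */
    __asm__ volatile("wrmsr"
                     :: "c"(msr), "a"((uint32_t)val), "d"((uint32_t)(val >> 32)));
}

/*
 * Legacy IBRS: the bit only protects code running after the write,
 * so it must be set on every kernel entry and cleared on every exit,
 * an MSR round-trip per syscall/interrupt.
 */
void kernel_entry_legacy_ibrs(void) { wrmsr(MSR_IA32_SPEC_CTRL, SPEC_CTRL_IBRS); }
void kernel_exit_legacy_ibrs(void)  { wrmsr(MSR_IA32_SPEC_CTRL, 0); }

/*
 * IBRS_ALL: write the bit once at boot and leave it set.  The CPU
 * still has to be *asked* not to be broken, which is the contrast
 * with RDCL_NO that the mail complains about.
 */
void boot_ibrs_all(void) { wrmsr(MSR_IA32_SPEC_CTRL, SPEC_CTRL_IBRS); }
```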

So instead they try to push the garbage down to us. And they are doing
it entirely wrong, even from a technical standpoint.

I'm sure there is some lawyer there who says "we'll have to go through
the motions to protect against a lawsuit". But legal reasons do not make
for good technology, or good patches that I should apply.

> We do need the IBPB feature to complete the protection that retpoline
> gives us — it's that or rebuild all of userspace with retpoline.

BULLSHIT.

Have you _looked_ at the patches you are talking about?  You should
have - several of them bear your name.

The patches do things like add the garbage MSR writes to the kernel
entry/exit points. That's insane. That says "we're trying to protect
the kernel".  We already have retpoline there, with less overhead.

So somebody isn't telling the truth here. Somebody is pushing complete
garbage for unclear reasons. Sorry for having to point that out.

If this was about flushing the BTB at actual context switches between
different users, I'd believe you. But that's not at all what the
patches do.
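
For contrast, the approach Linus says he would accept looks roughly like this: an IBPB (indirect branch prediction barrier, a write to the IA32_PRED_CMD MSR) issued only when the scheduler switches between mutually distrusting processes. A conceptual ring-0 sketch with illustrative names, not actual kernel code:

```c
/*
 * Conceptual ring-0 sketch: issue an IBPB only when crossing a real
 * trust boundary at context-switch time, instead of on every kernel
 * entry/exit.  IA32_PRED_CMD is MSR 0x49; writing bit 0 triggers the
 * barrier.
 */
#include <stdbool.h>
#include <stdint.h>

#define MSR_IA32_PRED_CMD 0x49
#define PRED_CMD_IBPB     (1ULL << 0)

static inline void wrmsr(uint32_t msr, uint64_t val)
{
    /* privileged instruction: ring 0 only */
    __asm__ volatile("wrmsr"
                     :: "c"(msr), "a"((uint32_t)val), "d"((uint32_t)(val >> 32)));
}

struct mm;  /* stand-in for the kernel's per-process address space */

/* hypothetical hook on the context-switch path */
void switch_mm_maybe_ibpb(struct mm *prev, struct mm *next,
                          bool next_trusts_prev)
{
    /*
     * Flush branch predictor state only when the outgoing and
     * incoming processes differ and do not trust each other; the
     * common same-process switch skips the expensive barrier.
     */
    if (prev != next && !next_trusts_prev)
        wrmsr(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
}
```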

As it is, the patches are COMPLETE AND UTTER GARBAGE.

They do literally insane things. They do things that do not make
sense. That makes all your arguments questionable and suspicious. The
patches do things that are not sane.

WHAT THE F*CK IS GOING ON?

And that's actually ignoring the much _worse_ issue, namely that the
whole hardware interface is literally mis-designed by morons.

It's mis-designed for two major reasons:

 - the "the interface implies Intel will never fix it" reason.

   See the difference between IBRS_ALL and RDCL_NO. One implies Intel
will fix something. The other does not.

   Do you really think that is acceptable?

 - the "there is no performance indicator".

   The whole point of having cpuid and flags from the
microarchitecture is that we can use those to make decisions.

   But since we already know that the IBRS overhead is _huge_ on
existing hardware, all those hardware capability bits are just
complete and utter garbage. Nobody sane will use them, since the cost
is too damn high. So you end up having to look at "which CPU stepping
is this" anyway.

I think we need something better than this garbage.

                Linus
