Hacking a Linux PC at Close Range without Being Connected to a Network

The attack allows an attacker to execute arbitrary code on another PC running Linux. The exploit is possible due to an extremely serious vulnerability in the Linux Bluetooth stack (the "BleedingTooth" flaw in BlueZ). The attacker can literally run an application of his choice on the victim's PC. The exploit was found by Andy Nguyen, a security researcher at Google. More info here.

The attacker can execute anything he wants on a second PC running Linux. It is a zero-click exploit: the victim does not need to do anything in particular; the PC just needs to be on, with Bluetooth enabled and within range.

What do We Learn From This

I have never EVER in the last 20 years believed that Linux could possibly be a secure, trusted OS. The ecosystem is fundamentally flawed.

First, it is clear that no security engineer has ever been involved in the design and maintenance of Linux, or they were brought in too late. Linux lacks any sort of defense in depth, and too many privileges are aggregated in too few places. This is a fatal mix from which it may never recover.

Secondly, it is built around a dangerous, subversive ideology. It is based on the idea of free voluntary labor, which is in fact illegal in many countries, e.g. in France, but is tolerated (and frequently even promoted). Moreover, the developers themselves sometimes behave like total losers. Some developers commit professional suicide on day one, through the terms of the various so-called free software licenses they accept and promote. Then, all these super-naive workers ever ask for is to be popular and famous and to have their names mentioned, an acknowledgment they typically do not even get: their work is simply reused, and the authors are not always cited.

At the same time, other people make a lot of money by reusing their work to build and run powerful computer systems which are at the center of our economy and which are huge profit makers.

It is NOT true that if I share knowledge or some code with you, I lose nothing. There is an opportunity cost: human life is valuable, expertise is valuable. Almost every advanced business/tech activity is like this nowadays: it creates intangible goods which COULD be shared for free, or which COULD benefit from some sort of protection against theft and abuse.

In Linux we have an organised theft of intellectual property, and it is a conspiracy against the very coders who are making Linux. Developers are tricked into working unpaid for some shrewd manipulators.

Is Open Source Secure?

In fact, possibly the contrary can be claimed. Open source means that malicious code can be injected by anyone. The long history of Linux shows that preventive security engineering has failed at all times, and nobody noticed for ten years or so. See for example here.

Supply chain infiltration is an interesting attack against Linux, against which it is, by design and by ideology, not defended (or not well). We should not and cannot trust open source developers. If they are not paid "officially", why do they work so hard? One answer is, of course, passion and hidden subsidies. But another answer is that they are VERY likely to be recipients of some dark money from criminal or rogue state sources. Even when they are paid by Google, supposedly out of altruism, this was never altruism. It was manipulation and exploitation worse than child labour, because in fact it is slave labour in disguise. There is a huge imbalance of power and information, and the profits made by Google from technology developed and funded by others prove that the whole Linux community has probably been abused and infiltrated by influencer developers: Google contributes a bit, but of course it benefits a lot more. The profits, or rather the social and technical benefits, from Linux development are basically privatized, while the important work is supported by a larger unpaid community.

Facebook is a quite recent business. It started making money only around 2005, and not long ago nobody was quite sure how Facebook could ever be profitable. It succeeded because it literally hacked our society for its benefit: humans are hackable. It also hacked our political system (by lobbying politicians behind the scenes) and our legal system (the whole planet was tricked into accepting T&Cs based in California or similar). People were tricked into abandoning their sovereignty and massively relinquished the protection of their own governments' laws and regulators. Facebook and similar Internet giant corporations have in particular hacked our social instincts and enrolled billions of naive individuals into a powerful money-making machine.

In this process they were of course inspired by, and imitated, Linux! They simply extended this perverse and subversive model to a larger ecosystem of voluntary submission, digital censorship, manipulation and enslavement, for the sake of Facebook making a lot of money. Almost every aspect of our life is now prostituted so that some data-hungry Internet business can prosper at our expense. Transparency, or rather the one-way transparency of the underdog population, implies that security such as strong cryptography is problematic, as it could potentially threaten the transparency which is an immense money maker.

Strong cryptography needs to be channelled for the benefit of the rich and powerful, while the larger population should rather be building and running systems which are somewhat rigged. Many open source projects have been built with powerful influencer participation: people who have worked hard to deceive a larger group of contributors and developers about who needs these systems and particular features, how, and who will profit from exploiting them, which is mainly large corporations. Being naive, candid and generous contributors, and proud to be so, is at the very center of this world of community developer tech. The situation is similar to how the press has evolved in the last 20 years. Nearly 100% of the press worldwide is in the pockets of corporate sponsors, and journalists have very little freedom. The same applies to so-called benevolent computer tech. We are deceived about what we do, we have hidden sponsor participants with deep pockets, and yet we somewhat naively believe that this tech is going to be neutral (and not malicious).

An interesting question is what the impact of all this is on information security. Maybe open source is secure because bugs are likely to be discovered? In fact, opening your source code is sometimes just a placebo remedy in the area of security. Security bugs are of a subtle type; they are fundamentally extremely hard to find, and the amount of code to inspect and its complexity grow every day. We live in a world where a lot is hidden in plain sight, and we are given a fake sense of security.

The problem of supply chain infiltration is particularly acute in bitcoin, where we do not even know who the developers are; you go there at your own risk and peril, and no one is blamed when something bad happens. Even though the mysterious Satoshi wrote just 2% of the bitcoin code, all major and critical security decisions were made by this anonymous entity.

In reality, open source (e.g. the IBM PC, DES cryptography, SHA-256, etc.) is almost never here for security reasons. It is rather a business decision, which is precisely about managing the supply chain. Open source allows businesses and governments to collaborate. However, not all businesses and not all governments are equal: some benefit from this process, others are forced into submission and lose money. The winner takes it all, again and again.

For more critical discussion of open source, see slides 32-41 here. Open source is THE FAKE security mantra; the real security principle is open design [Saltzer and Schroeder 1975], and the two are NOT at all the same, see slide 51 here.

In 2005 Ross Anderson already claimed that open source and closed source are equivalent, see slide 57 here. Today, having learned a bit more from history, from all the elaborate security deceptions we have known, and from the dumb propaganda saying that Linux is very secure, for which we have fallen so easily for decades, we should probably be a bit wiser.

Open source software can be truly dangerous, cf. slide 38 here. It makes it very easy to modify the software, which cuts both ways: it lowers the entry barrier for improvements, but also for malicious versions (for example, there have been many malicious versions of TrueCrypt). We help simultaneously those who want to improve security (yet poorly funded) and those who want to degrade it (typically more motivated and better funded). Given the imbalance in funding and motivation, and also because hacking is more fun than just building things, quite possibly (this is a working hypothesis) those who want to degrade the security of various systems will always prevail.

ADDED in May 2021: researchers at the University of Minnesota studied how to insert malicious patches into the Linux kernel.

One Comment

  1. You seem not to know how code from an open source project is distributed to the user’s PC. You seem to assume a world where users browse the web, download programs and execute them, because that happens to be the habit in the Windows world (which inherited it from the world of DOS and floppies).
    TrueCrypt, although ported to many platforms, is typical Windows software and suffers from those habits of downloading a setup.exe from some dubious website and executing it with high privileges. That is not how a Linux user ever installs software, except with the source code of programs that he actually wants to study and modify.
    Now I owe you an explanation of how Linux users install software. I will give the example of Debian (Ubuntu is similar, with even one additional step of validation because it pulls from Debian).
    1. Free software projects like Linux, GNOME etc. host their code in repositories like Git. Git hashes the entire commit history cryptographically, so a release commit ID is proof of all changes.
    2. Debian pulls release versions from those repositories into the Debian source code repository, where they are reviewed. Always source code, never builds.
    3. Debian may add a number of distribution-specific patches where required (e.g. to fix a bug on an exotic CPU that the original author does not support), kept as separate files on the source server. Each patch requires an explanation.
    4. Packages for all CPU architectures are built from those identical source packages.
    4.1. Note that this process is reproducible. All parameters are chosen so that any user can rebuild any package from the source package and get an identical binary. This is a priority goal. Things like differing timestamps in binaries built from the same source are considered a bug. The Debian community is completely aware that transparency and independent validation are key to security.
    5. Those binary packages end up installed on the user’s system.
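    The reproducibility property in step 4.1 can be sketched in a few lines of Python. This is a toy model, not the real Debian toolchain: the "build" below is just a deterministic transformation, and the source snippet and parameter names are invented for illustration.

    ```python
    import hashlib

    def build(source: bytes, build_params: dict) -> bytes:
        # A toy deterministic "build": the output depends only on the source
        # and the pinned build parameters -- no timestamps, no randomness.
        canonical = repr(sorted(build_params.items())).encode()
        return hashlib.sha256(source + canonical).digest()

    def sha256_hex(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    # The distributor and an independent user build from identical inputs.
    source = b"int main(void) { return 0; }\n"
    params = {"arch": "amd64", "opt": "-O2"}

    official_binary = build(source, params)
    user_binary = build(source, params)

    # Because the build is reproducible, the digests match byte for byte,
    # so a user can verify a distributed binary without trusting the builder.
    assert sha256_hex(official_binary) == sha256_hex(user_binary)
    ```

    The point of the sketch: once builds are deterministic, verifying a binary reduces to rebuilding and comparing hashes, which anyone can do independently.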

    Any user at any time can validate the whole history of any program.
    He can easily identify any changes and patches, and rebuild from source he has checked in order to verify a binary.
    This is, by the way, done by many parties; for example, Google internally runs completely on Debian desktops. They mirror the entire Debian source history and do the build process themselves. All Google employees get Google-Debian workstations and need special permission to run Mac or Windows (e.g. developers of Windows apps).

    Example: GnuPG

    Source package listed here:

    There are four files:
    1. upstream source code
    2. upstream PGP signature
    3. Debian metadata files, including patches
    4. signature by the Debian maintainers

    Also available there:
    5. the Debian changelog
    6. the list of Debian patches, with detailed explanations including purpose and justification

    Not exactly security requirements, but also available there:
    7. copyright and license information
    8. build status for the different ports (CPU architectures)
    9. links to the upstream source
    10. etc.

    This is all used by the Debian toolchain of any Debian user to validate everything, all the time. For example, if you build a package yourself, the toolchain will pull the signed upstream GnuPG source and the signed Debian files, unpack both, and build from there.
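    The validation described above boils down to checking signed checksum manifests. Here is a minimal Python model of that idea; the file names, version numbers and file contents are invented, and the real .dsc format and OpenPGP signature verification are deliberately omitted.

    ```python
    import hashlib

    def sha256_hex(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    # Files as downloaded (in reality: the upstream tarball, the Debian
    # patches/metadata tarball, etc. -- names here are hypothetical).
    files = {
        "gnupg_9.9.orig.tar": b"upstream source code",
        "gnupg_9.9-1.debian.tar": b"debian patches and metadata",
    }

    # The (signed) manifest, analogous to the checksum lists in a .dsc file.
    manifest = {name: sha256_hex(data) for name, data in files.items()}

    def verify(downloaded: dict, manifest: dict) -> bool:
        # Every listed file must be present and hash to the recorded digest.
        return all(
            name in downloaded and sha256_hex(downloaded[name]) == digest
            for name, digest in manifest.items()
        )

    assert verify(files, manifest)

    # A tampered download is detected immediately.
    tampered = dict(files, **{"gnupg_9.9.orig.tar": b"backdoored source"})
    assert not verify(tampered, manifest)
    ```

    In the real toolchain the manifest itself is what carries the maintainer's PGP signature, so trust in all the individual files reduces to trust in one signed document.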

    There is some room for improvement, and here I agree with you very much. For example, how does the Debian project find enough maintainers who are also experts in the field they work on? A video codec maintainer should be a multimedia expert, and for security reasons the maintainers of GnuPG should be security experts. There actually is a lack of manpower in many areas. And errors happen; we all remember the Debian OpenSSL disaster.

    PS: These people are not unpaid. They work for Ubuntu, Google, Amazon, Facebook etc., literally every IT giant except Apple and Microsoft. (Probably Microsoft has a lot of Linux staff too, due to Azure cloud services.)
