Hacking a brand new Mac remotely, right out of the box

“Apple’s supply chain is one of the most closely monitored and analyzed in the world, both because of the control the company exerts and keen interest from third parties,” Lily Hay Newman reports for Wired. “But there’s still never a guarantee that a mass-produced product will come out of the box totally pristine. In fact, it’s possible to remotely compromise a brand new Mac the first time it connects to Wi-Fi.”

“That attack, which researchers will demonstrate Thursday at the Black Hat security conference in Las Vegas, targets enterprise Macs that use Apple’s Device Enrollment Program and its Mobile Device Management platform,” Newman reports. “These enterprise tools allow employees of a company to walk through the customized IT setup of a Mac themselves, even if they work in a satellite office or from home.”

“DEP and MDM require a lot of privileged access to make all of that magic happen. So when Jesse Endahl, the chief security officer of the Mac management firm Fleetsmith, and Max Bélanger, a staff engineer at Dropbox, found a bug in these setup tools, they realized they could exploit it to get rare remote Mac access,” Newman reports. “The researchers notified Apple about the issue, and the company released a fix in macOS High Sierra 10.13.6 last month.”

Read more in the full article here.

MacDailyNews Take: And, thanks to researchers like these, the Mac gets even more secure!

[Thanks to MacDailyNews Reader “Ladd” for the heads up.]

10 Comments

  1. There seem to be far more bugs in computing devices than are found in nature. And every update is a shifting compromise in the code, with inevitable human mistakes and the emergence of yet more bugs. It may take AI to come along and kill every bug, but if the bad guys also have AI, they can probably still figure a way back in.

    1. I understand what you’re getting at, but this doesn’t really make sense.

      Technology doesn’t really evolve, not in the same way that nature handles evolution. Technology is engineered. As part of that process, bugs are introduced. Bugs are also fixed as they are discovered.

      In nature, evolution happens precisely as a consequence of bugs, or mutations, being introduced. If a bug happens to make a species more fit for survival, and therefore more likely to reproduce, the bug simply becomes the new normal. It’s no longer a bug, if you will; it’s a feature. In fact, those that remain without the “feature” are suddenly the ones with the bug. They eventually die out.

      Consequently all of life on Earth is the result of bugs accumulating over 3.8 billion years. As such, we simply do not see the bugs in nature, because we see them as features.

      Technology is engineered when we desire to make it better. During that process bugs are introduced. Those that we cannot tolerate are fixed. Those that we can tolerate, or don’t notice, are not.

      While we are living proof that there are more bugs in nature, that is not to say that the number of bugs in technology isn’t considerable. It is the bugs that we tolerate or don’t know about that provide the attack vectors for hackers. Presumably, the more lines of code your system or application has, the more bugs it has.

      World of Warcraft has about 5.5 million lines of code.
      Windows has 50 million lines of code.
      macOS has over 90 million lines of code.

      The government’s health care website has over 500 million lines of code. I would make a joke, but it’s too easy.

      The human genome is said to contain about 3.3 billion base pairs of “code,” and that’s just humans.
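Those line counts invite a back-of-envelope estimate. As a hedged illustration only (the 15 defects per 1,000 lines figure is an assumed industry-average density, not a measurement of any system named above), the “more lines, more bugs” intuition can be put to numbers:

```python
# Back-of-envelope estimate: industry studies often cite roughly 1-25
# latent defects per 1,000 lines of shipped code (KLOC). The 15/KLOC
# midpoint used here is an assumption, not a measured figure for any
# of these systems.

DEFECTS_PER_KLOC = 15  # assumed average defect density

systems = {
    "World of Warcraft": 5_500_000,
    "Windows": 50_000_000,
    "macOS": 90_000_000,
}

for name, loc in systems.items():
    # Integer arithmetic: thousands of lines times defects per KLOC.
    estimated_defects = loc // 1000 * DEFECTS_PER_KLOC
    print(f"{name}: ~{estimated_defects:,} latent defects")
```

Even at the conservative end of that assumed range, tens of millions of lines imply a substantial population of undiscovered bugs, which is the attack surface the comment describes.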

      As far as AI is concerned, what if AI acted more like nature with certain types of systems? What if an AI was programmed to introduce mutations in a system to see what happens? I wouldn’t be surprised if that is happening right now.
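Something like that already exists in a primitive form: fuzz testing. A toy sketch of the idea, using an entirely hypothetical `parse_age` function as the system under test, randomly mutates a known-good input to “see what happens”:

```python
import random

random.seed(0)  # make the run reproducible

def parse_age(text: str) -> int:
    """Toy function under test (a hypothetical example, not real code)."""
    value = int(text)
    if value < 0:
        raise ValueError("age cannot be negative")
    return value

# Naive mutation fuzzer: randomly corrupt one character of a known-good
# input and count how many mutants the function refuses to accept.
seed_input = "42"
rejected = 0
for _ in range(1000):
    chars = list(seed_input)
    i = random.randrange(len(chars))
    chars[i] = chr(random.randrange(32, 127))  # random printable character
    try:
        parse_age("".join(chars))
    except ValueError:
        rejected += 1  # a failure mode the parser anticipates

print(f"{rejected} of 1000 mutated inputs were rejected")
```

Any exception other than the anticipated `ValueError` would propagate and crash the loop, which is exactly the kind of unhandled behavior a real fuzzer hunts for.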

    2. Interesting comparison/contrast – the evolution of computers and software versus evolution in nature.

      I suggest that one of the factors is diversity. The immense diversity of nature puts greater pressure on evolution to increase robustness. Nature also uses random elements, such as the way that DNA is split and recombined in reproduction, randomness in fertilization, and random mutations, to increase diversity. The likelihood of a successful evolutionary change is enhanced at the expense of an increase in evolutionary failures. On top of that, there are random events, such as natural disasters and meteoroid strikes, that can wipe out entire species and ecosystems.

      If we ever add an element of randomness to software that enables it to change and evolve over time, then perhaps we will have taken a concrete step towards actual AI.
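That randomness-plus-selection loop can be sketched in a few lines. This is a drastically simplified hill-climbing toy, not a real evolutionary system; the target string and alphabet are arbitrary choices for illustration:

```python
import random

random.seed(42)  # reproducible run

TARGET = "HELLO WORLD"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate: str) -> int:
    # Count positions that already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str) -> str:
    # Randomly change one character -- the "bug" that may become a "feature".
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

# Start from a random string; keep neutral or beneficial mutations,
# discard harmful ones (selection).
current = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while fitness(current) < len(TARGET):
    child = mutate(current)
    if fitness(child) >= fitness(current):
        current = child
    generations += 1

print(current)  # HELLO WORLD
```

Harmful mutations “die out” immediately while helpful ones accumulate, mirroring the bug-becomes-feature dynamic described earlier in the thread.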

      1. Nature can make mistakes in “coding” DNA (and sometimes those mistakes are actually helpful evolutionary changes), but it doesn’t have the luxury of fixing bugs except by breeding out the problem through natural selection.

        Mostly, my point about regular coding is that it’s a never-ending battle, with the humans at the helm never getting the code error-free. Every change yields yet more bugs and weaknesses to squash, introduced by unwitting humans. AI, or the promised “self-healing software,” will help, but will inevitably have its own unforeseen downsides. Someday this problem needs to be solved.

          1. And there’s the rub – coming up with a system that doesn’t all but guarantee new bugs with every iteration, whether from unintended consequences or missed errors. We need to take some of the error-prone humanness out of the equation. Right now the term “beta” is used only as an excuse to say “we know it ain’t perfect, caveat emptor.” When will that era end, if ever?
