Technical background for: How to make the MacBook Air SuperDrive work with any Mac

Please note: this is the first, much too complicated way I tried (and succeeded) to get the MacBook Air SuperDrive to work with my MacBook Pro. In the meantime, I found a much better, safer and easier way to do it. I kept this description here for those interested in the technical details of searching for and eventually finding a solution. If you just want to make your MacBook Air SuperDrive work, please see this post, and don't confuse yourself with the techy details below.

Warning: this is a hack, and it's not for the faint of heart. If you do anything of what I'll describe below, you are doing it entirely at your own risk. If the description below does not make at least a bit of sense to you, I would not recommend trying the recipe at the end.

The story is this - a while ago I replaced the built-in optical disk drive in my MacBook Pro 17" with an OptiBay (in the meantime, there are also alternatives), which allows connecting a second hard drive, or in my case, an SSD.

To be able to continue using the SuperDrive (Apple's name for the CD/DVD read/write drive), the OptiBay came with an external USB case, which worked fine but was ugly. And I didn't want to carry that around, so I left it at home and bought a shiny new MacBook Air SuperDrive for the office.

It just didn't occur to me that this thing could possibly not just work with any Mac, so I didn't even ask before buying. I knew that many third-party USB optical drives work fine, so I just assumed the same would be true for the Apple drive. But I had to learn otherwise. This drive only works with Macs which, in their original form, do not have an optical drive: the MacBook Airs and the new Mac minis.

But why doesn't it work? Searching the net, among a lot of inaccurate speculation, I found a very informative blog post from 2008 which pretty much explains everything and even provides a hardware solution - replacing the Apple-specific USB-to-IDE bridge within the drive with a standard part.

However, I felt challenged to find a software solution. I could not believe that there was a technical reason for not being able to use that drive as-is in any Mac.

There are a lot of good reasons for Apple not to allow it - first and foremost avoiding the complexity of possibly having multiple CD/DVD drives, which would confuse users and create support cases.

So I thought it must be the driver intentionally blocking it, and a quick look into the console revealed that in fact it is, undisguised:

2011/10/27 5:32:37.000 PM kernel: The MacBook Air SuperDrive is not supported on this Mac.

Apparently the driver knows that drive, and refuses to handle it if it runs on the "wrong" Mac. From there, it was not too much work. The actual driver for Optical Disk Drives (ODD) is /System/Library/Extensions/AppleStorageDrivers.kext/Contents/PlugIns/AppleUSBODD.kext

I fed it to the IDA evaluation version, searched the strings for the message from the console, found where that text is used, and once I saw the code nearby it was very clear how it works - the driver detects the MBA SuperDrive, and then checks if it is running on a MBA or a Mini. If not, it prints that message and exits. It's one conditional jump that must be made unconditional (to tell the driver: no matter what Mac, just use the drive!). In i386 opcode this means replacing a single 0x75 byte with 0xEB. IDA could tell me which one this was in the 32-bit version of the binary, and the nice 0xED hex editor allowed me to patch it. Only, most modern Macs run in 64-bit mode, and the evaluation version of IDA cannot disassemble these. So I had to search the hexdump of the driver for the same code sequence (similar, but not identical hex codes) in the 64-bit part of the driver. Luckily, that driver is not huge, and there was a pretty unique byte sequence that identified the location in both the 32-bit and the 64-bit parts. So the other byte location to patch was found. I patched both bytes to 0xEB - and the MacBook Air SuperDrive was working instantly with my MBP!
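
To illustrate that locating step, here is a small Python sketch of how one could scan the binary for such a unique sequence and report where the trailing conditional jump sits. The signature bytes below are invented for illustration only - they are not the real sequence from AppleUSBODD.

    #!/usr/bin/env python3
    # Illustration only: the signature bytes are made up, NOT the real sequence
    # from AppleUSBODD. The sketch just shows the idea of finding the same code
    # sequence in both the 32-bit and 64-bit slices of the fat binary.
    SIGNATURE = bytes.fromhex("488b7d0875")  # hypothetical bytes ending in 0x75 (jne)

    with open("AppleUSBODD", "rb") as f:
        data = f.read()

    pos = 0
    while (pos := data.find(SIGNATURE, pos)) >= 0:
        # the conditional jump is the last byte of the signature in this sketch
        print(f"candidate jne opcode at file offset {pos + len(SIGNATURE) - 1:#x}")
        pos += 1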

Now for the recipe (again - be warned, following it is entirely at your own risk, and remember that sudo lifts all restrictions; you can easily and completely destroy your OS installation and data with it):

  1. Make sure you have Mac OS X 10.7.2 (Lion), Build 11C74. The patch locations are highly specific to this build of the driver; it is very unlikely they will work without modification in any other version of Mac OS X.
  2. get 0xED or any other hex editor of your choice
  3. Open a terminal
  4. Go to the location where all the storage kexts are (which is within an umbrella kext called AppleStorageDrivers.kext)
    cd /System/Library/Extensions/AppleStorageDrivers.kext/Contents/PlugIns
  5. Make a copy of your original AppleUSBODD.kext (to the desktop for now, store in a safe place later - in case something goes wrong you can copy it back!)
    sudo cp -R AppleUSBODD.kext ~/Desktop
  6. Make the binary file writable so you can patch it:
    sudo chmod 666 AppleUSBODD.kext/Contents/MacOS/AppleUSBODD
  7. Use the hex editor to open the file AppleUSBODD.kext/Contents/MacOS/AppleUSBODD
  8. Patch:
    at file offset 0x1CF8, convert 0x75 into 0xEB
    at file offset 0xBB25, convert 0x75 into 0xEB
    (if you find something other than 0x75 at these locations, you probably have another version of Mac OS X or of the driver. If so, don't patch - otherwise you're asking for serious trouble. A small script sketch after this recipe shows one way to check and apply these two byte changes.)
  9. Save the patched file
  10. Remove the signature. I was very surprised that this is all it takes to make the patched kext load:
    sudo rm -R AppleUSBODD.kext/Contents/_CodeSignature
  11. Restore the permissions, and make sure the owner is root:wheel, in case your hex editor modified them.
    sudo chmod 644 AppleUSBODD.kext/Contents/MacOS/AppleUSBODD
    sudo chown root:wheel AppleUSBODD.kext/Contents/MacOS/AppleUSBODD
  12. Make a copy of that patched driver to a safe place. In case a system update overwrites the driver with a new unpatched build, chances are high you can just copy this patched version back to make the external SuperDrive work again.
  13. Plug in the drive and enjoy! (If it does not work right away, restart the machine once).
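
For those who prefer a script over a hex editor, here is a minimal Python sketch of steps 7-9 (same caveats: it only makes sense for the 10.7.2 / 11C74 build, do the backup copy first, and still perform the _CodeSignature and permission steps from the recipe):

    #!/usr/bin/env python3
    # Minimal sketch of recipe steps 7-9: verify that both patch locations still
    # contain the conditional jump opcode (0x75) and only then replace it with an
    # unconditional jump (0xEB). Offsets are specific to the 10.7.2 / 11C74 build.
    PATH = "AppleUSBODD.kext/Contents/MacOS/AppleUSBODD"
    OFFSETS = (0x1CF8, 0xBB25)  # 32-bit and 64-bit patch locations

    with open(PATH, "r+b") as f:
        data = bytearray(f.read())
        for off in OFFSETS:
            if data[off] != 0x75:
                raise SystemExit(f"unexpected byte {data[off]:#x} at {off:#x} - "
                                 "wrong driver build, not patching")
        for off in OFFSETS:
            data[off] = 0xEB   # jne -> jmp short: use the drive no matter what Mac
        f.seek(0)
        f.write(data)
    print("patched - now remove _CodeSignature and restore permissions as above")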

PS: Don't ask me for a download of the patched version - that's Apple's code, the only way is DIY!

Secure Cloud Storage with deduplication?

Last week, Dropbox's little problem made me realize how much I was trusting them to do things right, just because they once (admittedly, long ago) wrote they'd be using encryption such that they could not access my data, even if they wanted to. The auth problem showed that today's Dropbox reality couldn't be further from that - apparently the keys are lying around and are in no way tied to user passwords.

So, following my own advice to take care of my data, I tried to understand better how the other services offering functionality similar to Dropbox actually work.

Reading through their FAQs, there are a lot of impressive-sounding crypto acronyms, but usually no explanation of the logic of how things work - simple questions like what data is encrypted with what key, in what place, and which bits are stored where, remain unanswered.

I'm not a professional in security, let alone cryptography. But I can follow a logical chain of arguments, and I think providers of allegedly secure storage should be able to explain their stuff in a way that can be followed by anyone thinking logically.

Failing to find these explanations, I tried the reverse. Below I try to follow the logic to find out whether and how adding sharing and deduplication to the obvious basic setup (private storage for one user) will or will not compromise security. Please correct me if I'm wrong!

Without any sharing or upload-optimizing features, the solution is simple: my (local!) cloud storage client encrypts everything, before uploading, with a sufficiently long password that I chose and nobody else knows. Result: nobody but myself can decrypt the data [1].

That's basically how encrypted disk images, password wallets, keychain apps etc. work. More precisely, many of them use two-stage encryption: they generate a large random key to encrypt the data, and then use my password to encrypt that random key. The advantage is performance, as the workhorse algorithm (the one that encrypts the actual data) can be one that needs a large key to be reasonably safe, but is more efficient. The algorithm that encrypts the large random key with my (usually shorter) password doesn't need to be fast, and thus can be very elaborate, to gain better security from a shorter secret.
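
As a minimal sketch of that two-stage scheme (assuming Python's third-party cryptography package; the algorithms and parameters are just one plausible choice, not what any particular product uses):

    # Two-stage encryption sketch: a random "file key" encrypts the data,
    # and a key derived from my password encrypts (wraps) that file key.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def encrypt_for_upload(plaintext: bytes, password: str) -> dict:
        # Stage 1: bulk-encrypt the data with a large random key (fast)
        file_key = AESGCM.generate_key(bit_length=256)
        data_nonce = os.urandom(12)
        ciphertext = AESGCM(file_key).encrypt(data_nonce, plaintext, None)

        # Stage 2: derive a key from my password (slow, many iterations)
        # and use it only to wrap the small file key
        salt = os.urandom(16)
        kek = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                         salt=salt, iterations=200_000).derive(password.encode())
        wrap_nonce = os.urandom(12)
        wrapped_key = AESGCM(kek).encrypt(wrap_nonce, file_key, None)

        # Everything returned here may live at the provider; the password never leaves me.
        return {"salt": salt, "wrap_nonce": wrap_nonce, "wrapped_key": wrapped_key,
                "data_nonce": data_nonce, "ciphertext": ciphertext}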

Next step is sharing. How can I make some of my externally stored files accessible to someone else, while still keeping them entirely secret from the cloud storage provider who hosts the data?

With two-stage encryption, there's a way: I can share the large random key for the files being shared (of course, each file stored needs to have its own random key). The only thing I need to make sure of is that nobody but the intended recipient obtains that key on the way. This is easier said than done. If the storage provider manages the transfer, for example by offering a convenient button to share a file with user xyz, this inevitably means I must trust the service about the identity of xyz. The best they can provide is an exchange that does not require the key to be present in decrypted form in the provider's system at any time. That may be an acceptable compromise in many cases, but I need to be aware that only a proof of identity established outside the storage provider's reach really rules out that they could possibly access my shared files. For instance, I could send the key in a GPG-encrypted email.
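
Continuing the sketch above, sharing then boils down to delivering the per-file random key to the recipient over a channel the provider cannot read - for example encrypted to the recipient's public key, in the spirit of the GPG-mail example. The RSA wrapping below is just an illustration; verifying whose public key it really is remains the hard part:

    # Illustrative continuation: hand the per-file key to one recipient by
    # encrypting it to their public key; the provider never sees it in the clear.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding

    def share_file_key(file_key: bytes, recipient_public_key) -> bytes:
        # recipient_public_key: an RSA public key object obtained and verified
        # out-of-band, i.e. outside the storage provider's reach
        return recipient_public_key.encrypt(
            file_key,
            padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                         algorithm=hashes.SHA256(), label=None))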

So the bottom line for sharing is: correctly implemented, it does not affect the security of the non-shared files at all, and if the key is exchanged securely and directly between the sharing parties, it even works without giving the provider any way to access the shared data. With the one-button sharing convenience we're used to, the weak point is that the provider (or a malicious hacker inside the provider's system) could technically forge identities and receive access to shared data in place of the person I actually wanted to share with. Not likely, but possible.

The third step is deduplication. This is important for the providers, as it saves them a lot of storage, and it is a convenience for users, because if they store files that other users already have, the upload time is near zero (there is no upload at all; the data is already there).

Unfortunately, the explanations around deduplication get really foggy. I haven't found a logically complete explanation from any of the cloud storage providers so far. I see two things that must be managed:

First, for deduplication to work, the service provider needs to be able to detect duplicates. If the data itself is encrypted with user-specific keys, the same file from different users looks completely different at the provider's end. So neither the data itself, nor a hash over that data, can be used to detect duplicates. What some providers seem to do is calculate a hash over the unencrypted file. But I don't really understand why many heated forum discussions seem to focus on whether that's ok or not. Because IMHO the elephant in the room is the second problem:

If files are indeed stored encrypted with a secret only the user has access to, deduplication is simply not possible, even if detection of duplicates can be made to work by sharing hashes. The only one who can decrypt a given file is the one who has the key. The second user who tries to upload a given file does not have (and must not have a way to obtain!) the first user's key for that file, by definition. So even if encrypted data of that file is already there, it does not help the second user. Without the key, that data stored by the first user is just garbage to him.

How can this be solved? IMHO all attempts based on doing some implicit sharing of the key when duplicates are detected are fundamentally flawed, because we inevitably run into the proof of identity problem as shown above with user-initiated sharing, which becomes totally unacceptable here as it would affect all files, not only explicitly shared ones.

I see only one logical way to do deduplication without giving the provider a way to read your files: by shifting from proof-of-identity for users to proof-of-knowledge for files. If I can present proof that I had a certain file in my possession, I should be able to download and decrypt it from the cloud - even if it was not me, but someone else, who actually uploaded it in the first place. Still, everyone else, including the storage provider itself, must be unable to decrypt that file.

I now imagine the following: instead of encrypting the files with a large random key (see above), my cloud storage client would calculate a hash over my file and use that hash as the key to encrypt the file, then store the result in the cloud. So the only condition for getting that file back would be having had access to the original unencrypted file once before. I myself would qualify, of course, but anyone (totally unrelated to me) who has ever seen the same file could calculate the same hash, and would qualify as well. However, for anyone to whom the file was a secret, it remains a secret [2].
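
This kind of scheme is usually called convergent encryption. A minimal sketch of the idea (my own illustration, not what any particular provider does): key and nonce are derived deterministically from the content, so identical files produce identical ciphertext and can be deduplicated.

    # Content-derived ("convergent") encryption sketch: the key is a hash of the
    # plaintext, so only someone who has had the file can recompute it, while
    # identical files encrypt to identical ciphertext and can be deduplicated.
    import hashlib
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def convergent_encrypt(plaintext: bytes):
        key = hashlib.sha256(plaintext).digest()             # proof-of-knowledge key
        nonce = hashlib.sha256(key).digest()[:12]            # deterministic on purpose
        ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
        storage_id = hashlib.sha256(ciphertext).hexdigest()  # provider can index duplicates by this
        return storage_id, ciphertext, key                   # the key stays with the user(s)

    def convergent_decrypt(ciphertext: bytes, key: bytes) -> bytes:
        nonce = hashlib.sha256(key).digest()[:12]
        return AESGCM(key).decrypt(nonce, ciphertext, None)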

I wonder if that's what cloud storage providers claiming to do global deduplication actually do. But even more I wonder why so few speak plainly. It need not be on the front page of their offerings, but a logically conclusive explanation of what is happening inside their service is something that should be in every FAQ, in one piece, and not just bits spread across some forum threads mixed with a lot of guesses and wrong information!

 

[1] Of course, this is possible only within the limits of the crypto algorithms used. But these are widely published and reviewed, as are their actual implementations. They are not absolutely safe, and implementation errors can make them additionally vulnerable. But we can safely trust that there are a lot of real cryptography experts' eyes on these basic building blocks of data security. So my working assumption is that the encryption methods used are good enough per se. The interesting questions are what trade-offs are made to implement convenience features.

[2] Note that all this does not help with another fundamental weakness of deduplication: if someone wants to find out who has a copy of a known file, and can gain access to the storage provider's data, he'll get that answer out of the same information which is needed for deduplication. If that's a concern, there's IMHO no way except not doing deduplication at all.

 

The only one who really cares about your data is you!

Once more, yesterday's Dropbox authentication bug shows a fundamental weakness of centralized services. Dropbox is just a high-profile example, but the underlying problem is that of unneeded centralisation.

Every teenager who starts using Facebook is told how important it is to wisely choose what to put online and what not, and to always be aware that nothing published on the internet can ever be completely deleted any more.

However, the way popular "cloud" services are built today unfortunately just ignores this, choosing a centralized implementation. Which means uploading everything first, and then trying (and sometimes sadly failing, like Dropbox yesterday) to protect that data from unauthorized access.

Why? Because it is easier to implement. Yes, distributed systems are much harder to design and implement. But choosing a centralized approach inevitably generates single points of failure. I really don't think we can afford that risk much longer.

It's not even only a technical problem. It's a mindset of delegating too much responsibility, which is fatal. Relying on centralized storage to be "just secure" is delegating responsibility to others - responsibility that those others are unlikely to live up to.

The argument often goes: it's too hard for a smaller company to run their own servers and keep them secure, so better leave that to the big cloud providers, who are the experts and really care. That's simply not true. Like everyone else, they care about their profit. If they lose or expose your data, they care about the PR debacle this might be for them, but not about the data itself. The only one who really cares about what happens to your data - is you.

Even assuming the service provider was able to keep your data safe, there's another problem. As we have heard again in the discussion about Dropbox's TOS, there are legal constraints on what a cloud service may and may not do. For instance, they may not store your data encrypted such that "law enforcement" cannot access it under certain conditions. Which means that Dropbox can't offer encryption based on really private keys (only you have the key to your data, they don't) even if they wanted to.

What they could do, and IMHO must do in the long term, is offer a federated system: giving you the choice to host the majority of your data in a place where you are legally allowed to use strong encryption with your entirely private keys, such as your own server. Only for sharing with others, and only with the data actually being shared, would smaller entities need to enter a bigger federation (which might be organized by a globally operating company).

That's how internet mail has always worked - no mail sent among members of an organisation ever needs to leave the mail servers of that organisation. Same for Jabber/XMPP. This should become true for Dropbox, Facebook, Twitter etc. as well. They should really start structuring their clouds, and give you the option to keep critical data yourself, without making this a decision against using the service at all.

Unfortunately, one of the few big projects that expressly had federation on the agenda, Google Wave, has almost (but not entirely) disappeared after a big hype in 2009. Sadly, it was most probably exactly the fact that they focused so much on federation and scalability, and not on polishing their web interface, that made it a failure in the eyes of the public.

Maybe we should really do away with that fuzzy term "the cloud" and start talking about small and big clouds, more and less private ones, and whether and how they should or should not interact.

Still, one of the currently most opaque clouds is ahead of us - Apple's iCloud. Nothing at all has been said in public about how security will work in iCloud. And from what was presented, it seems for now it will have no cross-account sharing features at all.

The only thing that seems clear is that all of our data will be stored in that huge datacenter in North Carolina, so I guess that using iCloud when it launches in a few months will demand total trust in Apple to get it right (and as said above - this is a responsibility nobody can really take).

On the other hand, Apple could be foresighted enough to realize the need for federation in a later step, for example allowing future Time Capsules to act as in-house cloud servers. After all, and unlike other players, Apple profits from selling hardware to us. And to base a speculation on my earlier speculation (still neither confirmed nor disproved), iCloud might be technically ready for that.

But whatever inherent motivation the big players may or may not have to improve the situation - it's up to us to realize there's no easy way around taking care of our data ourselves, and to ask for standards, infrastructure and services which make doing so possible.

iCloud sync speculation

Here's my last-minute technical speculation about what iCloud will be in terms of sync :-)

It'll be sync-enabled WebDAV on a large scale.

I spent the last 10 years working on synchronisation, in particular SyncML. SyncML is an open standard for synchronisation, created in 2000 by the then-big players in the mobile phone industry together with some well-known software companies.

SyncML remained a niche from a user perspective, despite the fact that almost every featurephone built in the last 9 years has SyncML built in. And despite the fact that Steve Jobs himself pointed out Apple's support for SyncML when he introduced iSync in July 2002 at Macworld NY.

As we have learnt by now, iSync (and with it, SyncML for the Apple universe) will be history with Lion. And featurephones are pretty much history as well, superseded by smartphones.

Unlike featurephones, smartphones never had SyncML built in (a fact that allowed me to earn my living by writing SyncML clients for these smartphone platforms...). The reason probably was that the vendors of the dominant smartphone operating systems, Palm and later Microsoft, already had their own proprietary sync technologies in place (HotSync, ActiveSync). Only Symbian was committed to SyncML, but, forced by the market share of ActiveSync-enabled enterprise software (Exchange), in 2005 they also licensed ActiveSync from Microsoft.

So did Apple for the iPhone. So did Google for Google Calendar mobile sync. Third-party vendors of collaboration server solutions did the same.

For a while, it seemed that the sync battle was won by ActiveSync. And by other proprietary protocols for other kinds of syncing, like Dropbox for files, Google for docs, and a myriad of small "cloud"-enabled apps which all do their own homebrew syncing.

Not a pleasant sight for someone like me who believes that seamless and standards-based sync is as basic for mobile computing as IP connectivity was for the internet.

However, in parallel another standard for interconnecting calendars (not exactly syncing, see below) grew - CalDAV. CalDAV is an extension of WebDAV which adds calendar-specific queries and other functionality. And WebDAV is a mature and widely deployed extension of HTTP that allows clients not only to read from a web server, but also to write to it. Apple has been a strong supporter of WebDAV for many years (iDisk is WebDAV storage), and is also a driving force behind CalDAV. Mac OS X 10.5 Leopard and iOS 3.0 support CalDAV. And more recently, Apple implemented CardDAV in iOS 4.0 and proposed it as an Internet Draft to the IETF, to support contact information the same way CalDAV does for calendar entries.

This is all long and well known, and CalDAV is already widely used by many calendaring solutions.

There's one not-so-well-known puzzle piece, however. I stumbled upon it a few months ago because I am generally interested in sync-related stuff. But only now did I realize it might be the Rosetta Stone for making iCloud. I did some extra googling today and found some clues that fit too nicely to be pure coincidence.

The puzzle piece is this: an IETF draft called "Collection synchronisation for WebDAV" [Update - by March 2012 it has become RFC 6578]. The problem with WebDAV (and CalDAV, CardDAV) is that it was designed as an access method, not a sync method. While it is well possible to sync data via WebDAV, it does not scale well with large sync sets, because a client first needs to browse through all the information available just to detect the changes. With large sync sets of possibly many hundred thousand files (think of your home folder), that simply doesn't work. The proposed extension fixes exactly this problem, and makes WebDAV and its derivatives ready for efficient sync of arbitrarily huge sync sets, by making the server itself keep track of changes and report them to interested clients.
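
To make this less abstract, here is roughly what one incremental sync round looks like on the wire per that draft - a sketch under assumptions: the URL, credentials and token are invented, and I use Python with the requests package just to issue the REPORT:

    # Sketch of one sync round per "Collection synchronisation for WebDAV" (RFC 6578).
    # The server returned a sync-token in the previous round; sending it back makes
    # the server list only the members changed or deleted since then, plus a new token.
    import requests

    SYNC_URL = "https://dav.example.com/calendars/me/default/"  # hypothetical collection
    last_token = "http://dav.example.com/sync/0042"             # empty string on the very first sync

    body = f"""<?xml version="1.0" encoding="utf-8"?>
    <D:sync-collection xmlns:D="DAV:">
      <D:sync-token>{last_token}</D:sync-token>
      <D:sync-level>1</D:sync-level>
      <D:prop><D:getetag/></D:prop>
    </D:sync-collection>"""

    response = requests.request("REPORT", SYNC_URL, data=body, auth=("user", "password"),
                                headers={"Content-Type": "application/xml; charset=utf-8"})

    # The multistatus response contains only changed/removed members and a fresh
    # <D:sync-token> to store for the next round - no need to crawl the whole collection.
    print(response.status_code)
    print(response.text)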

With this, a WebDAV-based sync infrastructure reaching from small items like contacts and calendar entries to large documents and files (hello Dropbox!) is perfectly feasible. Now why should iCloud be that infrastructure? That's where I started googling today for this blog entry.

I knew that the "Collection synchronisation for WebDAV" proposal was coming from Apple. But before I didn't pay attention to who was the author. I did now - it's Cyrus Daboo, who spent a lot of time writing Mulberry, an email client dedicated to make best possible use of the IMAP standard. Although usually seen as just another email protocol, IMAP is very much about synchronisation at a very complex level (because emails can be huge, and partial sync of items, as well as moving them wildly around within folder hierarchies must be handled efficiently), so Cyrus is certainly a true sync expert, with a lot of real-world experience. He joined Apple in 2006. Google reveals that he worked on the Calendar Server (part of Mac OS X server supporting CalDAV and CardDAV), and also contributed to other WebDAV related enhancements. It doesn't seem likely to me they hired him (or he would let them hire him) just for polishing the calendar server a bit...

Related to the imminent release of iCloud, I found a few events interesting: MobileMe users had to migrate to a new CalDAV-based Calendar by May 11th, 2011. And just a month earlier, Cyrus issued the "WebDAV sync informal last call" before submitting "Collection synchronisation for WebDAV" to the IETF, and noted that there are "already several client and server implementations of this draft now". And did you notice how the iOS iWork apps just got a kind of document manager with folders? After becoming WebDAV-aware only a few months ago?

So what I guess we'll see today:

  • a framework in both iOS 5 and Mac OS X Lion which nicely wraps WebDAV+"Collection synchronisation for WebDAV" in a way that makes permanent incremental syncing for all sorts of data a basic service of the OS that every app can make use of.
  • a cloud based WebDAV+Sync storage - the iCloud
  • a home-based WebDAV+Sync storage - new Time Capsules and maybe AirPorts
  • and of course a lot of Apple magic around all this. Just as Back to My Mac and FaceTime are clever mash-ups of many existing internet standards to make them work "magically", there will certainly be more to iCloud than just a WebDAV login (let alone all the digital media locker functionality many expect).

In about 5 hours we'll hopefully know...

How to Flattr charities?

Returning home from vacation, I found my (physical) mailbox filled mostly with requests from charities for money. Most of them I have donated to in the past and am willing to donate to again. But digging through that heap of dead-tree material made me angry, and made me realize more clearly than ever that there must be a better way for them to collect funds than that!

The motivation to donate comes from inspiring moments of reading, of being open to good thoughts and intents. These moments are mostly spoiled by the feeling of being forced to deal with heaps of letters, each trying to address me in a friendly way, but in their sheer number amounting to a nuisance of too much information at the wrong time and in the wrong medium.

What immediately came to my mind then was Flattr.

That's exactly the way I'd be more than happy (AND efficient!) to pay charities. I would like to answer each and every charity that sends me paper mail asking for money (or tries to urge me into a regular payment via those professionally enthusiastic young hired fundraising agents on the streets) with a suggestion to present their activities online (as many already do) and use Flattr to collect funds.

To find out if that could work, I read through the Flattr docs and was glad to find that they already support charities with a charity account status that has no fees. And subscriptions also fit nicely with the idea of supporting something on an ongoing basis.

I got stuck, however, in one regard: at least for me, and I assume for many others as well, donating to charities is, amount-wise, an entirely different category than donating to interesting web "things" like blog entries or podcasts.

For both, a monthly budget and the attention-based distribution thereof, as Flattr provides it, is perfect.

But the donation chunk size for charity projects is significantly different from that for blogs. I want to give more per click to the charity projects (but not a fixed amount, as the donation feature would already allow).

Presently, the only way I see to work around that would be having two Flattr accounts with two budgets. But that seems contrary to the entire Flattr idea of simply being logged in all the time to allow quick single-click donations.

So I tried to imagine what extension of Flattr's functionality could help. Basically, it boils down to an option to extend a donation beyond a single flattr, like subscriptions already provide on the time axis.

I'd imagine a flattr button that converts to "flattr more" instead of "subscribe". Clicking it would open a window like it does now, offering subscription (repeated donation) but additionally an option to donate a larger share of the budget, or a share of another budget.

The former (larger share) would be simple: just offer a multiplier, so I can flattr a thing 5x, 10x, 20x instead of just 1x.

The latter (different budget) is certainly more complicated. Users would need to have the option to add more budgets for different purposes to their accounts, which is probably confusing for many. But it would help to keep separate topics apart.

These are just two ideas for how it could work.

The point, however, is: I think the Flattr concept could revolutionize donations in many more areas (traditional charities are just one of them), but for that it needs to step beyond the current "all things are equal" mode, in one way or another.

Many jobs = prosperity?

A few thoughts prompted by "Sie schaffen Glück, keine Jobs" ("They create happiness, not jobs") by Philipp Löpfe (TA, 26.4.2011)

Engaging with the question of the value of social media, as the article promises, is something I would find quite exciting. Unfortunately, the whole argument seems to me to hang on an unexamined dogma: many jobs = prosperity.

But why, exactly, should hectic activity in itself secure prosperity?

First of all, the opposite is the case. Prosperity consists in people *not* having to slave away just to survive.

There are two ways to achieve that. First: let others do the work - in earlier times it was slaves, later machines with their energy consumption were added. And second: improve the methods, so that more can be achieved with less effort.

On both paths we have come a long way. But while the limits of enslavement and of wasting resources are showing themselves relentlessly everywhere, the potential for finding new methods to make better use of what exists, and then spreading them, is unlimited. Nature has been doing this for millions of years, and we call it evolution.

In this respect we are at an interesting point. Until very few years ago, biology held an absolute monopoly on information technology. Storage densities and replication mechanisms like those in genes, or processing capacities like those in brains, were technically unimaginable. Not so much any more today. That robots will therefore soon take over is, in my view, nonsense. But not that information technology has reached an order of magnitude that is relevant to (human) evolution.

Maybe I need to sharpen the point a little to make the thought clear: what else but a rapid development of the consciousness of all of humanity can still save us? And what else but the efficient spreading of knowledge and experience could contribute to that?

Oh, certainly, a big demand to make of Facebook & co. :-)

But even viewed less expansively - seriously measuring social media by how few people get to put in paid shifts in a data center because of it is absurd.

Or at least irrelevant, even in purely economic terms, compared to the effects of these information flows - for example in the many companies that build their marketing entirely on social media, or, for that matter, the hours frittered away with it at traditional workplaces (or was that already groundwork for a second career?). All the more so with a side glance at the recent events in the Middle East, which are probably economically more relevant than any conceivable number of jobs at Facebook.

By the way - that Twitter and Facebook are not giant operations is good news! It means they are not yet practically irreplaceable. Because if social media is to be good for anything, then in the medium term how the world communicates must not depend on Mr. Zuckerberg's mood!

Far-fetched? The App Store and the basic income

In almost every discussion of the unconditional basic income (German: BGE), the question comes up why the taxpayer should finance it. (What is a basic income? See for example here or here.)

Anyone who has looked at the financing models knows that with a basic income, not more would be redistributed, but above all in a different way.
But what does this "different" way (the unconditionality) offer to those who don't need a basic income today? They ask themselves why they should pay taxes and simply hope that people will do something sensible with them via the basic income, instead of insisting on control, i.e. on conditional welfare benefits.

That is a very central question, but in theory it can only be answered from the view of human nature held by whoever answers it. Whether humans are social or purely self-interested is something that can be argued about splendidly and endlessly.

All the more surprising it is to discover a basic principle of the basic income - not compensating work after the fact, but enabling it through an advance - nested in a completely unexpected place.

That place is the App Store, the online shop for add-on programs (apps) for the iPhone and iPad. In the two and a half years since it opened in the summer of 2008, a new kind of software market has emerged there that did not exist before. Previously, programs of this kind, as they run on a mobile phone, cost several dozen units of money. The App Store, however, started with an (arbitrary) premise that an app should cost not even a tenth of that, i.e. usually less than a cup of coffee.

The public was delighted, and some apps consequently sold so well that their makers got really rich even at the bargain price. Those are the gold-rush stories that were widely celebrated in the media.

The effort of writing an app, however, is no smaller than before. So the developers soon started to grumble. How is one supposed to live off these rock-bottom prices without landing a top hit? That was the tenor of the discussion about a year ago.

That question is of course not off the table. It is the question of who now gets the 90% by which the average price has dropped. Only those who sell 10 times more than before earn as much as they did then. And why should that be the case, except for a lucky few?

One answer: because customer behavior has changed. In the past (and still today in classic software sales), customers tried out 10 free demo versions and in the end bought the one they liked best, for, say, 20 units of money. All the others put effort into that potential customer but came away empty-handed. In the App Store, by contrast, for a given task people are much more likely to simply buy several app variants for one or two units of money each. One of them will turn out to be the right one. The total amount spent per customer and task is roughly the same.

One can see this casual app-buying merely as consumerism further fueled by new means. But I believe something is happening here that is worth a closer look in the context of the basic income. Because the casual buying that shows up here softens the classic 1:1 demand for a direct return for every sum of money spent.

I pay a small amount several times just to try things out, scattering money, so to speak, without knowing in advance whether I will really get the result I am looking for. But I do it trusting that, on average, someone will deliver the desired return and provide an app that meets my need. That in doing so I have also given money to 9 others who did not meet my need, perhaps even to someone who did really lousy work, bothers me little or not at all. So far: nothing lost - I still get my money's worth.

But what have I gained? Relief from the burden of control! No more fear of perhaps buying the wrong thing at a painfully high price. The concrete experience that on average, despite cheats, show-offs and freeloaders, a decent return on investment comes out of it. And I get to know more products - even if they are useless to me, I learn something about what others like.

Now, in itself this insight is rather trivial. Venture capitalists have always worked this way: 10 attempts, 9 run into the sand, but one succeeds and brings in more.

That this behavior is emerging in a highly competitive consumer market, whose customers have lately been credited with little more than a "Geiz ist geil" ("stinginess is cool") mentality, is what I find interesting.

And even more so the follow-up question: what led to this? In my opinion, essentially a single parameter - the arbitrary reduction of prices to a tenth. Or, seen the other way around: the fact that customers were simply given, potentially, 10 times the purchasing power, without conditions. But that did not lead to 90% of the revenue in this market simply collapsing, even though arithmetically one might well have feared (and indeed did fear) exactly that.

What has massively decreased is only the dependence of the individual customer on the individual vendor. Not, however, the willingness of the customers as a whole to give the vendors as a whole enough money. And not the share of their (classically earned through their own work!) income that individuals spend on it.

I do not want to present this observation as "proof" that a basic income will work. But I would like to encourage playing intensively with the claims and fears, even in seemingly far-removed contexts like this one.

For example, to spin the thought further and, using the two different contexts, to try to dissect the satisfaction we get from consumption. Where does the satisfaction lie? Where the deception, the addiction? Could it be the (supposed) independence, the autonomy of the purchase decision, that is so attractive? And if so, might it not be socially more efficient and healthier to actually grant this independence, instead of merely feigning it with ever more refined tactics while at the same time pushing people toward incapacitation?

As every year - a pouch update

A year ago the iPhone 3GS was new and needed a pouch, whose making I documented in this blog at the time. I haven't gotten around to much blogging since (good things take time, or something like that), but since yesterday I have had a brand-new iPhone 4 in front of me. So here is a very small pouch update.

The iPhone 4 is a little thinner, so the width of the "tube" you have to sew changes (as mentioned, detailed instructions for making such a pouch are here).

Instead of 69mm for the 3G/3GS it is 66mm for the iPhone 4, and instead of 73mm only 70mm. So 6mm less circumference in total:

Finished, it looks roughly like this:

Time required: about 30 minutes. Have fun!

Why doesn't this exist already? [August 2010: now it does]

FakePad

Why hasn't this kind of device existed for a long time already?

Since I have a MacBook Pro which allows multitouch gestures on the trackpad (especially scroll and zoom), I miss these a lot when I work on a desktop Mac.

The "photo" above is of course a very amateurish work of Photoshop editing my external keyboard and and the MacBook's trackpad together.

However, should I get access to a broken MacBook body with the trackpad still functional before Apple or someone else makes a real product like this, I'd probably try to create one myself.

As the internal trackpad is a regular USB device (only connected internally), all I'd have to do would be to connect it to a normal USB cable, cut the trackpad plus the needed frame material from the MB(P) body, and put everything together in a decent housing, probably made from a thick sheet of aluminium. I guess I'll be better at doing that in the workshop than in Photoshop…

Donations of broken MBP cover plates are welcome - pointers to external trackpad products that might already exist as well, of course! But remember, it's the multi-touch I'm looking for, not just an external trackpad.

[Update: just saw this product - although it is for PC only and looks ugly to me, it is a step in the right direction. Still, I guess Apple's rumoured multi-touch mouse is more likely to provide what I am looking for]

[Update2: Indeed, it looks like Apple just released (kind of) what I was looking for: The Magic Mouse.]

[Update3 - August 2010: The Magic Mouse was a first step, but apparently they have really listened (to me? ;-)) and thus created the Magic Trackpad. I already got one and yes, it's exactly what I wanted]

About CRE (Chaosradio Express)

I am writing this blog entry to bribe Tim Pritlove into releasing new episodes of his podcast "Chaosradio Express", or CRE for short, ahead of schedule. This is by explicit request, see his blog entry.

Now, the whole thing is of course a game with attention and with the possibilities of the net. Commissioned blogging? A virtual flash mob? Free advertising? Proving one's allegiance? Testing the community? Intentional or not, it has a bit of all of that. And for those of us who play along? I would be lying if I said attention wasn't part of it for me too - an opportunity to be linked into a context I genuinely admire. So for me this "bribery blogging" isn't half as bad - I'm not doing it selflessly.

But I would not be writing about it if I did not consider CRE a truly outstanding podcast.

Each episode is about one topic, sometimes purely technical, sometimes in the wider social context of technology topics. I especially appreciate that the podcast lasts as long as the topic has something to offer, sometimes one and a half, sometimes two and a half hours. No breaks, no music, just one or more guests whom Tim interviews.

Content-wise, I usually find both the purely technical and the societal topics very interesting - with the latter, as a Swiss, there is also a lot for me to learn about the political system in Germany (despite being a neighboring country with the same language, we know astonishingly little about it here).

I do think, though, that Tim as an interviewer has the most to offer on technical and technology-related topics - I rarely feel as well represented in an interview (in the sense that the guest gets asked the questions I would like to ask myself) as with really technical topics. For me it is exactly the right mix of recapping and precise follow-up questions. Tim's broad background of experience in the nerd domain makes the difference for me, as does his genuine interest (that's how it comes across, at least) in the topics, including the nifty details.

In that respect I am somewhat ambivalent when there is talk of broadening the range of topics for CRE too much. Especially if it were done only because "the market demands it". It may be an unrealistic wish (see below), but I hope Tim will continue to choose topics according to his own burning interests and treat them with corresponding depth - and not according to ratings.

That brings me to a point that also interests me a lot about CRE but has hardly come up so far - namely the way of working: how does one organize oneself and survive as an independent podcaster? Or how do the guests do it, for example alongside or through their open-source engagement? An episode on "working models for nerds"? I am simply curious what, for example, Tim's extremely dry "so siehts aus" ("that's how it is") as an answer to Götz Werner's question "Sie sind doch auch ein Freelancer?" ("You're a freelancer too, aren't you?") (in an episode of the dieGesellschafter.de podcast series about, among other things, the basic income) actually means.

CRE blog chain: previous (no. 13), next (no. 15)