NASA released a plan the other day to build a manned space station orbiting the moon. I’ve already seen a lot of talk about how bad a plan it is. And it is a pretty poor plan – but not for the reasons everyone says. A lunar orbiting base isn’t stupid in and of itself. It’s only a bad idea because of how NASA’s doing it. The critics say that this won’t produce enough science. They have it exactly backwards. This station produces too much science – like everything else NASA does.
Understand something important: NASA is really, really good at science. They do a lot of wonderful work. I have friends and family who do some of this work for NASA, and it’s brilliant. But NASA’s focus on science prevents the agency from focusing on what should be its primary mission: making access to space regular, easy, and cheap.
The biggest cost contributor, launch costs, will already fall dramatically over the next ten years. The private space race and companies like SpaceX, Virgin Galactic, and Blue Origin are already winning that battle. SpaceX’s rocket system is already far cheaper than its competitors’, and as they make it more and more reusable it will become even cheaper.
But launch costs still won’t become “trivial.” As such, we’ll need to ensure that we’re using the mass we launch effectively. And the best way to do that – as I’ve noted before – is to build space infrastructure.
That is what NASA’s primary mission should be. Private industry will likely redo everything NASA does on the infrastructure front – and do it better and cheaper. Eventually. But planting the seed of that infrastructure would have huge payoffs.
One core piece of that infrastructure, as I’ve also discussed before, is a system for Earth-moon transit. And that system should largely consist of a ferry that travels only between two space stations – one in Earth orbit, and one in lunar orbit. We already have a station in Earth orbit, so NASA’s new lunar orbit station could fulfill the second role, right?
Possibly. But it would be a pretty crappy system if we built it that way, even by government standards. Both stations really need to serve two purposes, and only two purposes: refueling the vehicles that dock there, and transferring cargo and passengers between craft.
Basically, we need two giant truck stops in the sky.
The ISS is horrible at both of these tasks. It wasn’t built for them – it was built to do science. And NASA’s new lunar orbit station looks poised to be built for science, also. As others have complained, there’s not enough science it could do to justify the cost.
But if we built it to support infrastructure, then the future science done – not by it, but by those who use it as a layover – could more than justify the cost.
Alas, NASA is too good at science to follow the better path.
I have completed the server migration for all five sites (this blog, Morgon’s blog, Spirit Made Steel, Silver Empire, and Lyonesse). It’s possible that due to the vagaries of DNS you might still see one of the old sites on your end through tomorrow, but that should be it.
I’ve also spent a lot of time working with various caching and optimization plugins for WordPress. I’m not quite done with everything yet. However, the dojo site has achieved the Holy Grail: a 100 score on Google’s PageSpeed Insights test. I’ve read guides that have said that’s nearly impossible with WordPress and that it’s a waste of time to try. In fact, it’s all just a matter of getting the right caching and optimization plugins and tuning them correctly. I’ll have a guide up for how to do it within the next few weeks.
I’ve replicated some of that on the other sites already – they should be loading MUCH faster than before. I’ll be finishing that up over the next few days.
I currently run five web sites: my blog, my wife’s blog, my dojo web page, Silver Empire, and Lyonesse. My old hosting provider had one gigantic advantage when I first signed with them: they were cheap. Nowadays, there are lots of other providers in a similar price range, and some of them provide far better service. I’ve been eyeballing a server migration for some time now. The preparation for launching Lyonesse didn’t quite force me to go ahead and migrate, but migrating now does have some major technical and logistical advantages.
In any event, all five sites will be migrating over the next few days. Silver Empire and the Lyonesse placeholder have already moved, although due to the vagaries of DNS you might still be getting the old server for another day or so. If this is the newest message you see, then you’re still seeing the old server for this blog. The others will move at various times, but the entire process should be finished by Monday.
Side note for my various authors and other folks: my Spirit Made Steel and Silver Empire e-mail addresses may have minor hiccups over the next few days as part of the transfer. I should still get everything, but there might be a mild delay. If you have my straight gmail address, that one should work fine throughout the transition.
CNN tells us tonight that Russia has recently tested an anti-satellite weapon.
The US tracked the weapon and it did not create debris, indicating it did not destroy a target, the source said.
The Russian test, coming as President-elect Donald Trump prepares to enter the White House next month, could be seen as a provocative demonstration of Moscow’s capability in space.
Russia has demonstrated the ability to launch anti-satellite weapons in the past, including its Nudol missile.
Emphasis is mine, and not in the original article. I remain suspicious that DMSP-13 was shot down by the Russians, although it is clearly unconfirmed at this point. Nevertheless, we know that Russia and China both badly want anti-satellite technology. They have invested quite a bit of money and time into various technologies for it. They know quite well that US space technology puts them at a huge strategic disadvantage. Both nations desperately want to eliminate that advantage if at all possible.
CNN strongly implies that Putin used this test to demonstrate capability and intimidate the incoming President-elect. I remain convinced that Russia has done so before, more than once. I have zero doubt that they will do so again.
What’s the right move for the US? Developing countermeasures is expensive and clunky, so that’s probably what we will do. What we should do is focus on lowering launch costs so that we can replace satellites so cheaply that destroying them is of little gain. The current “commercial space race” is already meeting success on that front. We should do all we can to speed the process. Of course, a little bit of space infrastructure wouldn’t hurt, either.
I ran across this in my Twitter feed this morning:
Please let hyper loops be real. Please, please please let hyper loops be real.
— Megan McArdle (@asymmetricinfo) May 12, 2016
The funny thing is, I found this other article about why hyperloops won’t work.
But I think there are a number of problems with this. First of all, many of the people flying between Dallas and Houston are not actually ending up in those cities; they’re going somewhere else, because Dallas is a major hub. When I want to fly up to see my family in upstate New York, I don’t take Amtrak to Penn Station and then trek out to LaGuardia, even though I much prefer rail travel to air travel. So high speed rail doesn’t readily substitute for air travel unless you have a lot of connections running out of Dallas. I don’t think it’s an accident that the two places in America where rail kind of works–the northeast corridor, and the LA-San Diego route–are coastal runs where the regional links run down a basically straight line. And the reason that they are conveniently in a straight line is that both regions happen to be sandwiched on a narrow strip between the coastline and a big mountain range that limited inland development during the formative years. In the middle of the country, where you need to add an east-west axis to your planning, things rapidly get more expensive.
The other reason I don’t think that rail is going to compete with air in most places is the very thing that makes air travel so environmentally problematic: frequency of service. For high speed rail–or any sort of rail, really–to be an environmental boon, the trains have to run pretty full.
But wait, you say. Ms McArdle (yes, the same Ms McArdle) is talking about high speed rail, not hyperloops!
From a technological perspective, hyperloops are new and cool and really awesome and totally not trains. But from a business perspective, a hyperloop is just a glorified train. It moves a lot faster and it’s more efficient, but those aren’t really the problems with trains. Maybe speed is, but we already have a faster-than-trains alternative: it’s called the airplane, and there’s already a lot of infrastructure in place for it in the US.
But trains are already pretty efficient, especially electric trains. The reason we don’t have more of them in the US is because the infrastructure cost is too high. For hyperloops to become a thing (outside of a few limited areas), there would have to be trillions (yes, trillions) of dollars worth of loop built. If you don’t believe me on the cost, check out the latest highway bill – and remember that that’s just for maintenance, not for building the whole Interstate Highway System from scratch. Trillions of dollars isn’t exactly the kind of fixed cost that you recoup quickly or easily.
What about the speed? Well, what about it? The Concorde had speed, too, and it didn’t catch on either. The thing is, I can already cross the country coast to coast in about five or six hours. There are very few reasons why I’d need to do it faster. I might like to, sure – but not enough to pay twice as much to do it. Some businessmen might, but they already have a nice way to do it faster: private jets that can go point-to-point and shave an hour or two off of that (more if you factor in layovers and TSA checkpoints).
The thing is, the faster you’re already going, the less of an advantage more speed is. If you can cross the country in four hours, you’d have to double your speed to make it really worth paying more. Even then… how many of us value our time so highly that shaving two hours off of a four hour trip is worth thousands of dollars? Again, not many. The extra speed just isn’t worth much. Which means that hyperloops would be competing with an industry that’s already hyper-competitive.
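That diminishing return is just arithmetic; a quick sketch makes it concrete (the distance and speeds here are illustrative round numbers, not airline data):

```python
# Time saved by doubling speed shrinks as the baseline speed rises.
# Distance and speeds are illustrative round numbers.
distance = 2500.0  # rough coast-to-coast distance, miles

for speed in [500, 1000, 2000, 4000]:  # mph
    time_now = distance / speed
    saved = time_now - distance / (speed * 2)
    print(f"{speed:>5} mph: trip takes {time_now:.2f} h; doubling saves {saved:.2f} h")
```

At 500 mph, doubling your speed saves two and a half hours; at 2,000 mph, the same doubling saves well under an hour. Each doubling buys half as much time as the last one.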
The tech is cool. But the market simply isn’t there.
Nate Silver first made a name for himself by using statistics to make sports predictions. But like most, I became aware of him after he accurately called 49 out of 50 states in the 2008 Presidential election. His fame rose when he called the 2012 election accurately as well, despite many on the right not quite having faith in his numbers.
The core of his technique is nothing magical, although neither will I shortchange him by calling it “obvious” as so many people are wont to do after somebody clever does something new. It’s obvious in hindsight; it wasn’t so obvious before he did it. He’s published a general outline of his methodology after every Presidential election, and you don’t have to be an actual statistician to follow what he’s doing. You do, however, need to have a basic understanding of the underlying statistical methodology. Any undergraduate stats course should let you follow along – conceptually, at least, if not in the details.
Before I dive into my main point, let me emphasize that Mr. Silver’s methodology will work brilliantly the vast majority of the time. His methodology is just about as truly data driven as it’s possible to be. He uses the best data that’s out there. And given his reputation, he can now get access to that data easily. He also uses standard and sound statistical methods.
However, the day will eventually come when Nate Silver will fail – and when it does, he will fail big.
To understand why, we first need to have a basic understanding of his methods. A decade or two ago, somebody had the keen insight that although any individual poll taken during an election season had to be taken with a huge grain of salt, if you average all the polls together you end up with numbers that are pretty reliable. I’m not sure who had the eureka moment first, but Real Clear Politics popularized the concept with their RCP poll average in the early 2000s and it’s been a staple of politics ever since.
Mr. Silver took the concept even further and improved upon it in several ways.
First, he realized that in Presidential politics it was the state polls that mattered – not the national polls. So he computed polling averages for each individual state.
Second, he did historical analysis of each polling company and concluded that some were more reliable than others. He quantified this using standard statistical techniques, and then adjusted his averages by weighting each poll according to its historical reliability. This alone is a big improvement over the RCP model, and its validity shouldn’t be discounted.
Third, he added other factors into his model: the general state of the economy and how it favors the incumbent; endorsements; experience of the candidate; and several other factors. The predictive value of these factors is lower, so they’re weighted less in his model – but their value still counts.
Fourth, he improved the whole thing by running Monte Carlo simulations. This is also a giant improvement over the RCP average. Basically, it works like this: you write a simple computer program that takes the poll numbers as given and, using the model you’ve devised (in this case, points 1 through 3 above), simulates a given election. With the polls, endorsements, etc. as given, you also account for some randomness in the actual results. To do this, you account for the historical error of the polls – if a candidate is polling at, say, 45%, then history might suggest that his actual vote could be anywhere from 40% to 50%, and you can compute a probability curve that matches that range.
Then you run this simulation – a lot. Thousands of times or tens of thousands of times. Let’s say you run it ten thousand times, and out of those ten thousand times, Candidate A wins the election five thousand times: exactly half. You then say that candidate has a 50% chance of winning the election.
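A toy version of such a simulation is easy to write. This sketch is mine, not Mr. Silver’s actual model; the states, electoral votes, margins, and error size are all invented for illustration:

```python
import random

# Toy Monte Carlo election simulation. Each state's poll margin is
# perturbed by normally distributed polling error; Candidate A wins
# a simulated election by reaching 270 electoral votes.
states = {
    # state: (electoral_votes, candidate_A_poll_margin_pct) -- made up
    "Solid A": (200, 8.0),
    "Lean A": (60, 2.0),
    "Tossup": (40, 0.0),
    "Lean B": (58, -2.5),
    "Solid B": (180, -9.0),
}
POLL_ERROR_SD = 3.0  # assumed historical polling error, in points
RUNS = 10_000

random.seed(42)
a_wins = 0
for _ in range(RUNS):
    ev = sum(votes for votes, margin in states.values()
             if margin + random.gauss(0, POLL_ERROR_SD) > 0)
    if ev >= 270:
        a_wins += 1

print(f"Candidate A wins {a_wins / RUNS:.1%} of simulations")
```

Note that each state’s polling error is drawn independently here; real-world errors are often correlated across states, which matters a great deal.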
The methodology is pretty sound. But it has some serious flaws, and because of these, eventually Nate Silver will fail. Here are the problems.
First, the model requires that the input polling data be good. If the polls aren’t good, then Silver’s model isn’t any good either. Note that it doesn’t require any individual poll to be perfect. But it does require a few things. Each poll should be generally within or close to its historical margin of error. The polls should be canceling out each other’s errors. In other words, if one poll gets Candidate A’s share of the vote too high, a competitor’s poll should get it too low. If both polls are wrong in the same direction, then averaging them doesn’t help.
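That last point – errors failing to cancel – is easy to demonstrate. In this sketch (my own toy numbers), ten polls of a candidate whose true support is 50% are averaged, first with independent errors and then with a shared industry-wide bias added:

```python
import random

# Averaging polls helps only when their errors are independent.
random.seed(0)
TRUE = 50.0   # candidate's true support, percent
N_TRIALS = 5000
N_POLLS = 10

def avg_error(correlated: bool) -> float:
    """Average absolute miss of the 10-poll average across many trials."""
    total = 0.0
    for _ in range(N_TRIALS):
        shared = random.gauss(0, 3)  # industry-wide bias, e.g. herding
        polls = []
        for _ in range(N_POLLS):
            noise = random.gauss(0, 3)  # poll's own sampling error
            polls.append(TRUE + (shared + noise if correlated else noise))
        total += abs(sum(polls) / N_POLLS - TRUE)
    return total / N_TRIALS

print(f"independent errors: average miss {avg_error(False):.2f} points")
print(f"correlated errors:  average miss {avg_error(True):.2f} points")
```

With independent errors, the average of ten polls misses by well under a point; add a shared bias and the average misses by roughly the size of that bias, no matter how many polls you stack up.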
There is strong evidence – even documented by Silver himself – that the polls are getting worse. Indeed, the polling companies are having so much trouble that Gallup has stopped polling the Presidential races altogether. There’s also evidence that the polls have started to weight their data so that they match more closely to other polls. That skews their value and makes them less reliable. So the polls themselves are a problem – and a growing one.
Second, polling long before an election is hugely inaccurate. Accuracy increases greatly the closer a poll is taken to the actual election. This is why Mr. Silver’s 2008 and 2012 predictions weren’t magic: the “predictions” relied on polls taken within days of the election. With respect to Mr. Silver, this accomplishment isn’t as big as many made it out to be. At that point, the polls are generally pretty accurate. His achievement was simply to look at the right polls.
To be fair to Mr. Silver, he’s quite aware of this problem and has discussed it at some length. He refrains from even making predictions before certain points in the campaign, and he’s the first to tell you that they’re of little value even when he begins them. However, having his predictions become accurate only days or a very few weeks before the actual election robs them of much of the value of a prediction. It doesn’t make them worthless, mind you, just of small utility for most of us.
But the real problem isn’t even those issues, as bad as they might be. The real problem is that the map is not the territory. Mr. Silver has constructed a wonderful model of elections. But it’s just that: a model. It is not the reality.
The biggest area where this will eventually bite him is in the non-polling factors that he includes. For instance, months ago Mr. Silver was claiming that Donald Trump’s low favorability ratings put a cap on the support he’d manage to get at the polls. He made the claim in several places, but this piece from July 2015 is the one I managed to find with a few seconds of Googling. In it he claims that candidates with Trump’s net favorability ratings rarely grow beyond 20 or 30% of the vote. As of this writing, the RCP average has Trump at 29% in Iowa (about to break that ceiling), 32.2% in New Hampshire (broke the ceiling) and 34.8% nationally (shattered it). A poll released today shows that he’s nearing 50% in Florida.
What happened? Trump’s favorability changed – a lot. Gallup last week showed him at +27% among Republicans, up 23 points from where Silver had him in the July piece listed above. That’s yuge.
Again, as I noted above – the map is not the territory. Silver’s model, as good as it is, doesn’t account for this kind of thing happening. Now, it’s easy to say, “let’s update the model to allow for the off chance of someone increasing his favorability.” Fine. But the underlying problem is that favorability doesn’t directly predict anything. It’s a proxy.
Think of it this way: there’s no ironclad law of physics that says that a candidate with low favorability ratings can’t win. Mr. Silver has merely observed that so far, in the elections we’ve seen, it hasn’t happened. Low favorability seems to have a strong correlation with losing. But correlation does not equal causation. In this particular case, the variables are probably weakly linked. That is, how favorably the electorate views a candidate probably does have some impact on how they eventually vote. But it’s not a perfect match.
Mr. Silver will readily admit this, and that’s why the value is weighted relatively small compared to other data. But the problem is that all of Mr. Silver’s data is intrinsically a proxy, including the actual polling data. How people say they’ll vote is not the same thing as how they’ll actually vote. The correlation is high, but it’s not a causation.
Someday we’ll hit a point in the territory where the map doesn’t agree with it. When that happens, we’ll have no choice but to conclude that the map is wrong. As they say in sports, there’s a reason they play the games.
There’s good reason to suspect that this election cycle may be it. Mr. Silver has been giving Mr. Trump roughly 5% odds of winning the nomination, based mostly on his model. Personally, I think his model is wrong in this specific case. “This time is different” gets called a lot and is rarely true. But sometimes it is true – sometimes this time really is different. By all outside appearances, this election certainly seems to be one of those cases. I believe that Mr. Silver has too much invested in his model to be able to step back and honestly admit that it may not cover this case. Again, to be fair to Mr. Silver, I don’t believe this is a conscious choice. But I think it’s real.
But this may not be the time, either. It may well be that this time Mr. Silver is right again and I am wrong. I fully accept that, and I’ll admit it here if it’s the case. But even if this time isn’t the one, sooner or later Nate Silver will fail – and it will be yuge.
Harvard Law School professor and quixotic Democratic presidential candidate Lawrence Lessig thinks that future technology will solve our privacy issues.
The average cost per user of a data breach is now $240 … think of businesses looking at that cost and saying “What if I can find a way to not hold that data, but the value of that data?” When we do that, our concept of privacy will be different. Our concept so far is that we should give people control over copies of data. In the future, we will not worry about copies of data, but using data. The paradigm of required use will develop once we have really simple ways to hold data. If I were king, I would say it’s too early. Let’s muddle through the next few years. The costs are costly, but the current model of privacy will not make sense going forward.
If I ping a service, and it tells me someone is over 18, I don’t need to hold that fact. … The level of security I have to apply … [is not] the same [that] would be required if I was holding all of this data on my servers. This will radically change the burden of security that people will have.
Back in the nineties and early aughts when the Internet was a new and wondrous beast, all of us in the tech sector believed that technology would open up society. The internet would make everyone anonymous. File sharers and pirates couldn’t be caught – but also, free speech would reign supreme and everyone could speak their mind without fear of reprisal. Web sites would give a voice to the little people, and big corporations couldn’t compete in the data sphere.
We were wrong.
What we forgot – or never knew in the first place because we were young and naive – is that technology isn’t the decisive factor in society. Human beings are. And human beings, in aggregate, are ridiculously predictable.
“People can violate the law all they want to on the Internet, because nobody can track them!” we thought. Until the government decided to get serious about it and start tracking people.
“We can say what we want without fear of the government reading it – nobody has the resources to track it back to us!” we thought. Until the NSA proved that they do have the resources to do exactly that.
“Big corporations won’t be able to lock down their data! Data wants to be free!” we thought. Until DRM and the DMCA came about.
“We can say what we want in this big free speech paradise!” we thought. Until SJWs started doxxing people and getting them fired over social media posts.
Technology will not solve the privacy problem, because big corporations and big government don’t want you to have any privacy. They want all your data to be easily accessible. Standards like the one Lessig describes, where corporations discard your data after they’ve made use of it, won’t catch on for the simple reason that they don’t want to discard your data.
If they wanted to, they already could. There’s no reason they have to keep most of it around already. They do it because they want your data. Big government or big corporation, it makes no difference. They want to know everything about you. Big business wants to market to you in every possible way. Big government wants to milk every penny of tax revenue and regulatory compliance it can from you. Neither cares if this is to your benefit or not.
Privacy will not improve until the people who have the power to improve it want to improve it. This is not likely to happen anytime soon.
While I absolutely agree with FuturePundit that we are not yet in the space age, I do disagree with him about what it will take to get there.
We entered the jet age decades ago. To enter a space age in the same sense in which we entered the jet age would require much cheaper energy to power the rockets, better propulsion systems for moving between planets, and an assortment of technological advances to make a space colony viable on another planet or moon. So we aren’t in the space age yet.
No, I’m not sure that we need any of these technologies. At the $100 per pound price point that he describes earlier in the article, a lot of things already begin to change. The energy systems we have are actually incredibly cheap. The propulsion technology that we have is fine. We basically have most of the tech that we actually need to make colonies viable on other planets.
What we don’t have at all is space infrastructure. I hinted at this some with a few throwaway lines in “The Fourth Fleet,” but there’s a whole lot more detail that could be had. As Robert Heinlein famously said, once you’re in orbit you’re halfway to anywhere.
To put it more simply: the amount of energy it takes to get into low Earth orbit (LEO) is staggeringly huge. But once you’re there, it takes a whole lot less energy to go anywhere else. Note that the same general statement applies if you’re leaving any other planet or moon.
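Some rough, widely quoted delta-v figures make the point, along with the rocket-equation propellant fractions they imply (treat these as back-of-envelope numbers, not mission-design values):

```python
import math

# Back-of-envelope: the delta-v to reach LEO dwarfs what's needed
# for each leg afterward. Figures are commonly cited approximations.
dv = {
    "surface -> LEO (incl. losses)": 9.4,   # km/s
    "LEO -> trans-lunar injection": 3.1,
    "TLI -> low lunar orbit": 0.9,
    "LEO -> Mars transfer": 3.6,
}

# Tsiolkovsky rocket equation: propellant fraction for each leg,
# assuming a chemical engine with Isp around 350 s.
ve = 350 * 9.81 / 1000  # effective exhaust velocity, km/s
for leg, v in dv.items():
    frac = 1 - math.exp(-v / ve)
    print(f"{leg}: {v} km/s -> {frac:.0%} of vehicle mass is propellant")
```

Getting off Earth demands that over 90% of the vehicle be propellant; once in LEO, each subsequent leg needs far less. That asymmetry is exactly why orbit is “halfway to anywhere.”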
At $100 a pound, a lot of things become economically feasible that haven’t been in the past. And some of the most important things that become feasible are infrastructure. Right now, there is absolutely no infrastructure for doing anything outside of LEO – and there’s not really much infrastructure in place for LEO, either.
Start with LEO, where there actually is some infrastructure. NORAD is there to track everything around you and alert you to dangers. In a sense, there’s a kind of rudimentary “air traffic control” there. But it’s very rudimentary, and that’s not really its mission. Existing GPS units probably don’t work well in orbit, but one could build GPS receivers that provide adequate service in LEO. The GPS satellites fly in semi-synchronous medium Earth orbits far above LEO, so you’d still be able to work the math out right. And there’s at least one orbital space station up there right now, even though its capacity is trivial.
Outside of LEO there’s none of that. No orbital stations, no navigational systems, no traffic control, no debris tracking. Never mind all the other infrastructure you’d want for true solar system exploration: fuel depots, way stations, communications relays, and the like.
None of this has happened yet, but all of it could happen with currently existing tech. We don’t need any major science breakthroughs. The only thing we’re really missing is the key to all of it: reliable, regular, and affordable transit to Low Earth Orbit. $100 a pound is still expensive. But it’s a price point at which some or all of the things listed above will begin to be built, because there will be a market for them. As more of those things come online, more entrepreneurs will step up to begin creating the others – and charging for them.

At $100 a pound, a person could take a trip to LEO for about the price of an average car. That’s a price that’s still too high for people to take regular trips. But an awful lot of people – not rich people, but moderately affluent – would pay to make that trip once or twice in their lifetimes. And remember: the key to cheaper LEO transit is not the propulsion technology. Fuel is a very small portion of the cost of a rocket launch. It’s primarily human factors. As launches become more frequent, they will also become cheaper. This was the original promise of the Space Shuttle – a promise that was never delivered upon. But that wasn’t a flaw in our engineering capabilities or known science. It was the fact that government employees and contractors, given the chance, always opted to make everything more expensive rather than cheaper. As for-profit businesses become sustainable, they’ll be looking for every way possible to cut costs.

I don’t expect to see all of the above items in my lifetime. But I do expect to see at least some of them forming. As FuturePundit notes, SpaceX’s Falcon 9 should get the price point down to $250 per pound. I suspect that within 10 years that price will be cut in half again – if not by SpaceX then by somebody else. And again in another 10 years.

Key point: the technology industry didn’t witness massive price/performance changes because tech was improving so fast. OK, that was part of it. But the bigger reason was that it started out as an incredibly immature industry. Look at what Ford did to the price of automobiles. Space is likewise an immature industry. When it begins to grow, look for it to explode.
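The arithmetic behind that “price of an average car” claim is straightforward; the passenger weight and gear allowance below are my own rough guesses, not figures from any launch provider:

```python
# Rough cost of a personal LEO trip at a $100/lb launch price.
# Passenger weight and gear/life-support allowance are guesses.
PRICE_PER_LB = 100   # dollars per pound, the target price point
passenger_lb = 180   # a roughly average adult
gear_lb = 120        # suit, consumables, seat share (rough allowance)

ticket = (passenger_lb + gear_lb) * PRICE_PER_LB
print(f"ticket price: ${ticket:,}")
```

Around $30,000 under these assumptions: new-car money, not private-jet money.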
If you’re like me then you use online reviews to help you pick an awful lot of things. Books, movies, electronics, services, restaurants… you name it. Unfortunately, there are some very serious issues with online reviews in their current form. A couple of examples:
Let’s start with the GoodReads page for my latest published work, Ghost of the Frost Giant King. Now, Silver Empire is still a pretty small publishing house. We’ve only been in operation since January – less than three months. So our sales are pretty small at this point, and the number of advance preview copies we’ve sent out is pretty small as well. And most of our sales have been direct or nearly so. What this means is that we have customer data for about 90% of the copies of this work that are “in the wild” currently.
So, back to the GoodReads page. As of this writing, there are two ratings – neither with a text review. One of those ratings is from a contest winner who won the book in a GoodReads.com book giveaway. The only problem is, that rating was entered before the book shipped. The person had not yet received it before leaving a rating. The second review was left by someone who is not in my customer data. Now, it’s possible – possible – that it’s a legit review. But no reviews have been left at any of the locations where that purchase would have been made if it were legit, so it’s a tad odd that there would be a review on GoodReads and not the purchase site. Possible. Just odd.
Now, I’m not complaining. These two reviews averaged together give us a 4 star rating. And it’s better to have a 4 star rating from two reviewers than no rating at all. So hey, it’s a win for us. But this is completely unhelpful for our potential customers, which is uncool. It also completely fails to give us any actual feedback – constructive or otherwise – about the product. That’s kind of frustrating, because we’d really like to know if what we put together is any good or not.
Second example: Facebook reviews for my dojo. Out of all of the reviews on our page, we currently have two that are not five star reviews. One is a two star review from a man who explicitly acknowledged that he didn’t mean to leave it… and yet he also hasn’t removed it or changed it. One is a one star review from someone whom I have tracked down and shown to be a student at another dojo. The reviewer has never set foot in my dojo. I’ve even talked to his sensei about it. And yet the review is still there.
I’m still not really complaining. This one is a bit more annoying than the first, as these are definitely bringing down my dojo’s rating on Facebook. But… they bring it down to a 4.8 star rating. I can’t complain about that. And there are also a number of five star ratings from friends who haven’t been students but who are trying to help me out (thanks, by the way – I really appreciate that from all of you!). Which is great – but Facebook really shouldn’t allow it. Hell, Facebook allowed me to leave a rating, even though it’s my own page. Talk about a fail.
Not all online reviews are created equal. Amazon, for instance, does a lot to help things out. If a customer leaves a review on an item that they purchased through Amazon, it gets flagged with a “verified purchase” note. That lets you know that that person actually got that item. Amazon will not allow me to leave ratings on any items that I’ve published, which is good. That keeps at least some honesty in the system. But even there the system isn’t perfect.
Do I still use online reviews? Definitely. But be aware that there are issues, and try to actually read some of the reviews if you get a chance.
Editor’s note: this post was originally published on another blog in 2011. In the wake of the “net neutrality” decision, it seems relevant once more. It has been reposted here with minor modifications.
Once upon a time, in the Good Ol’ Days we refer to as the 1990s, this newfangled thing called The Internet made a jump from an obscure tool that only academics and computer geeks even knew about to a mainstream tool that everybody was using. The world was full of promise. The Internet would set us free! Information wants to be free! You can’t control the ‘Net! Finally we have an end to all censorship! Power to the little man!
I got caught up in it pretty easily. After all, I was young. I had Internet access in high school, a few years before it was really known to the public. I was just the right age to get caught up in all the libertarian utopian ideas of how great the Internet would be.
I’ve spent my whole adult life working with computers, and in recent years I’ve come to an entirely different conclusion. In the long run, the Internet will lessen our freedoms, not increase them. Yes, the Internet of yesteryear was a wild, wild west where anything went. The Internet of today is already being tamed, and the Internet of tomorrow is going to trend toward fascist land. Many of the signs are already here, and more are coming.
The world is changing, my friends. And not into the digital utopia we all thought it would be. The only reason it hasn’t happened already is that the Internet originated in the United States, a country that still has some serious constitutional protections for free speech, free assembly, free press, and freedom from search and seizure. Other countries have been trying for a decade to remove Internet control from the US government’s hands. And how long will the US government and its people retain the will to maintain these freedoms? If history is any judge, the answer is certainly “not forever.” Indeed, we’ve already witnessed the willingness of our fellow citizens to give up all kinds of freedoms in the name of “security,” “health care,” and “safety” – never mind the almighty “profit.”
My vision of the future is not inevitable. It can be stopped. But only if the people have the will to stop it. I’m no longer convinced they do.