Tuesday, December 22, 2015

I'm sorry you're mad

This is mostly a disappointing response that tries to deflect with "hey, we do good stuff too!", which is a PR move, not an answer to the events that prompted the accusations against Product Hunt.

I don't care how nice he is in person, just as I don't care how mean Steve Jobs was. I judge people by the products they put out. If his product perpetuates the power structures of Silicon Valley and gives insiders special access then he's actually not a very nice guy.

The guy is a politician. So much smoke and mirrors, so little straight talk.

Wow what a crock of shit. They got called out for back-room deals and ignoring products; and it's only after a revolt ready to make their entire operation obsolete that they publish a half-hearted PR response that admits no responsibility at all.

Am I the only one that read (most of) this and didn't really see any direct responses to the criticisms leveled at it last week?

from http://ift.tt/1NIwtBm , about http://ift.tt/1Zmv6Q2



from lizard's ghost http://ift.tt/1S6XXpx

nvme, u2

a discussion about
http://ift.tt/1k6sBRS

velox_io 1 day ago

The most exciting recent development in SSDs (until 3DXpoint is released), is bypassing the SATA interface, connecting drives straight into the PCIe bus (no more expensive raid controllers). Just a shame hardly any server motherboards come with M.2 slots right now.
The 4x speed increase and lower CPU overhead mean it is now possible to move RAM-only applications (for instance in-memory databases) to SSDs, keeping only the indexes in memory. Yeah, we've been going that way for a while; it just seems we've come a long way from the expensive Sun E6500s I was working with just over a decade ago.
reply
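A minimal sketch of the "data on the SSD, only the indexes in RAM" pattern described above; this isn't from the thread, and the file path, record layout, and helper names are invented for illustration (Python):

import mmap
import struct

DATA_PATH = "/mnt/nvme/records.bin"   # hypothetical data file on an NVMe SSD
RECORD_FMT = "16s48s"                 # 16-byte key + 48-byte payload per record
RECORD_SIZE = struct.calcsize(RECORD_FMT)

def build_index(path):
    """Scan the file once, keeping only key -> file offset in RAM."""
    index = {}
    with open(path, "rb") as f:
        offset = 0
        while True:
            chunk = f.read(RECORD_SIZE)
            if len(chunk) < RECORD_SIZE:
                break
            key, _ = struct.unpack(RECORD_FMT, chunk)
            index[key] = offset
            offset += RECORD_SIZE
    return index

def lookup(mm, index, key):
    """Fetch one record from the SSD-backed mmap using the in-RAM index."""
    offset = index.get(key)
    if offset is None:
        return None
    _, payload = struct.unpack(RECORD_FMT, mm[offset:offset + RECORD_SIZE])
    return payload

# index = build_index(DATA_PATH)
# with open(DATA_PATH, "rb") as f:
#     mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
#     print(lookup(mm, index, b"example-key".ljust(16, b"\0")))

The index stays small and memory-resident while the bulk of the data lives on the drive, which is the trade-off the comment is describing.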

wtallis 1 day ago

M.2 slots don't make much sense for servers, at least if you're trying to take advantage of the performance benefits possible with a PCIe interface. Current M.2 drives aren't even close to saturating the PCIe 3.0 x4 link but they're severely thermally limited for sustained use and they're restricted in capacity due to lack of PCB area. Server SSDs should stick with the traditional half-height add-in card form factor with nice big heatsinks.
reply

lsc 1 day ago

most of the NVMe backplanes I've seen give full 'enterprise' 2.5" drive clearance to the thing, so if the drives are only as thick as consumer SSDs (as most current SATA 'enterprise' SSDs are), there's plenty of room for a heatsink without expanding the slot. The Supermicro chassis (and I've only explored the NVMe backplanes from Supermicro) usually put a lot of effort into drawing air through the drives, so assuming you put in blanks and stuff, the airflow should be there, if the SSDs are set up to take advantage of it.
reply

wtallis 1 day ago

You need to be more precise with your terminology. There is no such thing as an NVMe backplane. NVMe is a software protocol. The backplanes for 2.5" SSDs to which you refer would be PCIe backplanes using the SFF-8639 aka U.2 connector. None of the above is synonymous with the M.2 connector/form-factor standard, which is what I was talking about.
reply

lsc 1 day ago

edit: okay, I re-read what you said and yes, these won't support M.2 drives, if I understand what's going on here, and it's possible I still don't. (I have yet to buy any non-sata SSD, though I will soon be making experimental purchases.)
I was talking about these:
http://ift.tt/1OrRi67
Note, though, it looks like if you are willing to pay for a U.2 connected drive, you can get 'em with the giant heatsinks you want:
http://ift.tt/1MsAesA...
further edit:
http://ift.tt/1RC3epK...
available; not super cheap, but I'm not sure you'd want the super cheap consumer grade stuff in a server, anyhow.
further edit:
But I object to the idea of putting SSDs on PCIe cards for anything but disposable "cloud" type servers (unless they are massively more reliable than any that I've seen, which I don't think is the case here), simply because with a U.2-connected drive in a U.2 backplane I can swap a bad drive like I would swap a bad SATA drive: an alert goes off, I head off to the co-lo as soon as convenient, and I swap the drive without disturbing users. With a low-profile PCIe card, I've pretty much got to shut down the server, de-rack it, then make the swap, which causes downtime that must be scheduled, even if I have enough redundancy that there isn't any data loss.
reply

wtallis 1 day ago

Take a look at how much stricter the temperature and airflow requirements are for Intel's 2.5" U.2 drives compared to their add-in card counterparts. (And note that the U.2 drives are twice as thick as most SATA drives.)
M.2 has almost no place in the server market. U.2 does and will for the foreseeable future, but I'm not sure that it can serve the high-performance segment for long. It's not clear whether it will reach the limits on capacity, heat, or link speed first, but all of those limits are clearly much closer than for add-in cards.
reply

lsc 1 day ago

"M.2 has almost no place in the server market. U.2 does and will for the foreseeable future, but I'm not sure that it can serve the high-performance segment for long. It's not clear whether it will reach the limits on capacity, heat, or link speed first, but all of those limits are clearly much closer than for add-in cards."

No argument on m.2 - it's a consumer grade technology. No doubt, someone in the "cloud" space will try it... I mean, if you rely on "ephemeral disk" - well, this is just "ephemeral disk" that goes funny sooner than spinning disk.
But the problem remains: if your servers aren't disposable, if your servers can't just go away at a moment's notice, the form factor of add-in cards is going to be a problem for you, unless the add-in cards are massively more reliable than I think they are. Taking down a whole server to replace a failed disk is a no-go for most non-cloud applications...
reply

kijiki 1 day ago

You're probably tired of hearing this from me, but if you distribute the storage, you can evacuate all the VMs off a host, down it, do whatever, bring it back up, and then unevacuate.
reply

lsc 1 day ago

"You're probably tired of hearing this from me, but if you distribute the storage, you can evacuate all the VMs off a host, down it, do whatever, bring it back up, and then unevacuate."

Yes. The same conversation is happening right now in the living room, because security has forced three reboots in the last year, after several years where, simply by being Xen and PV and using pvgrub (rather than loading a user kernel from the dom0), we weren't vulnerable to the known privilege escalations. This is a lot more labor (and customer pain) when you can't move people.
No progress on that front yet, though.
reply

wtallis 1 day ago

And if uptime is that important, you can just buy a server that supports PCIe hotswap.
reply

lsc 1 day ago

can you point me at a chassis designed for that?
reply

wmf 1 day ago

OTOH if you can fit 24 U.2s but only 6 AICs in a server, maybe RAIDed U.2 will still win.
reply

wtallis 1 day ago

Only if you postulate that your drives won't be able to saturate their respective number of PCIe lanes. The total number of lanes in the system is fixed by the choice of CPU, and having drives with a total lane count in excess of that only helps when your drives can't fully use the lanes they're given or if you're trying to maximize capacity rather than performance.
reply

scurvy 1 day ago

2.5" NVMe is fine and more scalable than add-in cards (more slots) with increased serviceability. The equivalent add-in card is usually much more expensive than the 2.5" version.
reply

djsumdog 1 day ago

I thought this was the point of SATA Express, but the last I read, SATA Express is at best on par with M.2 and sometimes falls behind it.
reply

tw04 1 day ago

How exactly does connecting to the PCIe bus obviate the need for RAID? RAID isn't about connecting drives, it's about not losing data. If you don't need RAID for your application today, you can purchase a non-RAID SAS adapter for a couple hundred bucks, or just use the onboard ports that are sure to be there.
reply

tehwalrus 1 day ago

"RAID isn't about connecting drives, it's about not losing data."

RAID-0 is used as a way to get faster performance from spinning disk drives, as you can return parts of each read request from different (striped) drives.
You also get better write performance, as your writes are split across the drives.
reply

lsc 1 day ago

Uh, this shouldn't be voted down. Now, I don't use RAID for performance, I use RAID for reliability, like the parent said, and I've never actually been in a position where RAID0 would make sense, but people do use RAID0 to increase performance. It happens, even if it's not nearly as common as using RAID to prevent data loss.
reply

mirimir 1 day ago

I like RAID10. It's almost as reliable as RAID1, and you get about half the capacity and performance boost of RAID0. With four drives, RAID10 gives you the same capacity as RAID6. It's only ~30% slower, and it rebuilds much faster after drive failure and replacement. With more drives, you get 100% of added drive capacity for RAID6 vs 50% for RAID10, but rebuild time gets crazy huge.
reply
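Rough arithmetic behind that capacity comparison; the drive counts and the 4TB size are just example numbers (Python):

def usable_tb(drives, size_tb, level):
    if level == "raid10":
        return drives // 2 * size_tb      # every drive is mirrored
    if level == "raid6":
        return (drives - 2) * size_tb     # two drives' worth of parity
    raise ValueError(level)

for n in (4, 8, 12):
    print(f"{n} x 4TB  RAID10: {usable_tb(n, 4, 'raid10')} TB  RAID6: {usable_tb(n, 4, 'raid6')} TB")
# 4 drives: both give 8 TB usable; beyond that, RAID6 keeps 100% of each
# added drive while RAID10 keeps 50%, which is the trade-off described above.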

toomuchtodo 1 day ago

Back in the day, folks would RAID0 AWS EBS volumes to get the level of performance they needed (this was long before SSD EBS volumes).
reply

bigiain 1 day ago

Now you're making me feel old - when "back in the day" and "AWS" get used in the same sentence... :-)
reply

MichaelGG 1 day ago

This is still (AFAIK) the recommended way to get higher IOPS out of Azure SSDs.
reply

burntwater 1 day ago

High-end video servers used in the entertainment industry typically use RAID 0. My last project, for example, used 10 250GB SSDs in RAID 0.
reply

ersii 1 day ago

What kind of video servers are you talking about? For ingesting from a feed/camera in? Streaming out video? Curious.
reply

burntwater 21 hours ago

While these servers can ingest video feeds, they wouldn't (typically) be saving the incoming feeds. These servers play out video over many projectors blended into a single seamless image. If you've watched the past few Olympics opening ceremonies, you most certainly saw the projections.
My recent project, which was relatively small, played back a single video over 4 blended projectors. Each video frame was a 50MB uncompressed TGA file, 30 times a second. On a more complex show, you could be trying to play back multiple video streams simultaneously.
D3 - http://ift.tt/KGuwZd and Pandora - http://ift.tt/1yUdsHN
are two of the big players in the industry.
reply
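Back-of-the-envelope on those playback rates; the stream counts are illustrative (Python):

frame_mb = 50          # uncompressed TGA frame, as described above
fps = 30
per_stream_gb_s = frame_mb * fps / 1000   # ~1.5 GB/s for a single stream

for streams in (1, 2, 4):
    print(f"{streams} stream(s): ~{per_stream_gb_s * streams:.1f} GB/s sustained read")
# A single SATA SSD tops out around 0.5 GB/s, which is why these rigs end up
# striping many drives in RAID 0.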

pjc50 15 hours ago

"Each video frame was a 50MB uncompressed TGA file, 30 times a second."
Ouch! Not even a PNG?
reply

tracker1 8 hours ago

I'd guess the time to do compression from TGA to PNG may be more than 1/30th of a second. Or at least reliably/consistently so.
reply

bigiain 1 day ago

A friend of mine does a lot of programming and installations using Dataton Watchout - where they use 4 or more 4K projectors stitched together into one big image. He's regularly got war stories about problems with the latest batch of SSDs not playing well in RAID0 causing stutters on simultaneous playback of multiple 4K video streams.
reply

im2w1l 17 hours ago

Is there an advantage compared to putting each stream on a different drive?
reply

otakucode 17 hours ago

Sure. My first purchase of SSDs was 3 80GB Intel SSDs, which I put into RAID-0 and used as the primary system drive on my gaming machine. RAID-0 provides very nearly a 100% performance boost per drive added. Of course, it also provides the same boost to the likelihood of total data loss... but that was a risk I was OK with taking (and which never bit me!).
reply

tracker1 8 hours ago

Mine was the same drive: my first SSD was the first 80GB Intel one, and after just over a year of use it started reporting itself as an 8MB drive. You never know for sure what will remain good, or stay good consistently.
reply

dajonker 1 day ago

RAID 0 does not have any level of redundancy, so you might as well remove the 'R' in RAID and replace it by 'S' for striping or something. However, people would probably get confused if you start calling it SAID.
reply

bigiain 1 day ago

Complaining about industry-standard terminology like this is mostly a waste of time.
The sort of people who need to know that but don't aren't going to learn it from a post on HN, and the sort of people who'll read it on HN mostly don't need to be told.
reply

mirimir 22 hours ago

Sad but true. I gather that RAID0 has been the default for notebooks with two drives. And for the first 1TB MyBooks.
reply

tracker1 8 hours ago

0 == no redundancy...
reply

fulafel 15 hours ago

"RAID [...] it's about not losing data"

It's a pretty ineffective way to guard against data loss; you need backups.
RAID [1-6] is an availability solution, saving the downtime of restoring from backup in the case of the most predictable disk failures. It doesn't help with all the other cases of data loss.
reply

derefr 1 day ago

If we can connect SSDs directly to the PCIe bus... why are there no cute little Thunderbolt flash drives that run at full NVMe speed?
reply

wtallis 1 day ago

Windows has had astoundingly bad Thunderbolt hot-plug support given that ExpressCard and CardBus existed. NVMe support for non-Apple SSDs under OS X can be problematic. Thunderbolt PHYs are expensive. USB 3.0 is fast enough for now, and USB 3.1 exists.
reply

something2324 20 hours ago

This is a bit of a question from my lack of understanding of disk IO:
But would this in practice play well with the CPU prefetcher? If you're crunching sequential data can you expect the data in the L1 cache after the initial stall?
reply

dragontamer 19 hours ago

SSDs are still grossly slower than RAM. The fastest SSD I know of is the Intel 750, which is like ~2.0 GigaBYTES/second (or for what is more typical for storage benchmarks: about 16Gbps or so over PCIe x4).
Main DDR3 RAM is something like 32 GigaBYTES per second, and L1 cache is faster still.
What I think the poster was talking about, is moving from disk-based databases to SSD-based databases. SSDs are much faster than hard drives.
L1 Cache, L2 Cache, L3 Cache, and main memory are all orders of magnitude faster than even the fastest SSDs today. Thinking about the "CPU prefetcher" when we're talking about SSDs or Hard Drives is almost irrelevant due to the magnitudes of speed difference.
reply
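The rough orders of magnitude behind that comment, using the figures quoted above; the L1 number is only a ballpark assumption added for scale (Python):

bandwidth_gb_s = {
    "Intel 750 NVMe SSD": 2.0,     # per the comment above
    "DDR3 main memory": 32.0,      # per the comment above
    "L1 cache (ballpark)": 1000.0, # assumption, order-of-magnitude only
}

ssd = bandwidth_gb_s["Intel 750 NVMe SSD"]
for name, bw in bandwidth_gb_s.items():
    print(f"{name}: ~{bw:g} GB/s ({bw / ssd:.0f}x the SSD)")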

japaw 1 day ago

"2015 was the beginning of the end for SSDs in the data center." is quit a bold statement especial when not discussing any alternative. I do not see us going back to magnetic disk, and most new storage technology are some kind of ssd... reply

scurvy 1 day ago

My thoughts exactly. The article is quite inflammatory and tosses out some bold statements without really deep diving into them. My favorite:
"Finally, the unpredictable latency of SSD-based arrays - often called all-flash arrays - is gaining mind share. The problem: if there are too many writes for an SSD to keep up with, reads have to wait for writes to complete - which can be many milliseconds. Reads taking as long as writes? That's not the performance customers think they are buying." This is completely false in a properly designed server system. Use the deadline scheduler with SSD's so that reads aren't starved from bulk I/O operations. This is fairly common knowledge. Also, if you're throwing too much I/O load at any storage system, things are going to slow down. This should not be a surprise. SSD's are sorta magical (Artur), but they're not pure magic. They can't fix everything.
While Facebook started out with Fusion-io, they very quickly transitioned to their own home-designed and home-grown flash storage. I'd be wary of using any of their facts or findings and applying them to all flash storage. In short, these things could just be Facebook problems because they decided to go build their own.
He also talks about the "unpredictability of all flash arrays" like the fault is 100% due to the flash. In my experience, it's usually the RAID/proprietary controller doing something unpredictable and wonky. Sometimes the drive and controller do something dumb in concert, but it's usually the controller.
EDIT: It was 2-3 years ago that flash controller designers started to focus on uniform latency and performance rather than concentrating on peak performance. You can see this in the maturation of I/O latency graphs from the various Anandtech reviews.
reply
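For what it's worth, the deadline-scheduler advice boils down to flipping a per-device sysfs knob on Linux. A hedged sketch, assuming a hypothetical device name and root privileges (newer kernels expose "mq-deadline" instead):

from pathlib import Path

def set_io_scheduler(device, scheduler="deadline"):
    node = Path(f"/sys/block/{device}/queue/scheduler")
    available = node.read_text()          # e.g. "noop [cfq] deadline"
    if scheduler not in available:
        raise ValueError(f"{scheduler!r} not offered for {device}: {available.strip()}")
    node.write_text(scheduler)            # the kernel switches the elevator immediately

# set_io_scheduler("sda")   # same effect as: echo deadline > /sys/block/sda/queue/scheduler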

projct 6 hours ago

deadline seems less useful now that a bunch of edge cases were fixed in cfq (the graphs used to look way worse than this): http://ift.tt/1MsAesC...
reply

fleitz 1 day ago

There is unpredictability in SSDs; however, it's more like whether an IOP will take 1 ns or 1 ms, instead of 10 ms or 100 ms with an HD.
The variability is an order of magnitude greater, but the worst case is several orders of magnitude better. Quite simply, no one cares whether you might get 10,000 IOPS or 200,000 IOPS from an SSD when all you're going to get from a 15K drive is 500 IOPS.
reply

wtallis 1 day ago

Best-case for a SSD is more like 10µs, and the worst-case is still tens of milliseconds. Average case and 90th percentile are the kind of measures responsible for the most important improvements.
And the difference between a fast SSD and a slow SSD is pretty big: for the same workload a fast PCIe SSD can show an average latency of 208µs with 846µs standard deviation, while a low-end SATA drive shows average latency of 1782µs and standard deviation of 4155µs (both are recent consumer drives).
reply

seabrookmx 1 day ago

Yeah I thought that too. I think the author is specifically referring to 2.5" form factor SATA SSD's though.
reply

ak217 1 day ago

This article has a lot of good information, but its weirdly sensationalistic tone detracts from it. I appreciate learning more about 3D XPoint and Nantero, but SSDs are not a "transitional technology" in any real sense of the term, and they won't be displaced by anything in 2016 - if nothing else because it takes multiple years to go from volume manufacturing capability to a shipping product pipeline on a new memory technology, and more years to convince the enterprise market to start deploying it. The most solid point the article makes is that the workload-specific performance of SSD-based storage is still being explored, and we need better tools for it.
reply

nostrademons 1 day ago

I got the sense that it was a PR hit for Nantero, bought and paid for. Notice the arc of the article: it says "[Popular hot technology] is dead. [Big vendors] have recently introduced [exciting new product], but [here are problems and doubts about those]. There's also [small startup you've never heard of] which has [alternative product] featuring [this list of features straight from their landing page]."
Usually these types of articles are designed to lead people directly to the product that's paying for the article. Sensationalistic is good for this; it gets people to click, it gets people to disagree, and then the controversy spreads the article across the net. Seems like it's working, in this case.
reply

Laforet 1 day ago

Robin Harris has been advocating the abolition of the block abstraction layer for a couple of years now, and this piece is consistent with his usual rhetoric.
reply

Dylan16807 1 day ago

Eh, it listed it as "promises" and talked the same way about Adesto. It's reasonable to say "this is the basic claim of the product; we'll see if they get there" without it being PR.
reply

_ak 1 day ago

At one of my previous employers, they built a massive "cloud" storage system. The underlying file system was ZFS, which was configured to put its write logs onto an SSD. With the write load on the system, the servers burnt through an SSD in about a year, i.e. most SSDs started failing after about a year. The hardware vendors knew how far you could push SSDs, and thus refused to give any warranty. All the major SSD vendors told us SSDs are strictly considered wearing parts. That was back in 2012 or 2013.
reply

rsync 1 day ago

Just a note ... we use SSDs as write cache in ZFS at rsync.net and although you should be able to withstand a SLOG failure, we don't want to deal with it so we mirror them.
My personal insight, and I think this should be a best practice, is that if you mirror something like an SLOG, you should source two entirely different SSD models - either the newest intel and the newest samsung, or perhaps previous generation intel and current generation intel.
The point is, if you put the two SSDs into operation at the exact same time, they will experience the exact same lifecycle and (in my opinion) could potentially fail exactly simultaneously. There's no "jitter" - they're not failing for physical reasons, they are failing for logical reasons ... and the logic could be identical for both members of the mirror...
reply
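A sketch of what that mixed-vendor SLOG mirror looks like in practice; the pool name and device paths are made up, and the real work is just the ordinary zpool CLI (Python):

import subprocess

POOL = "tank"                                   # hypothetical pool name
SLOG_DEVICES = [
    "/dev/disk/by-id/ata-INTEL_SSD_EXAMPLE",    # one vendor/generation...
    "/dev/disk/by-id/ata-Samsung_SSD_EXAMPLE",  # ...mirrored with a different one
]

def add_mirrored_slog(pool, devices):
    """Attach two different-model SSDs as a mirrored ZFS intent log."""
    subprocess.run(["zpool", "add", pool, "log", "mirror", *devices], check=True)

# add_mirrored_slog(POOL, SLOG_DEVICES)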

dboreham 1 day ago

This is good advice, and fwiw the problem it addresses can happen in spinning drives too. We had a particular kind of WD drive that had a firmware bug where the drive would reset after 2^N seconds of power-up time (where that duration was some number of months).
reply

leonroy 17 hours ago

We ship voice recording and conferencing appliances based on Supermicro hardware, a RAID controller and 4x disks on RAID 10.
We tried to mitigate the failure interval on the drives by mixing brands. Our Supermicro distributor tried to really dissuade us from using mixed batches and brands of SAS drives in our servers. Really had to dig in our heels to get them to listen.
Even when you buy a NAS fully loaded like a Synology it comes with the same brand, model and batch of drives. In one case we saw 6 drive failures in two months for the same Synology NAS.
Wonder whether NetApp or EMC try mixing brands or at least batches on the appliances they ship?
reply

watersb 1 day ago

FWIW I bricked a very cheap consumer SSD by using it as write log for my ZFS array. This was my experiment machine, not a production server.
Fortunately I had followed accepted practice of mirroring the write cache. (I'd also used dedicated, separate host controllers for each of these write-cache SSDs, but for this cheap experiment that probably didn't help.)
So yes this really happens.
reply

andrepd 1 day ago

In a sense, so is all storage hardware. It's just a matter of how long it takes before it fails. This goes for SSDs, HDDs, Flash cards, etc.
reply

justin66 1 day ago

If your SSDs were wearing out after a year, and were warrantied for a year, I'm guessing you weren't using "enterprise" SSDs?
reply

skrause 1 day ago

Even enterprise SSDs like Samsung's have a guarantee like "10 years or up to x TB of written data". So if you write a lot you can lose the guarantee after a year even with enterprise SSDs.
reply

_ak 7 hours ago

They were enterprise models (i.e. not cheap), but they had no warranty in the first place. Every single hardware supplier simply refused to give any. I guess because of the expected wear and tear.
reply

justin66 5 hours ago

That's interesting. I don't even know how to buy these things without a warranty. Were they direct from the manufacturer?
reply

bluedino 1 day ago

Logging to flash storage is just asking for issues. We bought a recent model of LARGEFIREWALLVENDOR's SoHo product, and enabling the logging features will destroy the small (16GB?) flash storage module in a few weeks (!).
The day before we requested the 3rd RMA, the vendor put a notice on their support site that using certain features would cause drastically shortened life of the storage drive, and patched the OS in an attempt to reduce the amount of writes.
reply

TD-Linux 1 day ago

Logging to poorly specified flash storage is the real problem. They were likely using cheap eMMC flash, which is notorious for extremely poor wear leveling.
Unfortunately the jump to good flash is quite expensive, and it's often hard to find in the eMMC form factor, which is dominated by low-cost parts.
reply

Hoff 1 day ago

FWIW, the top-end HPE SSD models are rated for up to 25 writes of the entire SSD drive, per day, for five years.
The entry-level SSDs are rated for ~two whole-drive writes per week.
Wear gauge, et al.
http://ift.tt/1RC3bKL...
reply
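A rough sense of the scale those ratings imply, assuming a hypothetical 1.6TB drive (the capacity is an assumption, not from the comment):

def lifetime_writes_tb(capacity_tb, drive_writes_per_day, years):
    return capacity_tb * drive_writes_per_day * 365 * years

top_end = lifetime_writes_tb(capacity_tb=1.6, drive_writes_per_day=25, years=5)
entry = lifetime_writes_tb(capacity_tb=1.6, drive_writes_per_day=2 / 7, years=5)
print(f"top-end (25 DWPD): ~{top_end:,.0f} TB written over 5 years")   # ~73,000 TB
print(f"entry (2 per week): ~{entry:,.0f} TB written over 5 years")    # ~800 TB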

tim333 19 hours ago

Also maybe of interest: the TechReport SSD Endurance Experiment. Their assorted drives lasted about 2,000-10,000 whole-disk writes.
http://ift.tt/1MsAesE...
reply

ComputerGuru 1 day ago

Kind of stupid to end with "Since CPUs aren't getting faster, making storage faster is a big help."
CPUs and storage exist for completely disjoint purposes, and the fastest CPU in the world can't make up for a slow disk (or vice versa). Anyway, CPUs are still "faster" than SSDs, whatever that means, if you wish to somehow compare apples to oranges. That's why, even with NVMe, enabling block compression in your FS can speed up your I/O workflow if you are dealing with compressible data.
reply

nly 1 day ago

Storage and CPU cycles aren't completely disjoint. While this is true for plain old data archival, a lot of storage in reality is just used as cache. You could argue even your customer data is a cache, because you can always go back to the customer for most of it. Most data can be recomposed from external sources given enough computation.
reply

TeMPOraL 1 day ago

Ever tried to play a modern computer game? You never have enough RAM for stuff; a lot of content gets dumped onto the hard drive sooner or later (virtual memory), or is streamed from the drive in the first place. Having faster access helps tremendously.
From my observation, most personal and business-use machines are actually IO-bound - it often takes just the web browser itself - with webdevs pumping out sites filled with superfluous bullshit - to fill up your RAM completely, and then you have swapping back and forth.
reply

Dylan16807 1 day ago

I don't think I've touched a game on PC where you can't fit all the levels into RAM, let alone just the current level. Sometimes you can't fit music and videos into ram, but you can stream that off the slowest clunker in the world. A game that preloads assets will do just fine on a bad drive with a moderate amount of RAM. Loading time might be higher, but the ingame experience shouldn't be affected.
As far as swapping, you do want a fast swap device, but it has nothing to do with "Since CPUs aren't getting faster". You're right that it's IO-bound. It's so IO-bound that you could underclock your CPU to 1/4 speed and not even notice.
So in short: Games in theory could use a faster drive to better saturate the CPU, but they're not bigger than RAM so they don't. Swapping is so utterly IO-bound that no matter what you do you cannot help it saturate the CPU.
The statement "Since CPUs aren't getting faster, making storage faster is a big help." is not true. A is not a contributing factor to B.
reply

simoncion 1 day ago

"I don't think I've touched a game on PC where you can't fit all the levels into RAM, let alone just the current level."

I know, right?! I would rather like it if more game devs could get the time required to detect that they're running on a machine with 16+GB of RAM and - in the background, with low CPU and IO priority - decode and load all of the game into RAM, rather than just the selected level + incidental data. :)
reply

arielweisberg 1 day ago

For years, total available RAM could easily exceed the install size of an entire game, thanks to consoles holding everything back.
And by exceeded I mean the games were 32-bit binaries, so the RAM left over was enough to cache the entire game data set even in light of the RAM used by the game.
Recently, install sizes seem to have grown quite a bit.
reply

planckscnst 1 day ago

Your process spends some amount of time waiting on various resources; some of those may be CPU-bound, some may be disk-bound. Speeding either of those up will make your process faster. In fact, you can even trade one for the other if you use compression.
reply
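A toy illustration of that CPU-for-I/O trade using Python's zlib; the data here is synthetic and highly repetitive, so real-world ratios will be far lower:

import zlib

payload = b"timestamp=1450742400 status=200 path=/index.html\n" * 20000
compressed = zlib.compress(payload, 6)

print(f"raw:        {len(payload) / 1e6:.1f} MB")
print(f"compressed: {len(compressed) / 1e6:.3f} MB "
      f"({len(payload) / len(compressed):.0f}x less to read or write)")
# Whether spending those CPU cycles is a win depends on whether the process
# is CPU-bound or disk-bound, which is exactly the trade-off described above.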

awqrre 1 day ago

Storage is definitely the bottleneck for many applications, and at one point in the past the CPU was the main bottleneck for these same applications, so I can understand their point too.
reply

otakucode 17 hours ago

Can't wait for the inevitable discovery that NAND chips are being price-fixed. You would think after the exact same companies price-fixed RAM and LCD panels that peoples radars would go off faster. You expect me to believe that the metals and neodymium magnets and now helium containing drives are cheaper to manufacture than an array of identical NAND chips? NAND chips which are present in a significant percentage of all electronic products purchased by anyone anywhere? When a component is that widely used, it becomes commoditized. Which means its price to manufacture drops exponentially, not linearly like the price drops of NAND has done. This happened with RAM and LCDs as well. When you can look around and see a technology being utilized in dozens of products within your eyesight no matter where you are, and those products are still more expensive than the traditional technology they replace, price-fixing is afoot.
I am open to being wrong on this, but I don't think I am. Can anyone give a plausible explanation why 4TB of NAND storage should cost more to manufacture than a 4TB mechanical hard drive does, given the materials, widespread demand for the component, etc?
reply

pjc50 15 hours ago

"Apples do not cost the same as oranges, therefore oranges are being price-fixed" is not a convincing line of reasoning. The two technologies are very different and NAND storage is much newer, and has always been much more expensive than disk storage. The correct thing to compare NAND prices to is other chips that are being fabbed at the same process node, by die area.
reply

vbezhenar 1 day ago

Maybe SSDs should add something like a "raw mode", where the controller just reports everything it knows about the disk and the operating system takes control of the disk, so the firmware won't cause unexpected pauses. After all, the operating system knows more: which files are unlikely to be touched, which files change often, etc.
reply

wtallis 1 day ago

The industry is moving toward a mid-point of having the flash translation layer still implemented on the drive so that it can present a normal block device interface, but exposing enough details that the OS can have a better idea of whether garbage collection is urgently needed: http://ift.tt/1RC3c0Z...
Moving the FTL entirely onto the CPU throws compatibility out the window; you can no longer access the drive from more than one operating system, and UEFI counts. You'll also need to frequently re-write the FTL to support new flash interfaces.
reply

nqzero 1 day ago

OS compatibility is important for laptops/desktops, but not in at least some database / server applications, and those are the applications that would benefit most from raw access
reply

seibelj 1 day ago

And now we must rely on each OS to implement their own version of a virtual firmware, and do comparisons between different implementations, etc. etc. etc.
reply

TD-Linux 1 day ago

This has existed for a long time, see the Linux kernel's mtd infrastructure, and the filesystems designed to run on top of it (jffs, yaffs). It used to be used in a lot of embedded devices before eMMC became cheap, and is still used in things like home routers.
I am not sure how well mtd's abstraction fits with modern Flash chips, though.
reply

rodionos 18 hours ago

"log-structured I/O management built into SSDs is seriously sub-optimal for databases and apps that use log-structured I/O as well"

This assertion piqued my interest, given that my hands-on experience with HBase speaks to the contrary. The SanDisk paper they refer to (http://ift.tt/1MsAhot...) seems to suggest that most of the issues are related to sub-optimal defragmentation by the disk driver itself - more specifically, the fact that some of the defragmentation is unnecessary. Hardly a reason to blame the databases, and it can be addressed down the road. After all, GC in Java is still an evolving subject.
reply

transfire 1 day ago

Not so sure. There are plenty of benefits to SSDs too. I suspect system designers will just add more RAM to act as cache to offset some of these performance issues. Not to mention further improving temperature control.
reply

stefantalpalaru 1 day ago

More RAM means more reserve power needed to flush it to permanent storage when the main power is cut.
What's more likely to happen is exposing the low level storage and software to kernel drivers.
reply

scurvy 1 day ago

I think transfire was referring to RAM in the system to act as a pagefile read cache, not RAM on the SSD to act as a cache there. There's no power risk to an OS-level read cache.
reply

bravura 1 day ago

I need an external disk for my laptop that I leave plugged in all the time.
What is the most reliable external hard drive type? I thought SSDs were more reliable than spinning disks, especially to leave plugged in constantly, but now I'm not as sure.
reply

Thorbears 1 day ago

This article, and the majority of the comments here, are about using SSDs in server environments, where permanently high load and zero downtime are the norm. And it doesn't even seem to be about SSDs vs HDDs; it is about SSDs vs future technologies.
For personal use, SSDs outperform HDDs in just about every aspect, if you can afford the cost, an SSD is the better choice. And there is nothing mentioned here about downsides of leaving a drive plugged in and powered on at all times.
reply

ScottBurson 1 day ago

I still don't trust SSDs as much as I do spinning disks. While neither kind of drive should be trusted with the only copy of important data, I would say that drives used for backup, or for access to large amounts of data that can be recovered or recreated if lost and where the performance requirements do not demand an SSD, might as well be HDDs -- they're cheaper, and arguably still more reliable. If the workload is write-heavy, HDDs are definitely preferred as they will last much longer.
While all disks can fail, HDDs are less likely to fail completely; usually they just start to develop bad sectors, so you may still be able to recover much of their contents. When an SSD goes, it generally goes completely (at least, so I've read).
So it depends on your needs and how you plan to use the drive. For light use, it probably doesn't matter much either way. For important data, you need to keep it backed up in either case. SSDs use less power, particularly when idle, so if you're running on battery a lot, that would be a consideration as well.
reply

cnvogel 17 hours ago

Anecdotal evidence: 2.5" SATA HDD failed for me suddenly just last Tuesday, SMART was fine before, both the attributes and a long (complete surface scan) selftest I did a few weeks ago after I got this lightly used notebook from a colleague (I only needed for tests).
I think what people experience is that the sudden death of SSDs doesn't occur more often than on HDDs. But with the mechanical issues and slowly building up of bad sectors gone, sudden death is probably the only visible issue left.
(Just my personal opinion.) reply

exabrial 1 day ago

Does NVMe solve the garbage collection problem?
reply

pkaye 1 day ago

NVMe is just the communications protocol between the host and the device. Garbage collection comes from hiding the non-ideal properties of the NAND from the host, primary among them endurance limits and the size difference between write and erase units. You could, for example, move the handling of some of these GC details to the OS filesystem level, but then that becomes more complicated and has to deal with the differences between each NAND technology generation. You couldn't just copy a filesystem image from one drive to another, for example.
reply

nly 1 day ago

"You could for example move the handling of some of these GC details to the OS filesystem level"

Isn't this basically the Linux 'discard' mount flag? Most distros seem to be recommending periodic fstrim for consumer use; what's the best practice in the data center?
reply

pkaye 1 day ago

The trim is kind of like the free counterpart to malloc. When the drive has been fully written, there is a lot less free space, which constrains the GC and makes it work much harder. The trim tells the drive to free up space, allowing for less GC work.
A big difference between client and enterprise drives is the amount of over-provisioning. A simple trick if you can't use trim is to leave 5-10% of the drive unused to improve the effective over-provisioning and improve worst-case performance.
The question of using trim in data centers might be due to the interaction between trim and RAID configurations, because sometimes trim is implemented as nondeterministic (as a hint to the drive; persistent trim can cause performance drops unless you are willing to throw in extra hardware to optimize it well), which causes parity calculation issues when recovering from a failed drive.
reply
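A rough sketch of what the "leave 5-10% unpartitioned" trick buys you; the ~7% factory over-provisioning figure is a common consumer-drive default used here only as an assumption:

def effective_spare_area(factory_op=0.07, unpartitioned=0.0):
    """Approximate fraction of raw flash the controller can use as spare area."""
    return factory_op + (1 - factory_op) * unpartitioned

for reserve in (0.0, 0.05, 0.10):
    print(f"leave {reserve:.0%} unpartitioned -> ~{effective_spare_area(unpartitioned=reserve):.0%} spare area")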

wtallis 1 day ago

Discard operations only tell the drive that a LBA is eligible for GC. It's basically a delete operation that explicitly can be deferred. It does not give the OS any input into when or how GC is done, and it doesn't give the OS any way to observe any details about the GC process.
I think the recommendations for periodic fstrim of free space are due to filesystems usually not taking the time to issue a large number of discard operations when you delete a bunch of data. Even though discards should be faster than a synchronous erase command, not issuing any command to the drive is faster still.
reply
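For reference, the periodic-fstrim approach usually amounts to something like the following; the mount points are hypothetical, and in practice a cron job or systemd timer calls fstrim directly rather than going through Python:

import subprocess

MOUNTPOINTS = ["/", "/var/lib/mysql"]   # hypothetical SSD-backed filesystems

def trim(mountpoint):
    # fstrim -v reports how many bytes were passed down as discards
    subprocess.run(["fstrim", "-v", mountpoint], check=True)

# for mp in MOUNTPOINTS:
#     trim(mp)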

pkaye 1 day ago

SATA drives until recently didn't have queued trims, so if you did an occasional trim between reads/writes you would have to flush the queue. Queued trims were added later on but have been slow to be adopted because it can be difficult to get them working fast, efficient and correct when intermingled with reads and writes. I know at least one recent drive with queued trim had some bugs in the implementation.
reply

wtallis 1 day ago

Yeah, SATA/AHCI's limitations and bugs have affected filesystem design and defaults, to the detriment of other storage technologies. NVMe for example requires the host to take care of ordering requirements, so basically every command sent to the drive can be queued.
reply

ilaksh 1 day ago

Article is garbage. Basically "I told you so" by someone who never got up-to-date after the first SSDs came out and found some numbers to cherry pick that seemed to support his false beliefs.
reply

acquacow 1 day ago

...unless you are Fusion-io, in which case most of these problems don't affect you.
reply

ddorian43 1 day ago

why? isn't fusion-io based on ssd ?
reply

pkaye 1 day ago

They have a special OS driver which moves the FTL closer to the OS. So one of the things they need is multiple GB of memory from the OS to keep the FTL mapping tables. Also, sudden power loss requires OS intervention to piece the FTL structure back together (I seem to recall the original product taking 5 minutes to recover). This also meant you couldn't boot from a Fusion-io drive. I'm not sure if they fixed these issues on more recent drives.
reply

acquacow 1 day ago

They use flash like all other SSDs, but don't use a disk controller or any community defined protocols for mass compatibility. They use their own software and an FPGA to control the flash.
reply

jjtheblunt 1 day ago

fusion-io (i believe, but please verify online) uses a spinning drive for frequent writes and an ssd for frequent reads, with software deciding what goes where, lessening the write traffic to the ssd, and thus wear to it.
reply

acquacow 1 day ago

No, Fusion-io has nothing to do with spinning drives. They make PCI-e flash drives with an FPGA as the "controller" for the flash. There are multiple communication channels, so you can simultaneously read and write from the drives, and there are various tunings available to control how garbage collection works. They are the only "SSD" that doesn't hide all the flash behind any kind of legacy disk controller or group protocol like NVMe
reply

jaysoncena 1 day ago

I think what you're trying to describe is apple's fusion drive.
reply

unixhero 15 hours ago

Great thread.
reply

fleitz 1 day ago

Given the number of IOPS SSDs produce they are a win even if you have to chuck them every 6 months.
reply



from lizard's ghost http://ift.tt/1RC3c11

Saturday, December 19, 2015

ah



from lizard's ghost http://ift.tt/1TVQoAs

not just joan jett...



from lizard's ghost http://ift.tt/22fR21E

ah pek that i am, i'm reminded..of..probably the best..



from lizard's ghost http://ift.tt/1JiVm4J

about the time of joan jett



from lizard's ghost http://ift.tt/1J0mSc8

why disband siah



from lizard's ghost http://ift.tt/1JiSdlq

so she can su rui a bit..



from lizard's ghost http://ift.tt/1OeD38h

ahahahahahahahahaha



from lizard's ghost http://ift.tt/1ND1p5Y

i dunno why i get reminded of her when i see su rui live



from lizard's ghost http://ift.tt/1ND1p5U

酒矸倘賣無



from lizard's ghost http://ift.tt/1OeBJ5c

一樣的月光



from lizard's ghost http://ift.tt/1ND1mXE

跟着感觉走



from lizard's ghost http://ift.tt/1OeBGpW

lithub.com/men-explain-lolita-to-me/

You don’t get to share someone’s sandwich unless they want to share their sandwich with you, and that’s not a form of oppression either. You probably learned that in kindergarten.



from lizard's ghost http://ift.tt/1PcsL5C

Thursday, December 17, 2015

on facewatch

pavel_lishin 2 hours ago

I wonder if the CV Dazzle[1] style will take off in the UK. Maybe we'll finally get the exact cyberpunk future promised us by Gibson, etc.
[1] https://cvdazzle.com/ reply

bsenftner 2 hours ago

No need to go that far: every system I'm aware of fails when there are no eyebrows. (I'm in the FR industry.)
reply

...

fredley 2 hours ago

This does not scale. It might work for a few people in a population of thousands, but beyond that false positives are going to be a real problem.
reply

bsenftner 2 hours ago

I work in facial recognition. When you say it does not scale, how so? I have a system with 975K people in it, the US registered sex offenders database, and with a single quad core laptop a lookup takes about 8 seconds. With a server class 32+ core system the lookup is nearly real time. How does that not scale?
reply

ensignavenger 2 hours ago

I think the OP was concerned about false positives, not processing times. If you ran, say, 10,000 faces through your system/day for a year, how many false positives would you get?
reply

ultramancool 2 hours ago

I assume he more meant accuracy at scale - if you have a large population how does the accuracy do? Do you wind up with many close samples or are things pretty good?
reply

bsenftner 2 hours ago

Pretty much all FR systems generate a list of matches, ranked from closest match on down. The size of the list is configurable. It is also industry standard NOT to use the list as an authority. Given a list of high matches (greater than 90%) a human can quickly filter out obvious false positives, and then the remaining are retained for further consideration. It is also industry standard NOT to rely on FR alone; combining FR with other measures reduces the ability for a false positive.
The issue I find with FR is people expecting it to be some super technology, gleaning significant information from a dust speck. It's not like that. And the media is playing it up with unrealistic descriptions. Remember the original DOOM and its graphics quality? That is where FR is now. We've got a ways to go before the journalist hype is close to reality. And the mature technology will not be FR, but a comprehensive multi-biometric measuring system capturing far more than someone's face.
reply

Retric 1 hour ago

So, if you're looking for 1% of the total population, your false-positive rate before human intervention is 99%. That's not useful for automated systems.
Further, identical twins are going to show an image that would fool human verification, making this questionable for many tasks even with human supervision.
PS: Almost 1% of the population has an identical twin. (% of births is lower, but you get 2 twins.)
reply
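A worked base-rate example along the lines of that objection; the sensitivity and false-positive rate below are illustrative assumptions, not numbers from the thread:

population = 1_000_000
on_watchlist = int(population * 0.01)   # 1% of people are actually of interest
sensitivity = 0.99                      # chance a listed person gets flagged
false_positive_rate = 0.01              # chance an unlisted person gets flagged

true_hits = on_watchlist * sensitivity
false_hits = (population - on_watchlist) * false_positive_rate
share_false = false_hits / (true_hits + false_hits)
print(f"flags: {true_hits + false_hits:,.0f}, of which {share_false:.0%} are false positives")
# Even an apparently accurate matcher produces about as many false flags as true
# ones once the target group is a small slice of the population, which is why
# human review is treated as mandatory.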

bsenftner 1 hour ago

There is only one true match, so the system will always be generating false positives. This technology is not an authority, but a filter. Yes, identical twins will both be identified if they are both in the system and no additional biometric measures are included. Identical twins still have different retinas, and due to lifestyle differences, identical twins beyond age 30 can be told apart fairly easily.



from lizard's ghost http://ift.tt/1QMRnnf

Tuesday, December 15, 2015

rock & roll



from lizard's ghost http://ift.tt/1UtvauJ

Wednesday, December 09, 2015

happiness for everyone

You could take a little trip around Singapore town
In Singapore city bus
To see Collyer Quay and Raffles Place
The Esplanade and all of us

Chorus:
Because in Singapore, Singapore
Their hearts are big and wide you'll find
Because in Singapore, Singapore
You'll find happiness for everyone

Let's go down to the riverside, it's an unforgettable sight
To see the sunrise on a faraway isle, turning darkness into light

(Repeat Chorus)

The buildings are climbing all the way to the sky
And there's a hundred other people who are striving
For people like you and I

(Repeat Chorus)



from lizard's ghost http://ift.tt/21NEI8P

Saturday, November 28, 2015

Thursday, November 19, 2015

some tech events

Can’t Miss Tech Events - An Infographic from PrivacyPolicies.com Blog

Embedded from PrivacyPolicies.com Blog



from lizard's ghost http://ift.tt/1lwFOF2

Wednesday, November 18, 2015

how does the mrt work?



from lizard's ghost http://ift.tt/1S2DfVF

Thursday, November 12, 2015

scale

http://ift.tt/1sVeyN6
http://ift.tt/1yDugSO
http://ift.tt/1Pp9y3b
http://ift.tt/1O4HAXms-Test-Results-Record-Peak-Volume-And-Expected-Smooth-Sailing-for-Tokens

vs

http://ift.tt/1SHoXKO



from lizard's ghost http://ift.tt/1O4HChH

Wednesday, October 28, 2015

random stuff

http://ift.tt/1G1KcoR
http://ift.tt/1RfBlAn
http://ift.tt/1MnFNzc
http://toolsalad.com/
http://ift.tt/1PPESYNsource=reddit&utmmedium=organicpost&utmcampaign=androidchatapptutorialrelease



from lizard's ghost http://ift.tt/1H8WWVH

Tuesday, October 06, 2015

you can’t possibly have already consented to the stuff that sites do from the very moment you arrive.

Things like tracking you, via centralised analytics, and retargeted advertising. Inferring interests and demographics from your browsing habits. Sharing that data with others. Those sorts of things.

You hit a site, a cookie is set, and from that moment onwards – blockers notwithstanding – you’re not anonymous anymore. They might not know who you are yet, but they do know that you’re you. Your consent to this process has been assumed. In their view, you’ve given it implicitly, simply by visiting. Try to construct a real-world analogue of that situation, and you quickly see how there can be no ethics-based defence of the practice.

You can always go elsewhere, of course. You can close the browser tab, or do another search. You've lost nothing, and you're gone. Except that neither of those statements is true.

You have lost something, and not just time: you’ve lost a period of attention, and that’s the only actual currency you should be measuring with. You’re also still there; a ghost of yourself, lingering behind in a row of a database, ready for wider correlation – or reanimation upon your next visit.

When you look at it like that, advertising (and whatever the tracking gleans) doesn’t actually pay for the content you consumed. Actually, the content repays you for what was already taken.



from lizard's ghost http://ift.tt/1NixdS3

Wednesday, September 23, 2015

Greetings Prelude



from lizard's ghost http://ift.tt/1V8BNqj

groupon

"We believe that in order for our geographic footprint to be an even bigger advantage, we need to focus our energy and dollars on fewer countries"

http://ift.tt/1NJB7Sf



from lizard's ghost http://ift.tt/1Pox1hp

Saturday, September 19, 2015

how old is she!?!



from lizard's ghost http://ift.tt/1F89bXa

opposites attract



from lizard's ghost http://ift.tt/1gAgv1J

but i like to hold something i can see!



from lizard's ghost http://ift.tt/1Ol9cK3

if not newrelic, then

http://ift.tt/1R01veD

http://ift.tt/1iWAn0O

http://ift.tt/1LDPLGe

http://ift.tt/1iWAn0Q

http://ift.tt/1LDPLGf

and one more thing...for mysql...
http://pinba.org/
http://ift.tt/1a5oM6V



from lizard's ghost http://ift.tt/1LDPNxY

Monday, September 07, 2015

(Most) all websites should look the same.

Most browsers look the same. Most car dashes look the same. Most newspapers look the same. Most books look the same.
The web is not art. At least, not most of the time. Websites should only look markedly different with good reason. For most clients, there is not a good reason - http://ift.tt/1EJLILG



from lizard's ghost http://ift.tt/1EM0OjC

Sunday, September 06, 2015

sequential access

http://ift.tt/1jezoY8
this leads to

http://ift.tt/UdIphk
which leads to

http://ift.tt/1bIzAK0
or maybe not in that order



from lizard's ghost http://ift.tt/1JXRGsB

Wednesday, August 26, 2015

never harden by hand..i think

http://hardening.io/

http://ift.tt/1wMK7Ri

http://ift.tt/1Ehum8pHatEnterpriseLinux/6/pdf/SecurityGuide/RedHatEnterpriseLinux-6-SecurityGuide-en-US.pdf



from lizard's ghost http://ift.tt/1UclJ0U

Thursday, August 20, 2015

refund la

from http://ift.tt/1JjPoFG
———————————————————————

Hello,

We’re writing to apologize for the number of issues you’ve experienced with your shipments. Your correspondences with us indicate you’ve required refunds on a majority of orders for a number of reasons.

Through the normal course of business, the occasional problem is inevitable. However, you seem to have had an unusually high rate of problems in your account history.

When unusual account activity such as this comes to our attention, we’ll evaluate each account on a case-by-case basis to determine if additional action is necessary, including closing the account. We’d prefer to work with you to avoid that inconvenience, as we do value your business.

If you have any questions in the future regarding your account, please write to us directly at cis@amazon.com.

Continued failure to comply with our policies may result in the removal of both your Amazon.com buying and selling privileges.

We appreciate your cooperation and understanding.

Best regards,

Account Specialist

———————————————————————-

But this is just a warning email, digging a little further it appears other people are getting the following email:

———————————————————————-

Hello,

A careful review of your account indicates you’ve experienced an extraordinary number of incidents with your orders and corresponding shipments.

In the normal course of business, the occasional problem is inevitable. The rate at which such problems have occurred on your account is extraordinary, however, and cannot continue. Effective immediately, your Amazon.com account is closed and you are no longer able to shop in our store.

Please know that any accounts related to yours have also been closed. If you were to open a new account, the same will result and it will also be closed. In the event that you attempt to do so, we will not accept the return of any additional orders, nor will we issue further refunds in connection with any future orders. We appreciate your cooperation in refraining from using our web site.

If you require additional assistance, or have any concerns, feel free to contact us directly at cis@amazon.com.

Please do not contact regular Customer Service again, as they will no longer be able to assist you.

Best regards,

Account Specialist



from lizard's ghost http://ift.tt/1Kx6iLK

Wednesday, August 12, 2015

really really hard?

from http://ift.tt/1IzPAgB

At the end of the day, Good Eggs is a food logistics company. It manages hundreds of fresh, perishable goods from food artisans and farmers, which it packages and delivers to peoples’ homes. While technology enables it to accept customer orders, streamline fulfillment and optimize delivery routing, technology is not a silver bullet. It is highly cost intensive to build fulfillment centers, establish and manage a network of suppliers and maintain inventory. And every city is very different.

Here’s what Spiro told me Good Eggs would be doing to address these challenges in a 2013 interview:

“We’re scalable because we’ve cut out the usual things that drive costs up. One of the reasons that’s possible is because we’re using lots of custom software throughout the process. Producers know exactly how much to harvest and make, which reduces waste. We’re not warehousing anything, all the food that shows up in our Foodhub is pre-sold, and goes out to shoppers that same day – that reduces overhead and makes for a really streamlined process.”

Two years later, it's clear that software and reduced inventory were not enough to make the model scalable. Is this something that could have been learned from one city?

There is a reason it took Amazon and Fresh Direct so long to expand to new cities: “The Last Mile Challenge,” the cost of getting goods from a distribution center to a customer’s home. To address this challenge, Good Eggs originally focused on central pickup locations and managing its own fleet of trucks to deliver groceries to people’s homes for a premium. It has since dropped the pickup locations.

Some organic online grocers like Farmigo, which also serves the Bay Area and Brooklyn, rely solely on pickup to address the last mile issue. “I believe our business model of having an organizer in a neighborhood serve as a community pickup location is an extraordinary advantage in the category,” Farmigo founder and CEO Benzi Ronen tells me. “Our model allows us to avoid the massive costs associated with home delivery and can scale across regions in a way that is affordable for the consumer and economical from a business perspective.”

Others like Door to Door Organics do offer home delivery, but it manages the last mile by increasing the basket size of its shoppers. Subscribers sign up for a CSA-like vegetable and produce box subscription, and then are able to add groceries on top of their order. It seems like Good Eggs is also experimenting with bundles now.

they might want to look at http://ift.tt/1f6pQ0oWholesaleGrocers ?

or perhaps this is more honest?

"The single biggest mistake we made was growing too quickly, to multiple cities, before fully figuring out the challenges of building an entirely new food supply chain."



from lizard's ghost http://ift.tt/1WgUruh

Sunday, August 02, 2015

Monday, July 27, 2015

logiciel libre et logiciel gratuit

You have a defective language. Try French, with logiciel libre et logiciel gratuit.
Free software (logiciel libre) is not software given for free, for USD0.00.
Free software (logiciel libre) is software that gives freedoms to the user:
- The freedom to run the program as you wish, for any purpose (freedom 0).
- The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.
- The freedom to redistribute copies so you can help your neighbor (freedom 2).
- The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.
If creating software costs you dearly, then of course, you shall invoice your users for selling your software to them. But give them those freedoms! Sell free software. Don't give enslaving software for free (gratuitement).
As a user, I would not use freeware (logiciel gratuit). Freeware (logiciel gratuit) is enslaving; usually the source is not even available, so I cannot verify that freeware software (logiciel gratuit) doesn't do something evil behind my back.
Notice that if you sell your logiciel libre under the GPL license, you have to give the sources of your logiciel libre to your paying customer, but not to anybody else. Of course, then your customer has the 4 freedoms described above, and he could further sell or give your software, along with the sources to somebody else. But this is work and it would involve charges, so he may choose not to do so.



from lizard's ghost http://ift.tt/1OvBeiL

Wednesday, July 22, 2015

on ux

i hear people say apple are a ux firm not a tech firm.
i wish they would stop messing with watches, phones and tablets already. the ux is plenty good on android, windows, etc.
i wish they would start making clustered webservers, databases, caching proxies, key-value stores, backup software, distributed filesystems, etc. now these need better ux.



from lizard's ghost http://ift.tt/1KjzUhP

Saturday, July 11, 2015

a keeper

http://ift.tt/1HI7bp7

Larry Land
Revised and much faster, run your own high-end cloud gaming service on EC2!
Jul 5, 2015

Playing Witcher 3, a GPU-intensive game on a 2015 fanless Macbook

I’ve written about using EC2 as a gaming rig in the past. After spending some time and getting all sorts of feedback from many people, I’m re-writing the article from before, except with all the latest and greatest optimizations to really make things better. Now we’re using things like NvFBC for graphics card H.264 encoding, using the built-in SSD for better hard drive performance, plus getting rid of things like VNC. I’ve also made the OpenVPN instructions easier to follow.

This is the perfect solution for you fanatics who love to play AAA games but are stuck with one of the new fanless Macbooks (or similarly slow machines). This is a pretty awesome alternative to building out a new gaming PC. Just make sure you have a good internet connection (ideally 30mbit+ and <50ms ping to the closest Amazon datacenter). This article will assume you’re on a Mac client, though it should work on Linux or Windows with some minor changes in the client tools.

TLDR: I have made an AMI for you to use such that you don’t need to go through this lengthy step-by-step. Note you become less of a badass if you use the AMI instead of doing it yourself ;). See instructions below.

Costs

Believe it or not, it’s actually not that expensive to play games this way. Although you could potentially save moneys by streaming all your games, cost savings isn’t really the primary purpose. Craziness is, of course. :)

$0.11/hr - Spot instance of a g2.2xlarge
$0.41/hr - Streaming 10mbit at $0.09/GB
$0.01/hr - EBS storage for a 35GB OS drive ($3.50/mo)
You’re looking at $0.53/hr to play games this way. Not too bad. That’s around 1850 hours of gaming for the cost of a $1000 gaming PC. Note that prices vary for different datacenters.
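As a quick sanity check of that hourly figure (this check is mine, not part of the original article; bc ships with OS X and most Linux distros):

echo '0.11 + 0.41 + 0.01' | bc    # .53  -> dollars per hour
echo '1000 / 0.53' | bc           # 1886 -> hours of gaming per $1000 of hardware budget, in the same ballpark as above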

Creating your own AMI with the right config

On AWS, create a new EC2 instance. Use defaults everywhere except as mentioned below:
Base image should be Microsoft Windows Server 2012 R2 Base (since Windows still has all the best games)
Use a g2.2xlarge instance (to get an NVIDIA GRID K520 graphics card). Though a larger instance does exist in some regions, I have been unsuccessful in taking advantage of the multiple vGPUs w/ SLI. Plus it’s four times the cost.
Use a Spot instance; it’s significantly cheaper (a fraction of the regular cost) than an on-demand instance. Note that the cost will vary depending on region. I usually bid a penny more. (A CLI sketch for this step follows this list.)
For the storage step, leave everything at the defaults. This will provision a 35GB EBS drive where your OS will live, and a 65GB SSD-backed instance-store (which is super fast and where the games will go). This instance-store will be available as a Z:\ drive.
For the Security Group, I’d recommend creating one that has 3 rules: one that allows All TCP, one that allows All UDP and one that allows All ICMP. Source should be from Anywhere for all 3. Yes, it’s not maximum security, but with the VPNs you’ll be setting up, it’ll be very convenient.
Finally, for the Key Pair, create a new one since you’ll need one for Windows (to retrieve the Administrator password later)
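If you’d rather script this step than click through the console, here is a rough AWS CLI sketch. It is not part of the original walkthrough; the AMI ID, key name and security group ID are placeholders to replace with your own, and the bid price is just the “penny more” idea from above:

aws ec2 request-spot-instances \
  --spot-price "0.12" \
  --instance-count 1 \
  --type "one-time" \
  --launch-specification '{
    "ImageId": "ami-xxxxxxxx",
    "InstanceType": "g2.2xlarge",
    "KeyName": "my-windows-key",
    "SecurityGroupIds": ["sg-xxxxxxxx"]
  }'

The ImageId should be the Windows Server 2012 R2 Base AMI for your region (the IDs differ per region).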

Once your machine has spun up, get the Windows password using your private key. Connect via Microsoft Remote Desktop and add the details in there. Also make sure to select Connect to admin session to avoid GPU detection troubles. Note that your first connection might have a black screen for about a minute as it creates your user profile.

Before we go too crazy:
Disable the IE Enhanced Security Configuration (so you can use IE)
Enable auto-login
Disable the windows firewall
Enable showing filename extensions

Download and install version 347.88 of the GeForce GTX TITAN X driver package for Windows 8.1 (64-bit). Only the GeForce package contains the latest drivers for the GRID cards. If you get an error when installing the drivers that says it couldn’t detect a GeForce card, you’re not in Remote Desktop as an admin session. Reboot when asked. Note that the latest driver versions sometimes leave Windows unable to restart, which is why this guide pins version 347.88.

The GRID cards have an optimization Steam can use which can offload the H.264 video encoding to the GPU. We need to enable this though. Sign up for a developer account with NVidia and download and extract the GRID SDK. In the bin directory run the following (using a Command Prompt): NvFBCEnable.exe -enable -noreset. Reboot again.

In order to make games actually use the video card, you’ll need to completely remove the default display driver. Open up Device Manager, and a) disable the Microsoft Basic Display Adapter, b) uninstall it and c) run the following in a Command Prompt. Reboot afterwards.

takeown /f C:\Windows\System32\Drivers\BasicDisplay.sys
echo Y | cacls C:\Windows\System32\Drivers\BasicDisplay.sys /G Administrator:F
del C:\Windows\System32\Drivers\BasicDisplay.sys
(Device Manager should now list only the NVIDIA GRID K520.)

Start the Windows Audio service as per the instructions here. Since this is an EC2 machine there is no soundcard, so install Razer Surround to get a virtual soundcard, AND you get fancy 5.1 simulation! Note that there’s no need to create/login to a Razer ID account.

Download OpenVPN from here. Select the 64-bit Vista installer and, when installing, make sure to select all components. After installing, open a Command Prompt and run the following:

cd C:\Program Files\OpenVPN\easy-rsa
init-config
vars
clean-all
build-ca (leave all answers blank)
build-key-server server (leave all answers blank except Common Name "server", yes to Sign and yes to Commit)
build-key client (leave all answers blank except Common Name "client", yes to Sign and yes to Commit)
build-dh
robocopy keys ../config ca.crt dh1024.pem server.crt server.key
Then:

Download my server config and place it in the C:\Program Files\OpenVPN\config directory.
Use the Microsoft Remote Desktop file sharing feature to download the following files from the C:\Program Files\OpenVPN\easy-rsa\keys directory: ca.crt, client.crt, and client.key onto your client computer.
Combine those files along with my client config, up.sh and down.sh. The up and down scripts are used to forward multicast from your client to the server. Note you’ll need Wireshark installed on the Mac (with pcap support) in order to make multicast more reliable with the up/down scripts.
Edit the client.ovpn file to have your server’s IP in it.
Install TunnelBlick on your Mac. Rename the folder with all the files from above (on your client) to have a .tblk extension and double-click on it. TunnelBlick will install the VPN. (A shell sketch of this bundling step follows this list.)
Finally, start the OpenVPN service on the server (you should also set it to start Automatically), and connect to it from the client. Don’t bother with the OpenVPN GUI stuff.
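On the Mac side, the bundle-and-rename step above boils down to something like this (a sketch of my own, assuming you kept the file names used in this guide and are in the directory where you downloaded them):

mkdir ec2gaming.tblk
cp ca.crt client.crt client.key client.ovpn up.sh down.sh ec2gaming.tblk/
open ec2gaming.tblk    # Tunnelblick picks up the .tblk bundle and installs the configuration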

Phew! That was difficult, though you’re pretty badass for getting it done! Note that alternatively you can use ZeroTier (make sure to enable IP addressing on their website w/ an IP range) and skip all of the above OpenVPN craziness. ;) Another alternative to ZeroTier is Hamachi.

Create a new file, C:\startup.bat which contains md Z:\SteamLibrary. The idea is that when the computer boots fresh, it will ensure that the Z drive is initialized properly for Steam to use as a game storage drive. Add this script via gpedit.msc to your startup. See instructions here.

Install Steam and set the following settings:
Make it remember your username/password so it can auto-login every time
In the Steam preferences, create and add Z:\SteamLibrary to your Downloads > Steam Library Folders.
I recommend you turn off Automatic Sign-in of Friends (since this server will always be logged in) in Friends, and turn off the promo dialog in Interface (at the bottom).
Enable hardware encoding at In-Home Streaming > Advanced Host Options > Enable Hardware Encoding

On your mac, make sure you have Steam installed, and turn on In-Home Streaming > Enable Hardware Decoding. Similar settings to the above might also be applicable.

Once all is set up, run the following to log out of the Remote Desktop session without locking the screen (so games can start): tscon %sessionname% /dest:console. I suggest creating a shortcut on the desktop for this.
Gaming time!

Make sure the image you created above is ready. I recommend the gaming-up.sh and gaming-down.sh scripts mentioned below to load/save state via an AMI.

With TunnelBlick on your client, connect to the VPN and start Steam on your client. It should detect the remote machine.
(Screenshots: TunnelBlick connected; Steam in-home connected.)

Select a game to install (make sure to install it to the Z drive), and once it’s installed, click the Stream button!

(Screenshots: Steam on the server, Steam on the client, the client streaming Deus Ex: Human Revolution, and a closer view of the streaming stats.)

Further optimizations

Because these machines have a lot of RAM, I’d suggest setting the Pagefile to something small like 16MB. See how here. The smaller your C:\ drive, the faster the AMI creation will be.
Oftentimes games will crash when trying to start. It’s usually because they’re missing certain libraries. Make sure to install .NET 3.5, the XInput/XAudio libraries, and the Media Foundation feature package (from Server Manager). Also force-run Windows Update and apply everything (including Optional packages).
I wouldn’t suggest attempting to write scripts to back up your Z:\ drive to C:\ when shutting down your machine. The games download quite quickly from Steam on a fresh boot, and the C:\ drive (EBS) is quite slow.
To make it easy to start/stop the gaming instance I’ve made gaming-up.sh and gaming-down.sh. gaming-down.sh will terminate the instance after creating an AMI, and gaming-up.sh will restore this AMI. You’ll need jq installed. (A rough sketch of the teardown half is shown after this list.)
Some games don’t have Steam Cloud. I’d recommend installing Dropbox and syncing the My Documents directory with it. That way you won’t lose your save game files between terminations.
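I don’t know exactly what is inside the author’s gaming-down.sh, but a minimal sketch of the “create an AMI, then terminate” idea with the AWS CLI and jq might look like this (the Name=ec2gaming tag is a made-up convention; adapt it to however you track your instance):

#!/bin/bash
# Find the running gaming instance (assumes it carries a Name=ec2gaming tag).
INSTANCE_ID=$(aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=ec2gaming" "Name=instance-state-name,Values=running" \
  | jq -r '.Reservations[0].Instances[0].InstanceId')
# Snapshot it into an AMI so the next gaming-up.sh can restore it.
AMI_ID=$(aws ec2 create-image --instance-id "$INSTANCE_ID" --name "ec2gaming-$(date +%s)" \
  | jq -r '.ImageId')
aws ec2 wait image-available --image-ids "$AMI_ID"
# Only terminate once the image is ready.
aws ec2 terminate-instances --instance-ids "$INSTANCE_ID"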
Performance gauging

There are two ways to see how your streaming performance is doing.

The first is to have the Display performance information option enabled in your client’s Steam In-Home Streaming settings. Then, when in-game, press F6 (Fn+F6 on a Mac) and information will be displayed at the bottom of the screen.

Make sure that the Encoder is always NVFBC. If it’s not, things will slow down significantly, since the H.264 encoding of the video will be done on the CPU (slower than the hardware H.264 encoding on the GRID GPU). If you see any form of x264 here, it’s using CPU encoding.
Same goes for making sure you’re not doing software decoding. VideoToolbox is good if that’s what you see.
The Incoming bitrate will be high, so make sure nobody else is using your internet!
Packet Loss needs to be extremely low. Oftentimes MTU problems will push this into the double digits, making the game unplayable. Do a Google search for how to fix MTU problems (a couple of OpenVPN MTU tweaks are also sketched after this list).
The graph on the right side is important. The colors basically mean:
Dark Blue: amount of time to generate/encode the frame. If this is past 10ms, turn down your game resolution and settings.
Light Blue: amount of time to transfer the frame over the network. This will be the crazy one if you don’t have a spectacular connection or one of your roommates decides to start BitTorrent, etc. Try to keep this one as close to the Dark Blue line as possible, but much of it is out of your control.
Red: amount of time to decode and display the H.264 video. You can’t do much here except keep the resolution down and make sure hardware decoding is on.
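For the MTU problems mentioned above, one knob worth trying (my suggestion, not advice from the original article) is lowering the tunnel MTU in OpenVPN. Both tun-mtu and mssfix are standard OpenVPN directives; the values here are starting points to experiment with, not known-good numbers:

# On the Mac, before (re)importing the .tblk bundle:
echo 'tun-mtu 1400' >> client.ovpn
echo 'mssfix 1360'  >> client.ovpn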

The second, more detailed way to look at streaming performance is to press F8 while gaming. Note this will likely crash your Mac Client in the process. An example of the output (found at C:\Program Files (x86)\Steam\logs\streaming_log.txt on the server):

{ "GameNameID" "The Witcher 3: Wild Hunt" "TimeSubmitted" "1435438519" "ResolutionX" "1280" "ResolutionY" "800" "CaptureDescriptionID" "Desktop NVFBC H264" "DecoderDescriptionID" "VideoToolbox hardware decoding" "BandwidthLimit" "15000" "FramerateLimit" "0" "SlowGamePercent" "0" "SlowCapturePercent" "0" "SlowConvertPercent" "0" "SlowEncodePercent" "0" "SlowNetworkPercent" "0" "SlowDecodePercent" "0" "SlowDisplayPercent" "0" "AvgClientBitrate" "21.21160888671875" "StdDevClientBitrate" "28.340831756591797" "AvgServerBitrate" "10105.5224609375" "StdDevServerBitrate" "0" "AvgLinkBandwidth" "104074.671875" "AvgPingMS" "8.4620447158813477" "StdDevPingMS" "1.4712700843811035" "AvgCaptureMS" "4.6132941246032715" "StdDevCaptureMS" "2.260094165802002" "AvgConvertMS" "0" "StdDevConvertMS" "0" "AvgEncodeMS" "4.6132822036743164" "StdDevEncodeMS" "2.2601768970489502" "AvgNetworkMS" "6.5326347351074219" "StdDevNetworkMS" "2.3294456005096436" "AvgDecodeMS" "2.4401805400848389" "StdDevDecodeMS" "3.9411675930023193" "AvgDisplayMS" "6.3233218193054199" "StdDevDisplayMS" "6.5956048965454102" "AvgFrameMS" "27.07377815246582" "StdDevFrameMS" "14.234905242919922" "AvgFPS" "60.287784576416016" "StdDevFPS" "9.2482481002807617" "BigPicture" "0" "KeyboardMouseInput" "1" "GameControllerInput" "0" "SteamControllerInput" "0" } See more information about this file in the Steam In-Home Streaming Steam Group.

Problems?

If when you start streaming a game, Steam says the “Screen is locked”, you’ll need to make sure you close your Remote Desktop session with tscon %sessionname% /dest:console.
If you can only see part of the game view, it’s likely it launched as a window and it’s being improperly cropped by Steam. Make sure your game is in fullscreen mode (usually done in the game’s options).
If the game is extremely choppy, check the Packet Loss percentage by pressing F6. If it’s any higher than 1% or 2% (especially if it’s around 50%), you’re likely having an MTU problem. Try adjusting it according to methods mentioned on Google.
If the computers can’t see each other, go to the In-Home Streaming settings on your Steam client and disable and re-enable streaming. That will send the UDP multicast packet, which should be forwarded over the VPN and get the server to reveal itself. Also, check your VPN connection in general.
If when you start Steam on your Mac you get a Streaming error, follow the instructions here to fix the executable.
Using the pre-made AMI

Let’s face it, following all of the stuff above is a long, tedious process. Though it’s actually quite interesting how everything works, I’m sure you just want to get on the latest GTA pronto. As such, I’ve made an AMI with everything above, including the optimizations.

On AWS, create a new EC2 instance. Use the instructions from the first step, except select the ec2gaming Community AMI. Don’t worry about the Key Pair.

Follow step 2 except the password for the instance is rRmbgYum8g. Once you log in using Microsoft Remote Desktop, you’ll be asked to change the Administrator password. Change it to something.

Install TunnelBlick on your Mac. Download the VPN configuration from here and unzip it. In the client.ovpn file, change YOURHOSTNAMEHERE to your instance’s IP/hostname. Rename this folder to ec2gaming.tblk and double click on it to import. Connect to the VPN with username Administrator and the password you set in the previous step.

Set up Steam as above, though it’s already installed. Just login with your account credentials and configure it accordingly.

You should be good to go! Use the logout shortcut on the Desktop to log out, and then follow the standard Gaming Time section above.

Huge thanks for helping me with this goes out to: @crisg, @martinmroz, Jeff K. from AWS Support, Daniel Unterberger, and Clive Blackledge




from lizard's ghost http://ift.tt/1Mlt77o

Wednesday, July 08, 2015

btc cc, how can it be useful?

http://ift.tt/1HLWt0N

http://ift.tt/1cvJpqG

http://ift.tt/1G5HWWq

http://ift.tt/1zaE7OG



from lizard's ghost http://ift.tt/1Tkvmfy

Linus, on AI

I just don’t see the situation where you suddenly have some existential crisis because your dishwasher is starting to discuss Sartre with you.



from lizard's ghost http://ift.tt/1UyknRa

Tuesday, July 07, 2015

Shelley-the dreariest journey

I never was attached to that great sect,
Whose doctrine is, that each one should select
Out of the crowd a mistress or a friend,
And all the rest, though fair and wise, commend
To cold oblivion, though it is the code
Of modern morals, and the beaten road
Which those poor slaves with weary footsteps tread,
By the broad highway of the world, and so
With one chained friend, perhaps a jealous foe,
The dreariest and the longest journey go.



from lizard's ghost http://ift.tt/1HJLllf

Monday, July 06, 2015

reinventing the wheel

There’s been a joke for a few years now that all the web applications you’ve ever heard of are actually some other Internet protocol, reconfigured to answer on port 80, the standard port for (unencrypted) web traffic. Some of these parallels are obvious, and even more or less literal: Gmail is IMAP and SMTP, and Google Talk is (or was) XMPP, all on port 80 in the same browser window. Others are more metaphorical. Twitter is IRC on port 80, although one primarily listens to users instead of channels — but listening to channels is still possible, if one uses a client that can follow hashtags, and the syntax is even the same. Dropbox is FTP on port 80. Reddit is Usenet on port 80.



from lizard's ghost http://ift.tt/1KFoqrG

Friday, July 03, 2015

$ ./antirez --voice "godfather" --talk

a bit more about the "godfather" bit..http://ift.tt/18kVvm3



from lizard's ghost http://ift.tt/1RUJDge

Tuesday, June 09, 2015

what a strange thing to accuse Amazon of..

I believe that reading only packaged microwavable fiction ruins the taste, destabilizes the moral blood pressure, and makes the mind obese.
- http://ift.tt/1darSvR



from lizard's ghost http://ift.tt/1HlaEZ1

Sunday, June 07, 2015

alternatives..

dell is competing with chromebooks..
http://ift.tt/1EqJaL0
http://ift.tt/1BkmbS4

sabayon comes with steam..
http://ift.tt/1FGec2B
but unfortunately steam probably doesn't work on reactOS..



from lizard's ghost http://ift.tt/1JzuTCl

Thursday, June 04, 2015

Everything at Google, from Search to Gmail, is packaged and run in a Linux container.

Each week we launch more than 2 billion container instances across our global data centers, and the power of containers has enabled both more reliable services and higher, more-efficient utilization of our infrastructure. We developed Kubernetes, the open source orchestration system for Docker containers, based on our experience engineering Google’s internal systems. Now Kubernetes powers Container Engine. With Container Engine, you can focus on your application, rather than managing a compute cluster or manually scheduling your containers.
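For flavor, here is roughly what “focusing on your application” looks like in practice: you describe a container and hand it to Kubernetes, which picks a machine and runs it. This is a generic sketch of my own, not taken from the linked post; the cluster name is made up, and it assumes gcloud and kubectl are already installed and configured:

gcloud container clusters create demo-cluster
cat > nginx-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
EOF
kubectl create -f nginx-pod.yaml   # Kubernetes schedules the container onto a node
kubectl get pods                   # watch it come up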

http://ift.tt/1E3iYHb



from lizard's ghost http://ift.tt/1QraCiP

Wednesday, June 03, 2015

finding a soulmate

Your love is one in a million
You couldn’t buy it at any price.
But of the 9.999 hundred thousand other loves,
Statistically, some of them would be equally nice.

With all my heart and all my mind I know one thing is true:
I have just one life and just one love and, my love, that love is you.
And if it wasn't for you, baby,
I really think that I would
have somebody else.

http://ift.tt/OGmgZU



from lizard's ghost http://ift.tt/1QoFvEG

but openssh hasn't met them before..

https://twitter.com/damienmiller/status/605864767232704513



from lizard's ghost http://ift.tt/1GY9O4b

Looking Forward: Microsoft: Support for Secure Shell (SSH)

As Microsoft has shifted towards a more customer-oriented culture, Microsoft engineers are using social networks, tech communities and direct customer feedback as an integral part of how we make decisions about future investments. A popular request the PowerShell team has received is to use the Secure Shell protocol and shell sessions (aka SSH) to interoperate between Windows and Linux – both Linux connecting to and managing Windows via SSH and, vice versa, Windows connecting to and managing Linux via SSH. Thus, the combination of PowerShell and SSH will deliver a robust and secure solution to automate and remotely manage Linux and Windows systems.

SSH solutions are available today from a number of vendors and communities, especially in the Linux world. However, there are limited implementations customers can deploy in Windows production environments. After reviewing these alternatives, the PowerShell team realized the best option would be to adopt an industry-proven solution while providing tight integration with Windows; a solution that Microsoft will deliver in Windows while working closely with subject matter experts across the planet to build it. Based on these goals, I’m pleased to announce that the PowerShell team will support and contribute to the OpenSSH community - we are very excited to work with them to deliver the PowerShell and Windows SSH solution!

A follow-up question the reader might have is: when and how will the SSH support be available? The team is in the early planning phase, and there are no exact dates yet. However, the PowerShell team will provide details on availability dates in the near future.

Finally, I'd like to share some background on today’s announcement, because this is the 3rd time the PowerShell team has attempted to support SSH. The first attempts were during PowerShell V1 and V2 and were rejected. Given our changes in leadership and culture, we decided to give it another try and this time, because we are able to show the clear and compelling customer value, the company is very supportive. So I want to take a minute and thank all of you in the community who have been clearly and articulately making the case for why and how we should support SSH! Your voices matter and we do listen.

Thank you!

Angel Calvo
Group Software Engineering Manager
PowerShell Team

Additional Information

For more information on SSH please go to http://ift.tt/1uGPf5v

For information on OpenSSH go to: http://ift.tt/1uVGPHO



from lizard's ghost http://ift.tt/1M3hsd1

"still alive" - portal



from lizard's ghost http://ift.tt/1GXuzgk

Saturday, May 30, 2015

The Relativity of Wrong By Isaac Asimov

I RECEIVED a letter the other day. It was handwritten in crabbed penmanship so that it was very difficult to read. Nevertheless, I tried to make it out just in case it might prove to be important. In the first sentence, the writer told me he was majoring in English literature, but felt he needed to teach me science. (I sighed a bit, for I knew very few English Lit majors who are equipped to teach me science, but I am very aware of the vast state of my ignorance and I am prepared to learn as much as I can from anyone, so I read on.)

It seemed that in one of my innumerable essays, I had expressed a certain gladness at living in a century in which we finally got the basis of the universe straight.

I didn't go into detail in the matter, but what I meant was that we now know the basic rules governing the universe, together with the gravitational interrelationships of its gross components, as shown in the theory of relativity worked out between 1905 and 1916. We also know the basic rules governing the subatomic particles and their interrelationships, since these are very neatly described by the quantum theory worked out between 1900 and 1930. What's more, we have found that the galaxies and clusters of galaxies are the basic units of the physical universe, as discovered between 1920 and 1930.

These are all twentieth-century discoveries, you see.

The young specialist in English Lit, having quoted me, went on to lecture me severely on the fact that in every century people have thought they understood the universe at last, and in every century they were proved to be wrong. It follows that the one thing we can say about our modern "knowledge" is that it is wrong. The young man then quoted with approval what Socrates had said on learning that the Delphic oracle had proclaimed him the wisest man in Greece. "If I am the wisest man," said Socrates, "it is because I alone know that I know nothing." The implication was that I was very foolish because I was under the impression I knew a great deal.

My answer to him was, "John, when people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together."

The basic trouble, you see, is that people think that "right" and "wrong" are absolute; that everything that isn't perfectly and completely right is totally and equally wrong.

However, I don't think that's so. It seems to me that right and wrong are fuzzy concepts, and I will devote this essay to an explanation of why I think so.

When my friend the English literature expert tells me that in every century scientists think they have worked out the universe and are always wrong, what I want to know is how wrong are they? Are they always wrong to the same degree? Let's take an example.

In the early days of civilization, the general feeling was that the earth was flat. This was not because people were stupid, or because they were intent on believing silly things. They felt it was flat on the basis of sound evidence. It was not just a matter of "That's how it looks," because the earth does not look flat. It looks chaotically bumpy, with hills, valleys, ravines, cliffs, and so on.

Of course there are plains where, over limited areas, the earth's surface does look fairly flat. One of those plains is in the Tigris-Euphrates area, where the first historical civilization (one with writing) developed, that of the Sumerians.

Perhaps it was the appearance of the plain that persuaded the clever Sumerians to accept the generalization that the earth was flat; that if you somehow evened out all the elevations and depressions, you would be left with flatness. Contributing to the notion may have been the fact that stretches of water (ponds and lakes) looked pretty flat on quiet days.

Another way of looking at it is to ask what the "curvature" of the earth's surface is. Over a considerable length, how much does the surface deviate (on the average) from perfect flatness? The flat-earth theory would make it seem that the surface doesn't deviate from flatness at all, that its curvature is 0 to the mile.

Nowadays, of course, we are taught that the flat-earth theory is wrong; that it is all wrong, terribly wrong, absolutely. But it isn't. The curvature of the earth is nearly 0 per mile, so that although the flat-earth theory is wrong, it happens to be nearly right. That's why the theory lasted so long.

There were reasons, to be sure, to find the flat-earth theory unsatisfactory and, about 350 B.C., the Greek philosopher Aristotle summarized them. First, certain stars disappeared beyond the Southern Hemisphere as one traveled north, and beyond the Northern Hemisphere as one traveled south. Second, the earth's shadow on the moon during a lunar eclipse was always the arc of a circle. Third, here on the earth itself, ships disappeared beyond the horizon hull-first in whatever direction they were traveling.

All three observations could not be reasonably explained if the earth's surface were flat, but could be explained by assuming the earth to be a sphere.

What's more, Aristotle believed that all solid matter tended to move toward a common center, and if solid matter did this, it would end up as a sphere. A given volume of matter is, on the average, closer to a common center if it is a sphere than if it is any other shape whatever.

About a century after Aristotle, the Greek philosopher Eratosthenes noted that the sun cast a shadow of different lengths at different latitudes (all the shadows would be the same length if the earth's surface were flat). From the difference in shadow length, he calculated the size of the earthly sphere and it turned out to be 25,000 miles in circumference.

The curvature of such a sphere is about 0.000126 per mile, a quantity very close to 0 per mile, as you can see, and one not easily measured by the techniques at the disposal of the ancients. The tiny difference between 0 and 0.000126 accounts for the fact that it took so long to pass from the flat earth to the spherical earth.
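As an aside (not part of Asimov's essay), the 0.000126 figure can be recovered with a quick back-of-the-envelope calculation: take a sphere of circumference 25,000 miles and ask how far its surface drops away from a tangent line over one mile, using the standard small-angle approximation h ≈ d²/2R:

\[
R = \frac{25000}{2\pi} \approx 3979 \text{ miles}, \qquad
h \approx \frac{d^{2}}{2R} = \frac{1^{2}}{2 \times 3979} \approx 0.000126 \text{ miles} \approx 8 \text{ inches per mile}.
\]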

Mind you, even a tiny difference, such as that between 0 and 0.000126, can be extremely important. That difference mounts up. The earth cannot be mapped over large areas with any accuracy at all if the difference isn't taken into account and if the earth isn't considered a sphere rather than a flat surface. Long ocean voyages can't be undertaken with any reasonable way of locating one's own position in the ocean unless the earth is considered spherical rather than flat.

Furthermore, the flat earth presupposes the possibility of an infinite earth, or of the existence of an "end" to the surface. The spherical earth, however, postulates an earth that is both endless and yet finite, and it is the latter postulate that is consistent with all later findings.

So, although the flat-earth theory is only slightly wrong and is a credit to its inventors, all things considered, it is wrong enough to be discarded in favor of the spherical-earth theory.

And yet is the earth a sphere?

No, it is not a sphere; not in the strict mathematical sense. A sphere has certain mathematical properties - for instance, all diameters (that is, all straight lines that pass from one point on its surface, through the center, to another point on its surface) have the same length.

That, however, is not true of the earth. Various diameters of the earth differ in length.

What gave people the notion the earth wasn't a true sphere? To begin with, the sun and the moon have outlines that are perfect circles within the limits of measurement in the early days of the telescope. This is consistent with the supposition that the sun and the moon are perfectly spherical in shape.

However, when Jupiter and Saturn were observed by the first telescopic observers, it became quickly apparent that the outlines of those planets were not circles, but distinct ellipses. That meant that Jupiter and Saturn were not true spheres.

Isaac Newton, toward the end of the seventeenth century, showed that a massive body would form a sphere under the pull of gravitational forces (exactly as Aristotle had argued), but only if it were not rotating. If it were rotating, a centrifugal effect would be set up that would lift the body's substance against gravity, and this effect would be greater the closer to the equator you progressed. The effect would also be greater the more rapidly a spherical object rotated, and Jupiter and Saturn rotated very rapidly indeed.

The earth rotated much more slowly than Jupiter or Saturn so the effect should be smaller, but it should still be there. Actual measurements of the curvature of the earth were carried out in the eighteenth century and Newton was proved correct.

The earth has an equatorial bulge, in other words. It is flattened at the poles. It is an "oblate spheroid" rather than a sphere. This means that the various diameters of the earth differ in length. The longest diameters are any of those that stretch from one point on the equator to an opposite point on the equator. This "equatorial diameter" is 12,755 kilometers (7,927 miles). The shortest diameter is from the North Pole to the South Pole and this "polar diameter" is 12,711 kilometers (7,900 miles).

The difference between the longest and shortest diameters is 44 kilometers (27 miles), and that means that the "oblateness" of the earth (its departure from true sphericity) is 44/12755, or 0.0034. This amounts to 1/3 of 1 percent.

To put it another way, on a flat surface, curvature is 0 per mile everywhere. On the earth's spherical surface, curvature is 0.000126 per mile everywhere (or 8 inches per mile). On the earth's oblate spheroidal surface, the curvature varies from 7.973 inches to the mile to 8.027 inches to the mile.

The correction in going from spherical to oblate spheroidal is much smaller than going from flat to spherical. Therefore, although the notion of the earth as a sphere is wrong, strictly speaking, it is not as wrong as the notion of the earth as flat.

Even the oblate-spheroidal notion of the earth is wrong, strictly speaking. In 1958, when the satellite Vanguard I was put into orbit about the earth, it was able to measure the local gravitational pull of the earth--and therefore its shape--with unprecedented precision. It turned out that the equatorial bulge south of the equator was slightly bulgier than the bulge north of the equator, and that the South Pole sea level was slightly nearer the center of the earth than the North Pole sea level was.

There seemed no other way of describing this than by saying the earth was pear-shaped, and at once many people decided that the earth was nothing like a sphere but was shaped like a Bartlett pear dangling in space. Actually, the pear-like deviation from oblate-spheroid perfection was a matter of yards rather than miles, and the adjustment of curvature was in the millionths of an inch per mile.

In short, my English Lit friend, living in a mental world of absolute rights and wrongs, may be imagining that because all theories are wrong, the earth may be thought spherical now, but cubical next century, and a hollow icosahedron the next, and a doughnut shape the one after.

What actually happens is that once scientists get hold of a good concept they gradually refine and extend it with greater and greater subtlety as their instruments of measurement improve. Theories are not so much wrong as incomplete.

This can be pointed out in many cases other than just the shape of the earth. Even when a new theory seems to represent a revolution, it usually arises out of small refinements. If something more than a small refinement were needed, then the old theory would never have endured.

Copernicus switched from an earth-centered planetary system to a sun-centered one. In doing so, he switched from something that was obvious to something that was apparently ridiculous. However, it was a matter of finding better ways of calculating the motion of the planets in the sky, and eventually the geocentric theory was just left behind. It was precisely because the old theory gave results that were fairly good by the measurement standards of the time that kept it in being so long.

Again, it is because the geological formations of the earth change so slowly and the living things upon it evolve so slowly that it seemed reasonable at first to suppose that there was no change and that the earth and life always existed as they do today. If that were so, it would make no difference whether the earth and life were billions of years old or thousands. Thousands were easier to grasp.

But when careful observation showed that the earth and life were changing at a rate that was very tiny but not zero, then it became clear that the earth and life had to be very old. Modern geology came into being, and so did the notion of biological evolution.

If the rate of change were more rapid, geology and evolution would have reached their modern state in ancient times. It is only because the difference between the rate of change in a static universe and the rate of change in an evolutionary one is that between zero and very nearly zero that the creationists can continue propagating their folly.

Since the refinements in theory grow smaller and smaller, even quite ancient theories must have been sufficiently right to allow advances to be made; advances that were not wiped out by subsequent refinements.

The Greeks introduced the notion of latitude and longitude, for instance, and made reasonable maps of the Mediterranean basin even without taking sphericity into account, and we still use latitude and longitude today.

The Sumerians were probably the first to establish the principle that planetary movements in the sky exhibit regularity and can be predicted, and they proceeded to work out ways of doing so even though they assumed the earth to be the center of the universe. Their measurements have been enormously refined but the principle remains.

Naturally, the theories we now have might be considered wrong in the simplistic sense of my English Lit correspondent, but in a much truer and subtler sense, they need only be considered incomplete.



from lizard's ghost http://ift.tt/1dDfeX3