Wednesday, March 29, 2017

drupal mulls dropping mysql support

MySQL uses "master" and "slave" servers. Clearly this database is not aligned with Drupal's values and should be banned.



from lizard's ghost http://ift.tt/2o2TLzC

groceries dun haz hi margins eh

http://ift.tt/2orrz68

http://ift.tt/2naj4vM



from lizard's ghost http://ift.tt/2o4oOeD

Monday, March 27, 2017

on data

https://stenci.la/

http://ift.tt/2nUtnYw



from lizard's ghost http://ift.tt/2nVj8TW

Similar story, but with three consecutive large-capacity HGST HDDs that were sold as new (SMART told a different story), switching sellers each time. Ultimately I just went to Newegg.

http://ift.tt/2mBMCWR

http://ift.tt/2o6qnFEcrdpdhist1?ie=UTF8&filterByStar=onestar&reviewerType=avponlyreviews

http://ift.tt/2npEuYzandscience/medicalexaminer/2014/05/amazonillegaldrugsmusclerelaxantssteroidsprescriptiondrugs_delivered.html

http://ift.tt/2mXNaBZ



from lizard's ghost http://ift.tt/2npCCio

Tuesday, March 14, 2017

Monday, March 13, 2017

a layer of indirection

I forgot the disclaimer that you should not do this, ever :)

We had a cluster for Hadoop experiments at uni and no resources to replace all the faulty disks at the time (20-30% of more than 150 disks were faulty to some degree, going by the SMART values), so this was kind of an experiment. All the data in use was available and backed up outside of that cluster.

The problem was that with ext4, certain disks would always switch to read-only after running a job, and this was a major hassle because that node had to be touched by hand. HDFS is 3x replicated and checksummed, and the disks usually kept working fine for quite a while after the first bad sector. So we switched to ZFS, ran weekly scrubs, only replaced disks that didn't survive the scrub in reasonable time or with reasonable failure rates, and bumped up the HDFS checksum reads so that everything is control-read once a week. The working directory for the layer above (MapReduce and the like) got a dataset with copies=2, so intermediate data stays intact within reasonable limits.

This was for learning and research purposes, where top speed or 100% integrity didn't matter and uptime and usability were more important. Basically, the metadata on disk had to be sound, and the data on any single disk didn't matter that much. It was quite a ride, and the setup has long since been replaced.
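A minimal sketch of the ZFS side of that setup, assuming a pool named "tank" and a scratch dataset "tank/mapred-scratch" (both names are hypothetical, not from the original comment); in practice the weekly scrub would be driven by cron or a systemd timer rather than a one-off script:

```python
#!/usr/bin/env python3
"""Sketch of the ZFS settings described above.

Assumptions: the pool is called 'tank' and the MapReduce working
directory lives in 'tank/mapred-scratch'. Needs root and ZFS installed.
"""
import subprocess

POOL = "tank"                       # hypothetical pool name
SCRATCH = f"{POOL}/mapred-scratch"  # hypothetical dataset for intermediate data

def run(*args):
    """Run a command and fail loudly if it errors."""
    subprocess.run(list(args), check=True)

# Keep two copies of every block in the MapReduce working directory,
# so a single bad sector is less likely to take intermediate data with it.
run("zfs", "set", "copies=2", SCRATCH)

# Kick off a scrub of the whole pool; scheduling this weekly gives the
# "only replace disks that don't survive the scrub" workflow.
run("zpool", "scrub", POOL)
```

Note that copies=2 only guards against bad sectors, not a whole-disk failure, which matches the trade-off above (metadata has to be sound, single-disk data doesn't matter). The HDFS side ("bumped up the checksum reads") would likely be a datanode block-scanner setting in hdfs-site.xml rather than anything ZFS-level.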

Just thought it's interesting how far you can push that. In the end it worked, but it turned out there is no magic: disks die sooner or later and sometimes take the whole node with them.

Don't go to eBay and buy broken disks believing that ZFS will make them work. Some survive a while, most die fast, and some exhibit strange behavior.

That RAIDZ is more or less for "let's see where this goes" purposes; backups are in place and it's not a production system.



from lizard's ghost http://ift.tt/2nlAtCj