andrewducker wrote, 2018-05-18 12:00 pm
Entry tags:
- advertising,
- age,
- art,
- china,
- christianity,
- drugs,
- edinburgh,
- education,
- europe,
- fungus,
- funny,
- history,
- judaism,
- links,
- nationalism,
- privacy,
- propaganda,
- religion,
- sound,
- wood
Interesting Links for 18-05-2018
- Here's what's actually going on with the Yanny/Laurel clip
- (tags: sound age )
- Accurate anti-drug slogans
- (tags: drugs funny )
- Some differences between Christianity and Judaism
- (tags: Judaism Christianity history religion )
- Exquisite Rot: Spalted Wood and the Lost Art of Intarsia
- (tags: art wood history )
- Deaths from fungal infections exceeding malaria, say researchers in new drug resistance warning
- (tags: fungus )
- Chinese mass-indoctrination camps evoke Cultural Revolution
- (tags: China propaganda Education nationalism )
- Edinburgh to ban all on-street advertising boards citywide
- (tags: Edinburgh advertising )
- GDPR Hysteria
- (tags: privacy europe )
no subject
Processing all these end-user requests will be a huge burden
Then automate it. If you could automate the collection of the data in the first place, then you can definitely automate the rest of its life cycle. There is no technical hurdle companies won't jump through if it gets them juicy bits of data, but as soon as the data needs to be removed we're suddenly back in the stone age, and some artisan with a chisel and hammer will have to jump into action to delete the records, and this will take decades for even a small website. Such arguments are not made in good faith, and in general they make the person making them look pretty silly. After all, nobody ever complained about collecting data; in fact, there are whole armies of programmers working hard to scrape data from public websites, which is a lot more work than properly dealing with the life cycle of that data after it has been collected. So yes, it is a burden; no, the burden isn't huge unless you expressly make it so, but that's your problem.
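The "automate the rest of the life cycle" point can be made concrete with a minimal sketch. This is a hypothetical illustration, not any real site's code: each record is tagged with a collected-at timestamp, and a periodic sweep drops anything past a retention period.

```python
# Hypothetical sketch: automating the removal side of the data life cycle.
# If collection is automated, erasure can be too: stamp each record when it
# is collected, then periodically sweep out anything past retention.
# RETENTION and the record shape here are illustrative assumptions.
import datetime

RETENTION = datetime.timedelta(days=365)

def sweep(records, now):
    """Return only the records still within their retention window."""
    return [r for r in records if now - r["collected_at"] < RETENTION]
```

Run on a schedule (a cron job, say), this is the kind of routine plumbing the commenter argues is no harder than the scraping pipelines companies already maintain.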
Hmm... Speaking as someone who develops these sorts of systems, that's bollocks. It's rather more complicated than that. You might, for instance, have really antique code that collects the data, and therefore no current developers who want to touch it. More generally, though, allowing one user to store one record is a whole load simpler than allowing one user to retrieve the record that is theirs (you have to validate that they are that record's owner!), and to make changes to it or delete it.
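The asymmetry the commenter describes can be sketched in a few lines. This is an illustrative toy, with invented names (`store`, `delete`, a `records` dict), not anyone's actual system: writing a record needs no check, but deleting it requires verifying the requester is the owner.

```python
# Hypothetical sketch: storage vs. owner-validated deletion.
# All names are illustrative, not from any real codebase.

records = {}  # record_id -> {"owner": ..., "data": ...}

def store(owner, record_id, data):
    # Storing needs no ownership check: the writer is the owner by definition.
    records[record_id] = {"owner": owner, "data": data}

def delete(requester, record_id):
    # Deletion must verify the requester actually owns the record,
    # or one user could erase another's data.
    record = records.get(record_id)
    if record is None:
        raise KeyError("no such record")
    if record["owner"] != requester:
        raise PermissionError("not the record's owner")
    del records[record_id]
```

Even in this toy, `delete` carries two failure paths that `store` doesn't; in a real system the ownership check alone drags in authentication, sessions, and audit logging.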
no subject
More generally, the data you're storing about your customers is probably fine. If you have a significant exposure to the GDPR, you were probably storing too much and you should wonder why you were doing that.
I think this is the salient bit of the article:
no subject
That doesn't mean it's their business model.
And every site has technical debt; sites for non-profits often have years of it. For instance, I have spent the last year working on a complete rebuild of a not-for-profit's site that was built about 7-8 years ago. If they hadn't come along and stumped up for the rebuild, they would be sitting on a massive heap of tech debt.
no subject
As for technical debt, it's like any debt: some is fine, if it means you get what you need faster and can then pay it back at your leisure. It's when it becomes unmanageable that you have a problem. If the GDPR means lots of companies need to look at their legacy systems and put in some work to make them manageable again, then that's a good thing, much as the Millennium Bug was.
no subject
The point I am taking issue with is just the claim that it's no big deal to comply with this. It isn't necessarily so.
> As for technical debt, it's like any debt: some is fine, if it means you get what you need faster, and you can then pay it back at your leisure. It's if it becomes unmanageable that you have a problem.
Yeah, well, most websites are built on bits of glue and string and fly on sheer blind hope and luck. I should know; I build them. GDPR doesn't expose this. It gets exposed any time there is any kind of major security issue, and we see that most sites just aren't keeping up to date (e.g. the Panama Papers, where the site that leaked them hadn't applied a security patch that had been released something like a year previously). But then that's not just websites: all aspects of software are glue and string and luck. Look at all the things that were in trouble with the SSH bug that came to light about a year ago.