EDiMA, the representative body for online technology companies, which counts Microsoft, Facebook, Google and Amazon among its members, has published proposals for a new Online Responsibility Framework. This has been prepared in anticipation of new Europe-wide legislation aimed at making such companies more responsible for the illegal and harmful content that they host.

One aspect of the proposal that might raise eyebrows is the suggestion that online platforms do not already do more to seek out and remove such content because of fears that doing so would impose additional liability on them. But this is not as counter-intuitive as it might sound.

By way of example, we can look at the approach to online defamation in the UK. Prior to the coming into force of the Defamation Act 2013, website operators in the UK faced a difficult challenge. If they hosted user-generated content, there was a real risk that they would be deemed to be a publisher of that content. If they did nothing proactively to monitor the content, then they would be deemed to be a publisher if (a) they were notified about it; and (b) they then failed to do anything to remove it.

This ran the risk that a notification might be overlooked, or that operators might end up deleting content simply because it "looked" defamatory and they did not want to risk leaving it online while the position was verified. Nevertheless, it was preferable to the alternative: if a platform put in place mechanisms for proactive moderation of content (rather than waiting for it to be flagged by users), it greatly increased the likelihood of being designated a publisher of that content - effectively having accepted editorial responsibility for it from the time of its publication.

That changed with the coming into force of the Defamation Act 2013. That legislation, along with accompanying regulations specifically aimed at website operators, put in place a regime which gave a complete defence to a platform hosting content that was alleged to be defamatory. All the platform needed to do was to provide a mechanism for individuals who objected to content to complain about it, and for that complaint to be drawn to the attention of the author. The author could then decide whether to retract the publication or stand by it; in the latter scenario, the operator could pass on the author's details to the complainant and bow out of the picture. If the author did not engage, the website operator had to decide whether to take down the post (giving it the benefit of the defence under the Act) or leave it up and assume responsibility as a publisher.

Plainly that mechanism would not be appropriate for the instances of illegal and harmful content with which the next wave of regulatory activity is going to be concerned. But what it does illustrate is that these platforms' concerns about assuming liability by "doing too much" are not wholly unfounded; and that with a degree of common sense and pragmatism, mechanisms can be devised that incentivise the platforms to do more, while protecting them from liability which ought properly to sit with the authors and creators of the offending content.