Encrypted-DNS-Deployment-Initiative / Use-Cases


DNS-based filtering in the UK and DoH - SafeCast's proposals for universal pre-labelling of video content #1

Closed AliKelman closed 5 years ago

AliKelman commented 5 years ago

We propose a universal pre-labelling system to be used in association with DoH (or DNSCrypt or other equivalent technologies) to enable effective child protection to be seamlessly embedded within the internet.

We believe that this can be done without the need for new legislation and in full compliance with the existing internet infrastructure.

Because this topic sits in the content/application space, while the expertise on the EDDI list sits lower in the stack and is focussed on how domain names and IP addresses are provided to the network and applications, you may well say TL;DR because you do not see a clear linkage to the DNS issues being discussed here. But please read through the following three longish postings to get an understanding of the background and history of the problem. Within the next fortnight I will post a link to pages and materials on the SafeCast website where I will be organising material that ties everything together.

Alistair

AliKelman commented 5 years ago

Background to Universal Pre-Labelling of Video Content

Every labelling system promulgated over the past three decades to protect children and vulnerable people from seeing inappropriate content on television and the internet has been an added cost on the site operator or broadcaster. With tight margins and legitimate concerns about censorship, website operators and broadcasters have been able to avoid including child-protection metadata labels in their content. Until now.

Over the past three years the volume of video coming from a plethora of devices (smartphones, vlogs, etc.) has overwhelmed society's capability to protect children from unlabelled content. Every minute of every day, four hundred hours of new video content is uploaded to YouTube alone. Sharon White, chief executive of the UK telecoms regulator Ofcom, said that "The sheer volume of text, audio and video that is generated or shared online far outstrips the output of traditional media. That means, for example, that it could be impractical to review platforms' decisions about content case-by-case."

The UK Government's response to this situation has been to require "parity of protection" for children, whether they are online or offline, whilst answering the challenges of censorship and capacity. Sharon White refers to this concept as "the standards lottery" and has some appreciation of the difficulties in transplanting traditional broadcast regulation, unamended, into the online world. In March 2019 the Communications Committee in the House of Lords published "Regulating in a Digital World", which calls for a new, overarching regulatory framework so that services in the digital world are held accountable to an enforceable set of shared principles. These proposals have more recently been outlined in the Online Harms White Paper, which is to form the basis of new legislation post-Brexit.

SafeCast and its HeadCodes (see below) provide a free-to-use pre-labelling system which supports accountability. The National Crime Agency and CEOP, in their evidence to the Home Affairs Select Committee in Parliament in March 2018, specified pre-labelling of content during uploading as a 'key ask' from law enforcement.

Together these developments could lead to a revised approach to universal child protection on TV and the internet, using pre-labelling to embed accountability and enforcement of shared principles. In 2017 the Government passed the Digital Economy Act, Section 104 of which sets out a new power to filter content for child-protection purposes. Filtering without censorship can only be performed by means of pre-labelling content, since post-labelling of content is censorship by a different name.

AliKelman commented 5 years ago

The SafeCast HeadCodes and their use - a brief history of content labelling

We explain below the history of content labelling and then how SafeCast HeadCodes could work as part of the EIDR standardisation process.


Content labelling issues, for content which is accessible on the web, are not new. Back in the early 1990s the World Wide Web Consortium (W3C), an international community led by Sir Tim Berners-Lee, set about drawing up web standards. The W3C quickly identified the need for an international standard specification to enable labels (metadata) to be associated with Internet content, and developed the PICS specification (Platform for Internet Content Selection). PICS was designed to help parents and teachers control what children access on the Internet, and became the construct on which rating services and filtering software, such as NetNanny and K9 Web Protection, were built.


However, filtering services and filtering software based on the PICS platform are not free; today, for example, NetNanny charges $39.99 per personal computer. For this reason there has been minimal take-up of PICS-based services as a means of protecting children from seeing inappropriate content on the web. They are also too closely associated with censorship for universal acceptance.


PICS was eventually superseded by the Protocol for Web Description Resources (POWDER), which inserted greater flexibility into the PICS classification system. Unfortunately, the POWDER specification settled on a design which separated content labels from the content itself - a design which is inappropriate for cloud-based operations. It then, as an additional complication, started to add quality labels to content. Rather than a simple binary statement (Is there any violence in this clip? Y/N), the POWDER system attempted to answer questions such as "Is this a good piece of literature? Is this an important website?" This led to the POWDER classification system becoming one of the tools of "astroturfers" as battles raged between search engine optimisers trying to push their clients' sites up search engine listings and Google's analytical algorithms trying to show only the best sites to its users.


In 2009, POWDER was approved by W3C as "the recommended standard method for describing Web sites and building applications that act on such descriptions". Unfortunately, by the end of this process it was useless as a practical tool for protecting children from seeing pornographic or inappropriate video content. Today POWDER is not supported on any modern platform.



AliKelman commented 5 years ago

Details of the SafeCast HeadCode system


In 2013, Safecast determined that all video could be classified into seven levels to cover the entire spectrum of content ranging from content that did not contain any sex, violence, or horror through to content that was too graphic or horrific to be broadcast on television or circulated on the Internet. It found that this simple classification process, dividing video into appropriate classes, mirrored the process that is regularly undertaken in the viewing rooms of the major UK television broadcasters. Professional television schedulers within the major broadcasting networks are used to reviewing a programme before it is broadcast and taking a view on the earliest time that the programme can be shown based upon the television watersheds. 


Historically, there were multiple watersheds on broadcast television: early evening, seven-thirty, nine o'clock, ten o'clock and eleven o'clock. Each watershed allowed programmes with more adult themes and higher levels of salacious content to be broadcast at times when younger children were likely to be asleep. This television "watershed" system was memorably described as a graduated system across an evening's viewing, not a 'waterfall' moment where, at a single time, content suddenly became unrestricted.

 

Research from Ofcom has shown that the television watershed system has in excess of 74% support from parents, teachers, politicians, academics, and children. The television watershed system, which until now has applied solely to live television broadcasting, can now be implemented for "anytime, anywhere, any device" viewing so that children of all ages and maturities are adequately protected whenever they choose to view content. This, in fact, is what our company, SafeCast, will deliver.

 

SafeCast has implemented the graduated TV Watershed classifications as hidden labels embedded as metadata in video, recorded in accordance with existing broadcasting industry standards. With this in place, a very simple filter can then be written for video browser applications. The filter reads the hidden label in a video and decides whether the video should be shown to a child of a particular age or not. This SafeCast filter can be part of any video browser application that can read embedded metadata. A timely rollout of the SafeCast video browser filter will mean that any video can be filtered in a straightforward manner, enabling universal compliance with Section 104 of the Digital Economy Act 2017 by both broadcasters and internet service providers.


Set out below is a table showing the SafeCast HeadCode levels and how the filtering will operate on devices such as a mobile phone or a tablet belonging to a child.

| SafeCast HeadCode Level | Child's Age | Equivalent TV Watershed Time | Comments |
|---|---|---|---|
| 0 | No restrictions | No restrictions | Can be shown at any time |
| 1 | 6 and under | No restrictions but logged on device | Very young children should not see too much of this content - hence logging required on phones and tablets |
| 2 | Age 7 and over | 7.30pm | Young children should not see too much of this content - hence logging required on phones and tablets. The restriction also applies to advertising of high-fat, high-sugar products and services |
| 3 | Age 11 and over | 9.00pm | Normal TV Watershed restrictions, including on advertising of medicines, alcohol, gambling etc. |
| 4 | Age 14 and over | 10.00pm | Enhanced TV Watershed restriction used by UK schedulers |
| 5 | Age 18 and over | 11.00pm | Highly enhanced TV Watershed restriction used by UK schedulers |
| 6 | Age 18 and over | Not allowed to be broadcast | Reserved for content which is too extreme for broadcasting but which may be required as evidence; video of a bomb explosion on a bus shown at a coroner's inquest would be a typical example of Level 6 content |
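
To make the filtering rule concrete, here is a minimal sketch in Python of how a video browser filter might apply the table above. The `MIN_AGE_FOR_LEVEL` mapping is taken directly from the table; the metadata-reading step is deliberately left out, since the label format follows existing broadcasting industry standards and the reader would be supplied by the host application.

```python
# Minimal sketch of a HeadCode playback filter (illustrative only).
# Minimum viewer age required for each SafeCast HeadCode level;
# None means the level is never shown (Level 6 is broadcast-banned).
MIN_AGE_FOR_LEVEL = {
    0: 0,      # no restrictions
    1: 0,      # no restrictions, but viewing is logged on the device
    2: 7,      # equivalent to the 7.30pm watershed
    3: 11,     # 9.00pm watershed
    4: 14,     # 10.00pm watershed
    5: 18,     # 11.00pm watershed
    6: None,   # not allowed to be broadcast
}

def may_view(headcode_level: int, viewer_age: int) -> bool:
    """Return True if a viewer of this age may see the labelled video."""
    min_age = MIN_AGE_FOR_LEVEL.get(headcode_level)
    if min_age is None:
        return False  # Level 6, or an unknown label: fail closed
    return viewer_age >= min_age

# A 9-year-old may view Level 2 (7.30pm) content but not Level 3 (9.00pm).
assert may_view(2, 9) and not may_view(3, 9)
```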


Additionally, in order to address the sleep requirements of children, SafeCast researched and developed a recommendation set out in a second table below. This shows how age restrictions and a bedtime cut-off can be implemented in a SafeCast-filtered mobile phone or tablet belonging to a child through a social media app. The viewing hours for the bedtime cut-off are based upon NHS guidance on the amount of sleep a child should have for good health and proper development. The table below was prepared on the basis that a child needs to wake up at 7.00am to get to school on time.


| Child's Age | SafeCast HeadCode Filter | Default Bedtime Cut-off on Device on a School Night |
|---|---|---|
| 6 and under | Levels 1, 2, 3, 4, 5 and 6 content rejected | 8.00pm |
| 7 to 10 | Levels 2, 3, 4, 5 and 6 content rejected | 9.00pm |
| 11 to 13 | Levels 3, 4, 5 and 6 content rejected | 10.00pm |
| 14 to 17 | Levels 4, 5 and 6 content rejected | 11.00pm |
| 18 | Levels 5 and 6 content rejected (unless expressly requested) | No cut-off |
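
The bedtime cut-off extends the same filter with a time-of-day check. Continuing the sketch above, with the cut-off times taken from the table (how the check is wired into a particular social media app is not specified here):

```python
from datetime import time

# Default school-night cut-offs from the table above, as inclusive
# age bands mapped to the latest permitted viewing time.
BEDTIME_CUTOFF = [
    (0, 6, time(20, 0)),    # 6 and under  -> 8.00pm
    (7, 10, time(21, 0)),   # 7 to 10      -> 9.00pm
    (11, 13, time(22, 0)),  # 11 to 13     -> 10.00pm
    (14, 17, time(23, 0)),  # 14 to 17     -> 11.00pm
]

def within_viewing_hours(viewer_age: int, now: time) -> bool:
    """Return True if the device should still allow viewing at this time."""
    for low, high, cutoff in BEDTIME_CUTOFF:
        if low <= viewer_age <= high:
            return now < cutoff
    return True  # 18 and over: no cut-off

# A 12-year-old is cut off at 10.00pm on a school night.
assert within_viewing_hours(12, time(21, 30))
assert not within_viewing_hours(12, time(22, 15))
```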

SafeCast and Facebook


In April 2018, SafeCast was asked on behalf of the Children's Commissioner for England to try to persuade YouTube and Facebook to take up SafeCast's labelling proposals, because the Children's Commissioner was most concerned about inappropriate content on these systems - as illustrated by a recent piece in the Telegraph on human rights and war crimes. SafeCast therefore drew up a proposal under which the selection of SafeCast HeadCodes by the creator would be included within the standard uploading processes on the Facebook and YouTube systems, and sent it to Facebook immediately. Facebook did not respond. In September 2018, SafeCast supplied the letter to the ICO as part of the call for evidence for the ICO's Age Appropriate Design Code. The ICO subsequently published a redacted copy of the letter on its site.


Support for SafeCast's Proposals


SafeCast has liaised with general practitioners, educators including the National Association of Head Teachers (NAHT), and UK children's charities including the NSPCC, the Children's Media Foundation, the Mothers' Union and Barnardo's. All of these experts and organisations recognise the need to filter unacceptable material (which is easily accessible on the Internet) away from children, and the damage being done to children by the absence of effective filtering measures.




mstojens commented 5 years ago

I do not see any reason to tie encrypted DNS deployment to the issue of content labeling and think this issue as it is defined should be closed.

The main responsibility of the DNS is to resolve domain names into IP addresses that can be used to connect to network resources, usually over the Internet but not always. This has little correlation with the actual content being served up, other than a high-level provider indication. For example, doing a DNS query for netflix.com has no bearing on whether the content I then request from Netflix (not over DNS but HTTPS or some other protocol) is child-friendly or not.

At most, the DNS can be used to control traffic on a per-domain basis, which for your use case would mean 1) always allowing domains known to be "good" content only, and 2) always denying domains known to be "bad" content only. Notice this has absolutely nothing to do with media formats as the DNS should not participate in the transfer of AV media from one location to another.

No matter how the DNS queries are handled, unencrypted or encrypted, this kind of proposal could work. However, this isn't the place to discuss it as it has no bearing on how we do encrypted DNS.
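
To make the per-domain point concrete, the whole of what a resolver-level filter can decide looks like the following minimal sketch (the domain name and list here are purely illustrative). The decision is identical whether the query arrives over plain UDP, DoT, or DoH: encryption changes the transport, not the policy surface.

```python
# Sketch of the only control DNS-level filtering offers: allowing or
# denying resolution of whole domain names. The list is illustrative;
# an allowlist-only mode would simply invert the default.
DENY = {"adult.example.com"}    # domains known to serve only "bad" content

def dns_policy(qname: str) -> str:
    """Per-domain decision; nothing here can inspect the media itself."""
    name = qname.rstrip(".").lower()
    if name in DENY:
        return "REFUSE"    # e.g. answer NXDOMAIN instead of an address
    return "RESOLVE"       # says nothing about the content then served

print(dns_policy("netflix.com"))  # RESOLVE -- the video fetched afterwards
                                  # travels over HTTPS, invisible to the DNS
```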

AliKelman commented 5 years ago

I think that you are right to close this as an issue - trying to fit these discussions into this forum is not straightforward. As I said in my posting back to Paul: "I was invited by the IWF to outline my thoughts on solving the content labelling crisis to the EDDI list. I believe I was asked to do so because the New York Times article, which I cited in my first posting, indicated that the inability to filter content on browsers could pose an existential threat to family-friendly internet access in the home. That remains the case. Thus once I have created my discussion forum on the SafeCast website I am going to post an open invitation on the EDDI list for all of you to review and debate these issues with me in the hope of building a consensus as a matter of urgency."

Alistair
