
The Scramble Suit: Creating harm prevention guidelines for newsroom data

Session facilitator(s): Thomas Wilburn, Kelly Chen

Day & Time: Friday, 11:45am-1pm

Room: Ski-U-Mah

THOMAS: Okay. We’re gonna kick things off in a minute and a half, because we have a lot of stuff to get to. We know people will continue to drift in, and that’s cool. But we’re just gonna start with an introduction. So go ahead. Find a seat. Welcome to our session.

Okay. We’re gonna go ahead and get started. Thanks again for coming to The Scramble Suit. This is about guidelines for newsrooms that source, use, or publish personally identifiable information. My name is Thomas Wilburn. I’m at NPR.

KELLY: My name is Kelly Chen. I’m currently at News Therapy.

AUDIENCE: I think I’m in the wrong place. My agenda says that…

THOMAS: Yeah, sorry, they moved the room. It’s okay. It’s fine. If anyone needs to leave for any reason, totally cool. We’re not hurt. Ideally, I hope that a bunch of people are now traveling from Swane to here.

AUDIENCE: Sorry. It sounds like you recognized that we put the wrong…

THOMAS: It’s okay. No problem. Okay.

AUDIENCE: What did they think they were in?

THOMAS: I have no idea. I’m sure it was great. Cool. All right. We will keep going.

KELLY: So before we get started, we just wanted to kind of make a disclaimer, recognizing that harm often results from a personal sense of privacy and safety being taken away from someone. And I feel like many of us in this room may have personal experience with that. Or have been in close proximity to that, just given the nature of our jobs. So if at any point you feel like you’re uncomfortable or triggered or just kind of need to take a break, feel free to step outside. Do what you need to do, to take care of yourself. Also a reminder that this session is being transcribed. So if you want anything off the record, just be sure to say that, and we’ll scrub it.

THOMAS: Cool. All right. So… We wanted to talk – specifically, this session is about the idea of having privacy policy as a way of preventing harm. And so we want to start out by talking a little bit about the kinds of harms that can result from lacking a privacy policy.

KELLY: Yeah. So what can go wrong? A lot can go wrong. We’re all familiar with the instances of 8chan, GamerGate, a lot of white extremism online, doxxing, swatting… Any time your personal information is released and can be traced back to a single individual, be it your home number, a home address, even your ID, all of that can result in a lot of harm and harassment. And oftentimes it can even be deadnaming, publishing the former name of someone who has changed it, which can be pretty sensitive to a lot of folks in the trans community, and even the right to be forgotten.

And so there are new regulations and compliance policies that have arisen from a new sense of… We need to protect against this kind of harm. So with GDPR and even California’s privacy law, if you’re not paying attention to this, you can have legal problems. In the photojournalism world, consent is big. Even if someone has given you consent to use their photo and images, sometimes there can be a strange power dynamic, where they might not feel comfortable saying no, or might not even realize how much power you have over how their image is used.

So all this kind of leads to kind of a breakdown in trust between you as a newsroom and the communities that you’re serving and your direct audiences.

THOMAS: Yeah. And I want to say: I think it’s really important to note that this is not necessarily something we think everybody sets out to do. Right? We don’t think anybody in this room is setting out to perpetuate GamerGate or to institute harassment or to cause abuses. But there are lots of ways in which we as newsrooms have been conditioned to the idea that it is the right thing to do, to release information which may contain personally identifiable information. So, for example, a really common scenario here is investigative reporting. And you may FOIA information. That information comes from the government. It may contain personally identifiable information. And we’ve kind of thought over the years that the way that we are supposed to report is to show our work. And the way we show our work is by taking these database dumps and just kind of throwing them out to the public. And I would argue strongly that, A, that is not good storytelling. It’s not great journalism. There is no great story that you get from 40,000 random names and addresses. Like, you are better off dealing with this information in aggregate. But even that aside, there’s the question of, like, are you prioritizing that transparency over your relationship and the safety of your readers? And the relationship and safety of your community?

There are ways you can do this inadvertently. When we do mapping and geocoding and we’re releasing that information, have we done enough to anonymize that data? Before it goes out, have we cleaned it so that nobody can look at the code and figure out what’s in there? If you have run stories on things in the past, and somebody contacts you, and says: Listen, I did something 20 years ago, and you reported on it. I have done my time. I have paid my dues. But it is still there when somebody Google searches my name… What do you do about that? Do you have a policy for that? That’s kind of the question that we’re trying to figure out. And then lastly, of course, is: Newsrooms these days are not only kind of like getting this information from external sources, but we are creating it. Right?

We perform surveys. We have CRM and internal tracking that we do for our users. And we do that for good reason. But part of the question would be: Do we need to? How granular does it need to be? And who needs to have access to that information? I think those are all questions that once upon a time we thought were more innocent… Like, all of this data is good. And what could go wrong? And we know now a lot can go wrong.
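
A minimal sketch of the kind of pre-publication scrubbing Thomas describes above for mapped and geocoded data – dropping direct identifiers and coarsening coordinates before anything ships. The field names and rounding precision here are hypothetical, not any newsroom’s actual pipeline:

```python
# Illustrative only: field names ("name", "address", "lat", "lng",
# "sale_price") and the rounding precision are hypothetical.

def scrub_record(record, precision=2):
    """Return a copy safer to publish: no direct identifiers, coarse coordinates."""
    keep = {k: v for k, v in record.items()
            if k not in {"name", "address", "phone"}}  # drop direct identifiers
    # Two decimal places is roughly neighborhood-level precision; choose the
    # coarsest rounding that still supports the story.
    keep["lat"] = round(record["lat"], precision)
    keep["lng"] = round(record["lng"], precision)
    return keep

rows = [{"name": "Jane Doe", "address": "123 Main St",
         "lat": 44.97342, "lng": -93.26501, "sale_price": 1_200_000}]
print([scrub_record(r) for r in rows])
# [{'lat': 44.97, 'lng': -93.27, 'sale_price': 1200000}]
```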

KELLY: Yeah, and to echo…

THOMAS: No, it’s okay. So part of what we’re trying to do here – the goal of this session is to come out of this with a prototype policy that we can hopefully bring to newsrooms and have people adopt. And the goal of this policy is to address a simple fact, which is that in this room, we know that data is not unbiased. In this room, we know that the data economy is not benign. This is not a safe thing. But we are savvy technical people. Right? So we want to think of data as nuclear waste, to borrow a line from Maciej Cegłowski. This is a by-product of a process that we ultimately may need to have, but that we need to handle and manage safely. And typically we do a really bad job of doing that. So a policy that we can create today, or at least prototype today, and refine, is something that we can use to give newsrooms, including the non-technical people who are not in this room, a way of thinking about how we will manage and publish that data.

KELLY: And to echo what Thomas said earlier about… No one in our newsrooms is setting out to purposely induce this kind of harm. That’s really not our goal as journalists. As journalists, we are kind of empowered to believe that we are out there serving the public good, thinking that transparency really is helping everybody. When sometimes that’s not the case. Just because you can release all this data doesn’t mean you should. And sometimes we neglect to recognize our own privilege, when we don’t have to consider the type of harm that other people may face, because we’re not in those vulnerable positions.

So this session, in addition to kind of crafting a policy of what these guidelines should be, is to kind of like… Put on a little bit of empathy to think about what it’s like to be on the receiving end. Even if they may not be your main audience. You have to recognize the kind of power that you’re putting out there, when you’re writing a story, releasing a database, and really thinking about where your power extends.

AUDIENCE: I have a question. Do you think this extends to the subjects of reporting, particularly when we’re all, like, dropping in someone’s tweet? Or Instagram or Facebook post? That decision about who to highlight?

THOMAS: Yeah. So… Yeah, that’s something that I’ve thought a lot about. The ways that we embed, and the way that, particularly in a lot of product designs, it’s very easy to embed direct personal information. I think that is a little bit beyond the scope of what we’re hoping to address here today. But that’s definitely something that’s informing what we’re thinking about. Another question?

AUDIENCE: I couldn’t hear the question.

THOMAS: Oh, sorry.

AUDIENCE: The question was about whether this thinking would also extend to when we choose to drop a normal human being into the full traffic of a BuzzFeed story and being like… This tweet was really funny! Or this person said a controversial thing! And linking to their Instagram or Facebook post. Even who we pick to feature can expose that person to the kind of harassment you’re describing.

KELLY: Yeah, and I think about that a lot. Because it seems like within a policy guideline, that needs to be inherently embedded in your editorial process: whenever a story is assigned, thinking about what happens after you publish something, and what the real impact is that we want. Usually you get the page views and then it’s out of sight, out of mind. You don’t have to think about this story again. But there’s a whole other human being and a whole other community on the other side that doesn’t often get to be seen.

THOMAS: Okay. So we’re gonna get started with groupwork in just a second. I did want to call out a couple of quick premises. Because I think it’s important to establish what we’re trying to do here, very explicitly. We understand that… And I personally have a more extreme view than many people do on this kind of stuff, because I feel very strongly about the idea of newsrooms publishing individual names for private citizens. Not everybody feels that way. And not every newsroom will even agree with the weak version of this thesis. There’s a strong claim and a weak claim here, and the strong one – or even the weak one – is not necessarily something every newsroom will accept. That’s cool. But I want to stress: We are, for this session, not interested in legislating that thesis. Like, whether you think your newsroom will or will not adopt this, what we are here to do is at least for now go along with the idea that this policy is a good idea.

We want to create something that newsrooms will adopt. So we want questions to be raised. We want to think critically about it. We don’t want to do something that is the strong, power-to-the-people version of this, but at the same time, we don’t want to… “well, actually” ourselves. I do want to call out that this is inspired by work that other people are doing and that has been really influential on us. In particular, for me, Kainaz Amaria and her work on photography and consent and the way that the bodies of marginalized groups are used for news purposes has been deeply influential. Ryan Murphy did a talk here at SRCCON in 2016 that I drew on for a lot of this research. The Markup, RIP, was one of the few news organizations that had a legit policy about when they would handle and delete personal information.

Wow. That was a mess, wasn’t it? So anyway… But the policy was good!

(laughter)

AUDIENCE: They’re still around. They’re not RIP. Per se.

THOMAS: Fair. Fair. Fair.

(laughter)

And I also wanted to mention – so the model for me, for this, is the Contributor Covenant, which any of you who have worked in open source might be aware of. And this is the idea that if you create a minimum viable code of conduct that is easy for people to attach to a repo, you can get a lot of people to adopt a minimum standard of behavior. So that’s kind of what we’re going for here today: the Contributor Covenant equivalent of a privacy document. Cool. Okay. So… To get you all primed, and start thinking about, like, what this means for you, and what this would mean for your newsroom, what we want to do is get you all together and do a little bit of a card sorting exercise. So it would be really great, actually – this is spaced out a little bit more than I expected – if we could get you all to converge into, I think, four or six groups. Let’s all do the math on that right now.

KELLY: You have to move.

THOMAS: Or multiples of two. So, like, eight groups. And a reasonable size of like four people per group. Let’s see how we can shake that out. Okay. So as you’re shifting yourself into position, here’s what we want to start thinking about. We’ve written up four questions here, and we want you to take these in two sets. So you’re not gonna try and answer all four at once. But we want you to take, like, five minutes in your group to just real quick think, and use Post-its – and if you need Post-its, raise your hand and I’ll come around with them. But we want you to think about real-world scenarios. How does this apply to you? So within your groups, write down: First, what privacy cases have you run into, where this kind of policy might have applied? And second, what policies do you already have? Does your newsroom already have something like this? Or is it completely new to you? So we’re gonna take five minutes for that, and then we’ll do a little reporting out.

Okay. We’ve got about a minute left. Okay. Let’s bring it in. So what we want to do now is we want to go around and share out, if people have things that were discussed at your table. I want to set a quick ground rule. We’re encouraged to do that as facilitators: we set ground rules. So often in these sessions, when people are working in small groups, different tables will land on the same good ideas. And when it comes time to report out, that same thing will come up over and over again. So if somebody says something before we get to you, and they beat you to it, that’s cool. But we’re trying to keep things really short. So let’s bring up only things that are maybe new from your table. And it is totally cool if by the time we get to somebody all the good ideas have been taken, because that just means you were really in tune with all of the great ideas in the room. So how about we start back here? Is there anything that y’all want to call out?

AUDIENCE: We didn’t get to talking about the first one within the group at all. We barely got into the second one. Which… Can you repeat that?

KELLY: The first one is what kind of privacy cases have you run into, in your own newsrooms. And the second one is: What are related policies that might already exist.

AUDIENCE: So we can add to the first one. Probably a very repetitive answer… Which would be GDPR and CCPA, or whatever the hell the acronym was. Sorry, I meant that for the second one. Those are business-level concerns that also trickle down to newsroom-oriented concerns. That’s the one thing we have that everybody agreed on, that everybody is aware of as a policy. And we didn’t get much further than that.

THOMAS: Okay. Cool. How about this table here?

AUDIENCE: So I work for Gannett, and we are in the process of a test of using natural language generation techniques, so turning structured data into narrative text. For home sales transactions data in the state of New Jersey. Where it’s collected at the state level. And this is the kind of thing that news organizations have done in the past. Right? In the newspaper you have your list of property sales. That was pretty standard in a lot of places, until newsprint got expensive. And so our newsrooms have said… Yeah, sure. This seems like something we should do. The problem is, as I mentioned to these guys, this is like SEO catnip, and in many cases, the first thing that pops up when you search for your name is how much you paid for your house. And this is obviously a challenge. We’ve gotten a few complaints, and I suspect we will see more. And we would like to be responsible about how we deal with this.

THOMAS: Great. Thank you. How about right here? I’m just gonna go… We’ll go the other way next time. So don’t worry. Yeah?

AUDIENCE: I had a few. My name is Ryan Murphy. Now at the LA Times, but used to be at the Texas Tribune. So I was one of the maintainers for a couple of the databases that had personal information in them there. The one that was, in my opinion – and I feel more comfortable saying this now – particularly egregious was the prisoner database, which had the flaw of… It takes a month for us to get updated data, so there’s always a month where people who have been let out of prison are still in our database. And that was a common complaint that got raised. Was… Hey, I’ve done my time. Why am I still in here? And we would spot-remove them when asked, but it was an automated system that was put in place before I got there that had no logic for that. And the flip side of that too – we also had the salary database, which had its own kind of issues. In particular, there was often the complaint, which is a similar thing: if you had a unique enough name, the first result was gonna be us. And that was… A very gray area that needed a policy that never really got put in place.

AUDIENCE: I’m Moiz. Currently at ProPublica, formerly The Intercept. These are all Intercept projects, because I’ve only been at ProPublica for a few weeks. So we had the database of terrorism prosecutions since 9/11. This is a database of all the people, close to 800 now, who were prosecuted under terrorism laws. The vast majority of them were arrested in these falsified cases, stings, where they were… Wrong place, wrong time, doing silly things, and now they’re labeled as terrorists. And all the stories that went with this database, which we were keeping updated and alive, were basically highlighting the fact that most of these people have actually served their very small prison sentences and are free people now. But we still have them in a database with their photographs. Right?

And for most of the people, when you looked them up, this is the first thing that would come up on Google. So it was a very fraught thing. Learning from this project, later on, we did a database of cases of sexual assault in ICE custody. And this was close to… I think it was like 3,000 cases in this database, because it spanned 10 years. And we decided not to publish that database, even though we built this thing internally for our reporters to research out of, and we had to do a lot of data cleaning, and it was a lot of work. We decided not to publish it. And I think it was partly because the understanding around sexual assault is… We have much more of an awareness, to talk about it and think about those things, than if you were prosecuted under terrorism laws. I think those people are gonna lose their humanity more.

KELLY: Having the language, the words to define it.

THOMAS: Thank you. Anything here that this table wanted to report out?

AUDIENCE: So I’m from New Jersey, and New Jersey has a lot of million-dollar houses, and we did a map. So the data is publicly accessible, but we did a map that geocoded all the million-dollar houses, and now that we’re talking about this policy, I started to think about whether it’s ethical to publish the map. Something bad may happen because of it. But I’m just thinking… Is there a way to… A policy to make sure whether it’s a good thing or a bad thing? Because there’s been talk that rich people have been fleeing New Jersey, and our data shows that they’re not. There are more million-dollar – millionaires in the state.

THOMAS: Yeah. I think one of the really common objections, when I was talking about this originally with people, was the idea of that kind of data. That it may be public already. It’s this question of like… As a news organization, what is your responsibility, and how do you handle this idea of like… This is public. Do you have to boost it? Do you have to… Or do you have to organize it in a way that it’s not already organized? This table?

AUDIENCE: So something that can go in the second pillar, which was brought up by Stacy on our team, was how in her workplace, a key part of designing various products is having conversations about security present in the design of the system, rather than treated as an afterthought of the system. And that really resonated with me. I formerly worked at the LA Times, and when we were going about designing a tips page, that was something that was front and center, and thinking about… Let’s not have this inside our CMS, which has all kinds of ad trackers on it, which some news organizations do. That would be an example of a consequence of having the ability for readers to reach out with various tips and concerns. So I think… Yeah. Putting security front and center and not seeing it as something that has to be addressed after business decisions or whatever.

THOMAS: Thank you. And last table here?

AUDIENCE: I just had kind of an aside. It seems like the unspoken thing in the room is… If we get it off Google, is that enough? If you just put a noindex, nofollow on it? Because it seems like most of the… I work at a community newspaper, and so we’ll get people asking about this or that, and we kind of handle it on a case-by-case basis. They want it taken down. And what we do is we don’t take it down. But we usually just put in a noindex, nofollow in the CMS, and wait for Google to recrawl it.
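
A minimal sketch of what the noindex, nofollow approach amounts to in practice, assuming a Flask-style app; the route, slug set, and inline HTML are hypothetical stand-ins for a real CMS. The same signal can also be sent as a <meta name="robots"> tag in the page head:

```python
from flask import Flask, make_response

app = Flask(__name__)

# Slugs an editor has flagged for de-indexing, e.g. after a takedown request.
SUPPRESSED_SLUGS = {"example-arrest-story-1999"}

@app.route("/story/<slug>")
def story(slug):
    body = f"<html><body><h1>{slug}</h1></body></html>"  # stand-in for the real page
    resp = make_response(body)
    if slug in SUPPRESSED_SLUGS:
        # Crawlers honor this header like the <meta name="robots"> tag: the page
        # stays up for direct visitors but drops out of search on the next crawl.
        resp.headers["X-Robots-Tag"] = "noindex, nofollow"
    return resp
```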

THOMAS: So I think that’s a really interesting question to think about. Like I said, the goal of this exercise is to get you primed and thinking about this, and what the challenges and approaches are. So as we start getting into the actual policy drafting, let’s try to keep stuff like that in mind. I think that’s a really good point. Okay. Cool. Thank you to everyone. Let’s take another two. So now we want to start getting into – at the same time, at an administrative level – so first, how does or how should your newsroom communicate these policies? Who is supposed to be responsible for this? Where does this rest? And the second question is – and I think it is something that comes up or will come up a lot – what are the exceptions? Like, there are always exceptions to a policy like this. There are always cases where it’s like… Okay. Sure, there’s privacy. But notoriety. Are there cases where somebody… If somebody is famous enough, and a lot of newsrooms already have this. Like, you may have a policy about publishing the addresses of public figures.

KELLY: Even with the no indexing, how do you determine that on a case by case basis? All these little…

THOMAS: Right. So we want to think about not only a policy, where we want it to cover, but also where it breaks down. And that’s what we want to talk about next. So let’s take another five minutes, and we’ll do another quick report out from that.

Okay. Take another minute or so. Okay. Bring it in. Let’s go the opposite direction this time. So that we give everybody a chance to sprint ahead. So here at this table… For either of these questions.

KELLY: You want to take the third one?

AUDIENCE: Yeah, we mostly talked about the third one. There are two things that I really took away from this, from our group. We have people that are at our table that are both in the newsroom and on the product side. There are projects that… If it originates in the newsroom and gets passed to the product team, the product team is working on something… If the standards are not company-wide, that can create a big problem. If the newsroom has its own policy but that’s not communicated outside of the newsroom, then there’s not really a set policy.

And then another thing is like: Communicating to your readers. So we did a voter’s guide for the most recent election where you type in your address. And it would tell you everyone that was running in your district. And we made a point of saying: We are not collecting your address in any way or your information. Because the whole point of the tool was: You need to be able to feel comfortable to put in your personal information for this to be helpful, and how can we make sure we make people feel like they can give us that information without us abusing it?

KELLY: A lot of UX writing in there too. How do you break down these really wonky, complicated policies?

THOMAS: And how do you get around the fact that nobody ever reads the privacy policy?

AUDIENCE: Stop burying it in links. One of the things that came up in terms of who owns this is: Whoever is… Well, three things. One is: At what point in hiring is that communicated? We generally suck at onboarding as an industry, but at least when we’re bringing people on, that’s an opportunity to say this is one thing that’s really important in the newsroom. And the second is: Whoever owns that communication has to be responsible for holding people accountable for breaches. There’s no policy if there’s no enforcement and you can flagrantly violate it. And the last one, which goes back to the overall system, is: you have to be able to find it, you have to be able to enforce it, but people also have to have mechanisms to report that it’s not being followed, in a way that’s non-retaliatory and non-punitive. If you say… We have advertising on our tips page, does that make you a pariah to your business team?

THOMAS: Thank you. How about over here?

AUDIENCE: I think everything has already been said already.

THOMAS: Okay. This team was so brilliant that they got both of those tables in one. What about over here? Anything novel? Okay.

AUDIENCE: Now I’m trying to remember what we…

AUDIENCE: We’re kind of covering what you just mentioned, but we were just talking a lot about onboarding and training. I’m a software engineer now, but I worked as a journalist very early in my career, and I was very rarely onboarded properly about our consent forms. And so mistakes happened oftentimes. We would be using outdated consent forms or we wouldn’t be archiving them properly. So that kind of training goes a long way.

AUDIENCE: I work at the Times, and I know last year we hired an information security trainer. That person’s time has been so valuable that in less than a year, we hired another one. About a month ago. And I think their time is probably gonna be snapped up pretty quickly. Just making that a priority and having a dedicated person who works both business and newsroom to communicate security best practices and just kind of be the person to go to when you need a human answer to these questions.

THOMAS: Thank you. Over here at this table? Anything? Yeah? Okay.

AUDIENCE: So we raised the question – or I did – of whether news organizations, when they do investigations that are based on algorithms, should publish the code. And make that code available to other news organizations, so that I could do an investigation, he could do an investigation, and that would function sort of as peer review, if you will. I’m from academia, so excuse me. But then other news organizations could use the same code to do a similar kind of investigation, and then hopefully that would improve trust, because the same kinds of methods are being used and shared and are more reliable. So… Publishing the code.

THOMAS: I think that’s a really great point. I would love to shout out David Eads and ProPublica Illinois, who published all the code for their ticketing database and did so in a way that anonymized the names and license plate numbers with a one-way hash. So you can find out if a license plate shows up repeatedly, but you can’t actually figure out what that license plate is or who it belongs to. You can just see the patterns in the data. Really interesting work. All open source, so that other people can use it. Which I think is a really strong thing in this community.
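
A minimal sketch of the general one-way-hash technique described here, not ProPublica’s actual code: a keyed hash (HMAC) makes the same plate map to the same token, so repeat offenders remain visible in the published data, while the plate itself can’t be recovered as long as the key is never published.

```python
# Illustrative only. The environment variable name and the 16-character
# truncation are arbitrary choices for this sketch.
import hashlib
import hmac
import os

# The key must stay out of the published repo and data; without it, the hashes
# could be brute-forced against the small space of possible plates.
SECRET_KEY = os.environ.get("PLATE_HASH_KEY", "replace-with-a-real-secret").encode()

def pseudonymize_plate(plate: str) -> str:
    """Map a license plate to a stable, non-reversible token."""
    normalized = plate.replace(" ", "").upper()
    return hmac.new(SECRET_KEY, normalized.encode(), hashlib.sha256).hexdigest()[:16]

# The same plate always yields the same token, so patterns survive publication:
assert pseudonymize_plate("abc 1234") == pseudonymize_plate("ABC1234")
```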

AUDIENCE: This was not my idea. This was Tiff’s idea. But when we worked together at The Times, one of the things that was apparent about our policy is that our legal staff asked us to not write a policy about what things we would keep and not keep. And instead deal with it on a case by case basis between reporters working on it, the editor working on it, and our legal staff. And one of the reasons why that’s true is because a policy can be subpoenaed. And if you don’t follow the policy 100% of the time, you are liable for the policy. And so a policy that is unwritten is a policy that you’re not responsible for. This is counter to a lot of the way we might think about how you would want to handle this, but it is an interesting thing to know, because it’s how your legal staffs are thinking about this. Because their primary thing is not about necessarily protecting personal information. Their thing is about protecting the organization and the organization’s ability to continue doing work effectively. Right?

So sometimes those things are at odds, even though it wouldn’t seem like that is necessarily the case.

THOMAS: And on that note, let’s write a policy.

AUDIENCE: I have a question.

KELLY: Hold on.

AUDIENCE: Just adding to that, how many people who are doing this kind of work have good relationships with people on their legal team? Stacy-Marie? Some folks? Because there’s a lot of… One of the things I was thinking about as we were doing this – there’s often a lot of conflict between what you want to do and what your legal team thinks you should do, and the only way to navigate that conflict is to be really close with the folks on the legal team and make sure you know what they’re getting at. If you don’t know them that well and there’s a gap there, you’re gonna have a really hard time pushing back. Because the legal team is always going to look at: What is the safest thing for the company? And to hell with everything else. That’s their job. And if you have a good legal team, they’re really good at that job, but you always want to push them in other directions. But that relationship building is really, really key.

THOMAS: Yeah. I think that’s a great point. Cool. So… What we’d like to do now is we actually want to start looking at putting together… What would… I think that’s the thing. We want to start kind of drafting the ideas of what this basic policy would look like, and we want to do it through kind of a collaborative process. Right? This is SRCCON. And so we want to give everybody a chance. And so we’ve got up here – we’ve got written four questions. We’re actually gonna reduce them to three, because we have six groups, and that’s a nice round number. So just so everybody’s prepared, everybody on your table – you should have sheets of 8.5x11 paper. You want to pick probably one person who’s gonna take notes. My rule of thumb is I’m always a very loud person at tables, so if you are similarly a loud person, maybe consider taking notes this time. Make sure that everybody else has a chance to talk.

But what we’re gonna do is we’re gonna do this in three 10-minute chunks. We should have time to do that. First 10 minutes, we’re gonna assign you one of these questions, and you’re gonna write out: What is the outline of a policy that would cover this stuff? Second 10 minutes. You are going to trade with another group that did not have your same question, and they are going to critically but very kindly offer revisions, notes, and critiques of what the policy is.

KELLY: Legal speedwriting.

THOMAS: And then you’re going to trade back. You’re going to take that feedback, you’re either going to decide it was a good idea or you’re going to critically but kindly throw it away, either one, and you’re going to make revisions to that policy, so that then at the end, we can take those up, and we’re gonna use those to start a Google doc. Other people – we will invite people to this on a private basis. And if you are interested, we will continue, then, to take that work, refine it, and hopefully turn it into something that can be adopted. Legal questions notwithstanding. So our first question – and I’m just gonna go one, two, three, around the room, just to make it easy. Our first question, which we’re gonna assign to this table and Ryan’s table there, is: What are your responses to data takedowns? So when people contact your newsroom – when this doesn’t come from within, but is actually externally sourced and people reach out to you – what is your response? What is your policy for approaching that? Second question, for this table and the table back there in that corner: What are your plans for investigative data dumps or other things that are newsroom-sourced? Or for things that are company-sourced, if you are product-focused? So things that you are creating. What is your policy for doing that in a safe way for your readers? And then the final question, which merges these two together: What are the exceptions that need to be made around cultural considerations, sensitivities, community norms, or public figures, that kind of thing? So what are the exceptions that you would need to make for cases in which your newsroom does need to be able to work with personally identifiable information? So that’ll be this group here and this group here. So go ahead. Take ten minutes. You’re gonna work on that, and then you will trade with the table next to you and you will get some back and forth going.

You guys are halfway done. Four more minutes. Four-minute warning. Your time is up!

THOMAS: It’s okay if your policy is still a little sketchy. That’s okay. In ten minutes, we don’t expect you to solve all the world’s problems. But what we want you to do now is we want you to take your policy, we want you to trade it with the table next to you, so we’re gonna ask these two to trade. We’re gonna get these two tables. Trade your policies back and forth. And then the back two tables will trade. And so on a separate sheet of paper, just to be clear, on a separate sheet of paper, so you’re not having to mark up all over theirs, write down additions, critiques, places where there may be a blind spot, or places where you feel like something should be elaborated or strengthened. Yep, trade back and forth. We’re running tight on time, so I’m gonna give you guys eight minutes to do this part.

You all should be probably about halfway done with your critique. Okay. Let’s start wrapping up. And if you’re close or if you’ve already handed it back, the next step is gonna be to take a look at the feedback that you got. Do any rewrites that you think are appropriate. And then we will try and wrap this up.

AUDIENCE: So trade back?

THOMAS: Yes, please. So I know you got your things back. You’re revising. Your heart rate is now recovering. You’re slowly coming down from your panic. I want to talk a little bit, since we’ve only got five minutes left, while you continue to work, about what our next steps are. As I mentioned, we’re gonna put this together into a Google doc. If people are interested in continuing to contribute, we have a sheet over there for you to sign up with your email. Physically. Not in an Etherpad. So we will take those emails. We’ll invite you to the doc. If you have a problem with Google Docs, let us know. I totally understand that, given the subject of the session. And we are going to look at very quickly taking this input, trying to put it together, taking into account stuff like legal advice, putting together a policy, and then hopefully publishing that in a way that other organizations will be able to look at. So I’m hoping to take this back to NPR, run it by our lawyers, obviously, and talk to our team about whether or not this is something we can adopt, and I would really love to have other organizations in the industry thinking about these issues and looking at this as a template for how we approach this idea of reporting. So go ahead and continue, and then once you finish your revisions – I don’t think we’re gonna have time to share them publicly, but if you want to leave them by the sign-up form, we’ll take care of putting them into the document and digitizing them.

AUDIENCE: What’s the policy with the email addresses you’re going to collect?

THOMAS: We’ll use them to share out and throw away the piece of paper. Apparently there’s a fire we can throw them into.

AUDIENCE: Is it a public trash can? Private trash can?

THOMAS: I think our OpSec is okay, but I’m open to suggestions.

KELLY: We’re gonna eat the paper!