16:59:38 #startmeeting anti-censorship weekly checkin 2019-10-17
16:59:38 Meeting started Thu Oct 17 16:59:38 2019 UTC. The chair is phw. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:59:38 Useful Commands: #action #agreed #help #info #idea #link #topic.
16:59:47 hi everyone!
16:59:52 here's our meeting pad: https://pad.riseup.net/p/tor-censorship-2019-keep
17:00:01 it's quite the agenda today
17:00:30 let me start with our first discussion point
17:00:49 I will move two items to the bottom that can be skipped if there's not time today.
17:01:01 some context: we started supporting an ngo with private obfs4 bridges that it can distribute among its users
17:01:14 i tried to formalise the process a bit in this wiki page: https://trac.torproject.org/projects/tor/wiki/org/teams/AntiCensorshipTeam/NGOBridgeSupport
17:01:29 nice!
17:01:45 please add comments/suggestions/etc to the wiki page
17:02:00 so far, our private bridges exist in a csv file on my laptop. that's not great
17:02:21 ideally, they are in a shared, private location, and we keep track of who got what bridge
17:02:53 maybe i am misunderstanding this, but are these different from the unallocated bridges in bridgedb?
17:02:57 i could add the file to a private repository on dip.tpo unless anyone has a better idea
17:03:12 cohosh: yes, the bridges i'm talking about don't report themselves to bridgedb
17:03:19 ok cool
17:03:33 is the idea to use these two groups of bridges differently?
17:03:57 the unallocated bridgedb bridges are run by people we don't know and could go offline any time
17:04:20 the private bridges are run by womble, and are reliable and fast, and a much better fit for an ngo
17:04:32 ok thanks
17:04:47 i think gitlab sounds good
17:04:51 so far we've been using the unallocated bridges the way we're now using the private bridges
17:05:05 okay, i'll make the csv more usable and throw them in a repository
17:05:08 this also makes access to them more redundant
17:05:27 so that if something goes wrong with your laptop we can still give them out
17:05:39 yes
17:06:10 ok, shared access was the most important thing wrt this discussion item. as for the process: we can improve it as we go
17:06:39 sounds good
17:06:47 shall we talk about our ipv6 snowflake broker next?
17:07:34 * phw puts the mic on the ground and waits for somebody to pick it up
17:07:37 I've set up a new broker and documented the installation instructions.
17:07:54 https://trac.torproject.org/projects/tor/ticket/29258#comment:11
17:08:20 Fingers crossed, I think all that's needed to start using it is to update some DNS records.
17:08:35 thanks for doing that dcf1!
17:08:36 But perhaps we should do a smaller-scale test first.
17:09:08 I mentioned on the ticket a proposal for dealing with concurrent logs; i.e., let them happen concurrently and merge them after we decommission the older broker.
17:09:41 One option is we give the new broker a hostname different from the snowflake-broker ones already in use; that way we can test it ourselves.
17:09:59 we have 3 different broker domains already
17:10:02 Another option is to only set up AAAA records now, so that IPv4 traffic goes to the old broker and IPv6 traffic goes to the new.
17:10:05 bamsoftware, freehaven, and tp.net
17:10:25 Yeah and freehaven is a CNAME to bamsoftware, so really we only need to update bamsoftware and torproject.
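The AAAA-only rollout idea above is easy to verify from the client side. A minimal sketch, assuming nothing beyond the Python standard library (the hostname in the comment is the one under discussion; the function name is illustrative):

```python
import socket

def resolved_families(hostname, port=443):
    """Return the set of DNS record families ("A", "AAAA") that a
    hostname currently resolves to. Useful for checking that only
    AAAA records point at a new broker during a partial rollout."""
    names = {socket.AF_INET: "A", socket.AF_INET6: "AAAA"}
    found = set()
    for family, *_ in socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP):
        if family in names:
            found.add(names[family])
    return found

# With AAAA-only records on the new broker, IPv6 clients would reach the
# new machine while IPv4 traffic keeps going to the old one, e.g.:
# resolved_families("snowflake-broker.torproject.net")
```

Running this against the broker hostname before and after the DNS change would confirm which address families are published at each step.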
17:10:39 we could switch tp.net first and test with that
17:10:49 I forgot what names are used where.
17:10:54 since freehaven/bamsoftware is the deployed one
17:11:04 we haven't deployed tp.net in the client or proxies yet
17:11:24 Yeah I guess you're right.
17:11:24 due to concerns that some places (like the UK) are good places for proxies but may block tor project domains
17:11:54 And I guess that snowflake-broker.azureedge.net still points to the bamsoftware one, though I would have to check to be sure.
17:12:19 Okay, that's a good idea cohosh. We need to ask someone to update the torproject.net names to the IP addresses mentioned in the ticket.
17:12:30 i will make a ticket for that
17:13:14 Then we ourselves can test using the client with `-url https://snowflake-broker.torproject.net/` and proxy-go with `-broker https://snowflake-broker.torproject.net/`
17:13:22 or just cc anarcat on that ticket
17:13:58 I can handle making this ticket.
17:14:10 Thanks, unless there are any objections, I think this discussion point is covered.
17:14:24 thanks for this, dcf1
17:14:32 ok thanks
17:14:53 next up are some preliminary design considerations for a bridge test service: https://trac.torproject.org/projects/tor/ticket/29258#comment:11
17:15:07 to remind everyone: we currently have no service that tests a PT bridge
17:15:16 #31874 you mean
17:15:23 the one thing that gets closest is a simple port scan tool: https://bridges.torproject.org/scan/
17:15:30 oops, yes
17:16:05 there are two parties that would benefit from such a service: bridge operators could test their bridges and bridgedb could test what bridges are reachable
17:16:36 basically, you paste your bridge line in there (which works for vanilla, obfs2, obfs3, ...)
and the service tells you if it could bootstrap a tor connection over the given bridge
17:16:59 i'm curious what y'all think about the points in https://trac.torproject.org/projects/tor/ticket/31874#comment:2
17:17:52 One consideration is misuse: an obfs4 testing service is like a public fuzz-testing service that sends garbage to any IP:port on demand.
17:18:35 I suppose it could check that the bridge line matches a bridge already in BridgeDB? I.e., something that's intending to be a bridge?
17:18:43 dcf1: yes. for bridges.tp.o/scan/ we have a simple rate limiter.
17:19:13 do we need to make it publicly accessible? if bridgedb is automatically doing these tests, maybe that's more convenient than allowing operators to do their own anyway
17:20:03 we certainly don't need to but "does my bridge work?" is a very common question among new operators and we should have a useful response to it
17:20:34 i suppose "wait a few hours and check your relay status page" may also be a useful answer
17:20:44 is the intention to log and try to contact operators whose bridges bridgedb detects as being offline?
17:21:07 cohosh: yes, and also to not hand out bridges that don't work.
17:21:12 cool
17:22:22 thanks dcf1 and cohosh, these are useful questions and ideas. i'll update the ticket and give it some more thought
17:23:02 we can move on to the gettor workflow unless anyone has anything else to say
17:24:43 cohosh, hiro: ^
17:25:11 okay, so hiro has been doing some really awesome work on gettor, but we're not getting a lot of reviews done in time and so code is being merged without review
17:25:24 i'm wondering if we need to be better at communicating
17:25:34 we should require reviews before merging
17:25:58 yes we can do that
17:26:06 ok, so is the best way to do that moving forward to hand out reviews here?
17:26:14 email I think is ok
17:26:34 gaba: agreed.
when i don't review something in time, it's because i either did not realise that i'm supposed to review, or i forgot.
17:26:38 we can check reviews in the meeting but if there is a review in the middle of the week people can communicate directly
17:26:39 is there a way to set it up with gitlab as well for us to get notifications for pull requests?
17:26:55 yes
17:27:05 i find trac useful for that but gettor is all on dip it seems
17:27:07 there is a bug I am trying to solve for which I haven't managed to do a PR to the main repo atm
17:27:09 we can get a column for a needs review label
17:27:13 from my own repo
17:27:26 yeah i found the gettor board a bit confusing
17:28:16 yes, the gettor board in gitlab needs more work. I can try to change it so it is useful for all of us
17:28:27 we are in a weird space as we still haven't migrated to gitlab
17:28:28 that would be really useful, thanks!
17:28:34 so trac is the source of truth right now
17:28:39 yup
17:29:00 does all merged code in gettor correspond to trac tickets?
17:29:22 well in the beginning it started with a massive refactoring from ilv
17:29:32 that I had to merge back into our repository
17:29:41 and in that there was some dangling code on the server itself
17:29:55 that I didn't know about until I tried to test
17:30:24 it should correspond to trac tickets
17:30:31 until we migrate we need to have trac updated
17:30:32 okay
17:30:43 trac tickets addressed features and some issues
17:31:15 hmm, if we update both trac and dip, i'm afraid we'll soon run into syncing issues
17:32:13 gaba: no?
17:32:16 I have been closing dip tickets corresponding to trac tickets
17:32:54 yes
17:33:09 * gaba will work on syncing both
17:33:18 hopefully we will have the migration done in december
17:33:40 * antonela hopes
17:33:51 ok, so 1) there needs to be a trac ticket for each gettor merge, 2) gitlab and trac will remain in sync, and 3) review requests go out over email?
17:34:08 ok
17:34:17 sounds good
17:34:18 thank you gaba for all this work
17:35:09 anything else wrt gettor workflow?
17:35:44 I think I am good
17:35:49 also, please ping me if i'm ever late on a review. reviews are high up on my list of priorities but sometimes i do forget
17:35:49 same, thanks!
17:35:59 sure!
17:36:13 ok, let's talk about s28/s30 next
17:36:36 regarding s28, aka RACE: we sent our part of the quarterly report to micah
17:36:39 and by "we" i mean gaba :)
17:37:04 and we have a prototype of our new obfs4 flow obfuscator: https://trac.torproject.org/projects/tor/ticket/30716#comment:16
17:37:23 (which i've been meaning to send to the traffic-obf@ list)
17:37:55 our partners are struggling with getting access to a dataset for evaluation
17:38:02 hopefully, we will sort that out in the coming weeks
17:38:25 and cohosh has been making steady snowflake progress as well
17:38:26 nice
17:39:05 regarding s30: also steady progress
17:39:23 on some tickets we're lagging behind, on others we're ahead
17:40:12 anything specific you'd like to know gaba?
17:40:25 just a check-in. thanks!
17:40:36 alright, next up is snowflake's poll interval
17:41:02 A few weeks ago I tailed the log of the broker, and requests were coming in furiously.
17:41:23 Lately those particular log lines have been removed, so it's not as apparent, but according to https://snowflake-broker.bamsoftware.com/debug there's 500 proxies,
17:41:39 and with a poll interval of 20 s, that's 25 incoming proxy requests per second.
17:41:42 i liked the idea of having the broker tell each snowflake when to come back
17:41:59 Something on the order of 1 or 2 per second is probably adequate.
17:42:07 arma2: serna has started on that ticket
17:42:14 i agree
17:42:23 arma2: yeah that's #25598, serna ran into some trouble with that.
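The poll-rate arithmetic in the exchange above can be written out explicitly. A small sketch (function names are illustrative, not from the snowflake codebase):

```python
def broker_poll_rate(n_proxies, poll_interval_s):
    """Steady-state requests/second at the broker, assuming every
    idle proxy polls once per interval."""
    return n_proxies / poll_interval_s

def interval_for_rate(n_proxies, target_rps):
    """Poll interval needed to hold the aggregate rate at target_rps."""
    return n_proxies / target_rps

# 500 proxies polling every 20 s means 25 requests/s at the broker:
print(broker_poll_rate(500, 20))              # 25.0
# Raising the interval to 300 s brings the rate down to roughly 1.7/s:
print(round(broker_poll_rate(500, 300), 1))   # 1.7
```

At 500 proxies, moving from a 20 s to a 300 s interval drops the broker's steady-state load from 25 requests/s to about 1.7 requests/s, within the 1-2 per second range dcf1 suggests is adequate.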
17:42:42 we have metrics of how many idle proxies we have: https://metrics.torproject.org/collector/archive/snowflakes/
17:42:57 and it is orders of magnitude more than the number of client matches
17:43:31 Anyway, I think an interval of around 300 seconds would be workable.
17:43:37 sounds good
17:43:58 (I'm still confused as to why, when a client connects and gets an ultra-fresh proxy, the proxy sometimes doesn't respond.)
17:44:25 Another thing we did in flash proxy was make the proxy not connect to the broker immediately, but wait about a minute before contacting it the first time.
17:44:49 The idea there was to eliminate the cases where someone goes to the page or activates the extension momentarily just to see what it looks like.
17:44:56 I'm not sure if it really helped.
17:44:59 yeah, i'm doing some dogfooding and looking into that, but it's *really* difficult to track down
17:45:18 it also seems that about 1/3 of all snowflake proxies do this, the last time i checked
17:45:46 Hmm, yeah it's weird. It almost seems like it must be some systemic thing, like "all proxies on browser X" or something.
17:45:47 which suggests to me that it's probably not one-off user actions like that
17:45:53 yup
17:46:17 ok, I'll make a ticket to increase the poll interval.
17:46:30 do the snowflakes offer specifics about themselves? like, can we distinguish cupcakes from snowflakes, at the broker
17:46:31 you know, i haven't updated the chrome version in a while. i'll do that this afternoon
17:46:42 i'm still unsure what the status of cupcake is
17:47:06 arma2: we can distinguish snowflake webextensions from standalones, that's about it right now
17:47:10 Oh I thought the Chrome store was just really slow at reviewing, I noticed they never listed 0.0.11 before 0.0.12 was available.
17:47:29 i'm working on #29207 at the moment and can add something there
17:47:42 dcf1: no >.< i thought cupcake was our chrome app now
17:47:51 but i think it's best to just keep going with snowflake on chrome in parallel
17:48:09 Oh aha I see, once again I misunderstand the situation :)
17:48:18 i mean, they are slow at reviewing but not that slow
17:48:30 have we heard anything from griffin since the dev meeting?
17:49:00 No, I updated some tickets he's cced on, but I should send email, because the email address he uses for trac may not have been working.
17:49:56 doing both in parallel sounds good, at least until we hear from griffin
17:49:58 cohosh: it does seem like "really difficult to track down" and "the snowflake could specify some details about itself when it contacts the broker" could go well together
17:50:11 yeah
17:51:05 alright i'll add that to the changes in #29207
17:51:34 are we done with the poll interval? if so, a dedicated build server is next
17:51:57 yes please on the build server
17:52:12 So for the pion-webrtc tests I rented a VPS because building tor-browser is too much for a personal computer
17:52:14 i have a digital ocean instance that i use but my use is intermittent and not really worth what i pay for it
17:52:29 It's like 100+ GB and 48 hrs+ if starting from scratch.
17:52:42 And yeah, cohosh has been doing the same thing out of necessity.
17:52:59 Devs shouldn't be paying for this IMO.
17:53:13 yes, agreed
17:53:15 In the past, there was some EC2 server or something that was shared with the TB devs, and that was really nice.
17:53:38 But that got shut down and I'm not aware of any replacement.
17:54:00 let's file an admin ticket for getting such a vm?
17:54:01 One option would be to try eclips.is, but they have a credits system and we would probably need most of the 200 credits you get by default.
17:54:20 For comparison, the new snowflake broker with 10 GB disk and 2 CPUs is 25 credits.
17:54:29 storage space is a big issue here
17:54:54 10 GB might not be enough
17:55:09 I'm suggesting 200 GB for the build server
17:55:24 I'm saying that what we need is much bigger than the new broker, and the new broker cost 25 credits.
17:55:26 oh yeah that's way better
17:55:41 So eclips.is may not be enough, though I could try it.
17:56:03 i think there are idle tor browser build machines currently
17:56:11 but i might be wrong. it's worth asking them.
17:56:58 we have one and i usually use it to do release and test builds
17:57:15 but we could think about sharing that one more if that's helpful
17:58:10 I think it's helpful, because for me, for example, just doing a test for #32076 before putting the ticket in needs_review is pretty cumbersome.
17:58:35 I happened to have a tor-browser-build VPS already set up, but I shut it down right after that ticket because it was costing money.
17:59:12 i think sharing the build machines with the browser team makes a lot of sense
17:59:23 since that's what you are both building
17:59:32 same here, i don't do incremental builds that often but it's nice to test patches before putting them in needs review
17:59:53 so it wouldn't be a lot of extra load on that machine
18:01:08 ok, we're out of time but have one item left on our agenda. shall we continue in #tor-dev?
18:01:43 ok
18:01:52 looks like the only "needs review" for this week is #31890. i'll send my weekly reminder to sina
18:02:08 phw: feel free to cc me on it if that's useful :)
18:02:23 #endmeeting