16:00:06 <meskio> #startmeeting tor anti-censorship meeting
16:00:06 <MeetBot> Meeting started Thu Sep 19 16:00:06 2024 UTC.  The chair is meskio. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:06 <MeetBot> Useful Commands: #action #agreed #help #info #idea #link #topic.
16:00:10 <meskio> hello everybody!!
16:00:14 <cohosh> hi
16:00:14 <meskio> here is our meeting pad: https://pad.riseup.net/p/r.9574e996bb9c0266213d38b91b56c469
16:00:16 <meskio> ask me in private to give you the link of the pad to be able to edit it if you don't have it
16:00:18 <meskio> I'll wait a few minutes for everybody to add what you've been working on and put items on the agenda
16:00:41 <onyinyang> hihi
16:01:26 <WofWca[m]> 👋
16:01:27 <shelikhoo> hi~hi~
16:03:26 <meskio> I guess we can start
16:03:34 <meskio> first an announcement from my side
16:03:48 <meskio> yesterday we finally moved to use rdsys for the email distributor
16:03:55 <meskio> TPA found the issue in the DKIM handling
16:04:01 <meskio> I believe everything works fine now
16:04:04 <onyinyang> yay! \o/
16:04:08 <meskio> I did shut down BridgeDB
16:04:10 <shelikhoo> nice!!!
16:04:24 <meskio> but everything is still there, and I left an issue to remove it in a month if everything is fine
16:04:33 <meskio> I'm right now writing an email to tor-relays about it
16:05:02 <meskio> I kept the discussion point from last week about the drop of unrestricted proxies
16:05:10 <meskio> do we have something to discuss about it?
16:05:20 <dcf1> WofWca[m] you left a comment on the issue
16:05:37 <meskio> looks like it might be only iptproxy?
16:05:52 <shelikhoo> here!
16:05:53 <shelikhoo> https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40387#note_3079668
16:06:01 <WofWca[m]> dcf1: yeah
16:06:19 <WofWca[m]> It's 2000 now
16:06:22 <WofWca[m]> Stable
16:06:28 <WofWca[m]> For 3 days
16:06:38 <dcf1> But down from ~7000 2 weeks ago?
16:06:39 <shelikhoo> I might have found a symptom of this reduced number of unrestricted NAT type proxies
16:06:39 <WofWca[m]> (amount of unrestricted proxies)
16:06:47 <WofWca[m]> Yes, down from 7000
16:07:38 <shelikhoo> and the issue above describes what I have seen, although I didn't get a chance to investigate yet
16:08:31 <dcf1> I don't understand the point "I wonder if only iPtProxy is affected." According to https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40384#note_3077291, there are virtually no unrestricted iptproxy proxies anyway (maybe as the result of some bug), so how could they be the source of a decline in unrestricted proxies?
16:08:50 <meskio> ahh, sorry I think I'm mixing issues in my head
16:09:00 <meskio> I haven't been following the issue closely
16:09:11 <cohosh> i did a little digging into the iptproxy and orbot code and i am not surprised iptproxy is failing NAT checks
16:09:12 <meskio> and there was another issue mentioning Iptproxy if I recall correctly
16:09:35 <meskio> the one about unknown nat, yes I'm mixing things
16:09:45 <dcf1> shelikhoo: why would tpo/anti-censorship/pluggable-transports/snowflake#40387 have a disproportionate effect on iptproxy and not other proxy types?
16:09:55 <cohosh> when the snowflake proxy is started in orbot, the STUN server is chosen randomly from this list: https://github.com/guardianproject/orbot/blob/939ce1d58db6810f51f833cb33dd51235f1eed69/orbotservice/src/main/java/org/torproject/android/service/OrbotService.java#L404
16:10:05 <dcf1> Also, do you know whether #40387 affects the existing production broker, or only the new broker you are currently setting up?
16:10:13 <cohosh> https://github.com/guardianproject/orbot/blob/939ce1d58db6810f51f833cb33dd51235f1eed69/orbotservice/src/main/assets/fronts#L3
16:10:37 <WofWca[m]> dcf1: Maybe we used to have a higher number of unrestricted iptproxy proxies? That's why I'm asking for historical Prometheus data
16:10:39 <dcf1> cohosh: 👀
16:10:43 <cohosh> and that list has several stun servers that either no longer work or no longer support RFC 5780
16:11:07 <cohosh> compare that to how we have the default standalone proxy configured to use a Google STUN server that does support it
16:11:37 <dcf1> cohosh: ah, so it is kind of the same issue as tpo/anti-censorship/pluggable-transports/snowflake#40304
16:11:41 <cohosh> yes
16:12:07 <dcf1> cohosh: but if I understand correctly, there's no need for the Orbot kindness mode proxy to use multiple STUN servers or choose one randomly, correct?
16:12:14 <cohosh> that's right
16:12:17 <dcf1> Like the standalone proxy just hardcodes one
16:12:18 <dcf1> ok
16:12:47 <cohosh> there was some discussion of doing that for clients after the stunprotocol.org maintainer complained, as a kindness thing for the amount of load we produce
16:12:48 <dcf1> so it sounds like we want to encourage orbot devs to streamline and simplify their STUN server selection
16:13:06 <cohosh> but for google stun servers i don't think we need to care about that
16:13:22 <dcf1> cohosh: doing it for proxies, you mean? Not clients, because we already randomize over STUN servers for clients?
16:13:22 <cohosh> i think orbot also uses the same setting for both clients and proxies so they might need to refactor to change this
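(A minimal sketch of the kind of check discussed above: send a STUN binding request to a candidate server and see whether the response carries OTHER-ADDRESS, which RFC 5780 NAT behavior discovery needs. This assumes the pion/stun library and its OtherAddress helper; the Google server is only an example, not Orbot's actual code.)

    package main

    import (
        "fmt"

        "github.com/pion/stun"
    )

    // checkRFC5780 sends a STUN binding request and reports whether the
    // server included the OTHER-ADDRESS attribute, without which full
    // RFC 5780 NAT behavior discovery is not possible.
    func checkRFC5780(addr string) error {
        c, err := stun.Dial("udp4", addr)
        if err != nil {
            return err
        }
        defer c.Close()

        request := stun.MustBuild(stun.TransactionID, stun.BindingRequest)
        var probeErr error
        // Do blocks until the handler runs or the request times out.
        err = c.Do(request, func(res stun.Event) {
            if res.Error != nil {
                probeErr = res.Error
                return
            }
            var other stun.OtherAddress
            probeErr = other.GetFrom(res.Message)
        })
        if err != nil {
            return err
        }
        return probeErr
    }

    func main() {
        // Example server only; substitute each entry from Orbot's list.
        fmt.Println(checkRFC5780("stun.l.google.com:19302"))
    }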
16:13:26 <shelikhoo> I just tested it with the currently deployed nat probetest and didn't observe the issue
16:13:27 <cohosh> dcf1: that's right
16:13:53 <shelikhoo> so I was unable to say for certain whether it impacts current production nat probetest
16:14:44 <dcf1> ok this is good gen
16:14:57 <cohosh> shelikhoo: oh sorry i forgot we're doing the probest for this library, hm
16:15:05 <dcf1> (gen: [chiefly UK, Ireland, Commonwealth, informal] Information.)
16:15:06 <cohosh> it still might fail if the stun url you happen to get isn't working though
16:15:51 <cohosh> but i'm second guessing now
16:16:55 <shelikhoo> regarding the issue I reported, the local sdp offer looks valid
16:17:22 <cohosh> WofWca[m]: how far back do you want for historical prometheus data? i can look and comment on the issue after the meeting
16:17:49 <cohosh> it's not in a format i can quickly deal with
16:18:08 <WofWca[m]> Well, I guess a few days before the drop occurred would work cohosh
16:18:17 <cohosh> ok
16:18:24 <WofWca[m]> Thanks!
16:18:47 <shelikhoo> As for the different impact on different proxy implementations, one possibility is how an unknown result is processed; some will just let it overwrite the last result, others might retain the last result
16:19:13 <shelikhoo> in the standalone proxy, it retains the last result if the most recent NAT test's result is unknown
16:19:36 <WofWca[m]> The Go lib never goes from a known type to "unknown" AFAIK
16:19:43 <WofWca[m]> Even when probetest fails
16:20:10 <shelikhoo> yes, so it will tolerate more failures of the NAT type testing process
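(To make the difference concrete, a toy sketch of the retention behavior shelikhoo describes; the names are illustrative, not the actual snowflake proxy code.)

    package main

    import "fmt"

    // updateNATType keeps the previously discovered NAT type when a probe
    // comes back "unknown", so a transient probe failure does not demote a
    // proxy that was previously "unrestricted".
    func updateNATType(current, probed string) string {
        if probed == "unknown" && current != "" {
            return current // retain the last known result
        }
        return probed // otherwise overwrite with the fresh result
    }

    func main() {
        fmt.Println(updateNATType("unrestricted", "unknown"))    // unrestricted
        fmt.Println(updateNATType("unrestricted", "restricted")) // restricted
    }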
16:20:28 <WofWca[m]> About Orbot: we might want to try running a proxy with Orbot's STUN servers to see if that is how we can get probetest to fail
16:20:56 <cohosh> yeah that's a good thought
16:22:33 <cohosh> i do think iptproxy should change how they configure their proxy stun servers regardless
16:22:57 <cohosh> but it would help answer whether that's what's causing issues here
16:23:39 <cohosh> because i would have expected it to result in more restricted NATs than necessary rather than more unknowns but it's been a while since I looked at this
16:25:36 <meskio> I guess we are done with this topic
16:25:40 <meskio> any more discussion points?
16:25:50 <cohosh> nothing from me, sounds like there are good next steps
16:26:01 <cohosh> thanks for bringing this up WofWca[m] :)
16:26:17 <WofWca[m]> 👍️
16:26:36 <meskio> there is one interesting link:     https://ntc.party/t/encrypted-clienthello-ech-is-now-enabled-on-cloudflare/10075
16:26:41 <shelikhoo> yes! thanks!
16:26:50 <meskio> cloudflare seems to have started using ECH
16:27:11 <meskio> great news, there might be a future where we can use ECH in a cloud provider as a signaling channel
16:27:54 <meskio> but I assume there is a fallback for old clients, so for censors, blocking ECH is basically free for now
16:28:17 <dcf1> meskio: not necessarily, I have not looked at the details of the current implementations, but
16:28:37 <dcf1> ECH GREASE means that browsers send the ECH extension even when they are not actually using ECH
16:28:52 <dcf1> It's one of the most important differences from ESNI afaik
16:29:24 <shelikhoo> the question here is whether the browser will retry with the ech extension removed
16:29:25 <dcf1> It seems the deployment of ECH by cloudflare was low-key, and maybe that was intentional
16:30:10 <meskio> yes, I assume they don't want much attention until they have tested it in a small chunk of their services
16:30:40 <shelikhoo> they are deploying ech on "free" plan domains
16:30:57 <dcf1> if it's all the free plan domains, that's a huge number of sites
16:31:07 <shelikhoo> nice free "domain fronting"
16:31:12 <meskio> yeah!!
16:31:25 <shelikhoo> yes, they said it is going to be a step by step deployment
16:31:35 <dcf1> shelikhoo: do you know where they said that?
16:32:47 <shelikhoo> "If you're a website, and you care about users visiting your website in a fashion that doesn't allow any intermediary to see what users are doing, enable ECH today on Cloudflare. We've enabled ECH for all free zones already. "
16:32:52 <shelikhoo> sorry it is already enabled
16:33:00 <shelikhoo> https://blog.cloudflare.com/announcing-encrypted-client-hello/
16:33:08 <shelikhoo> it is already enabled for all free plans
16:33:21 <dcf1> no, that is an old blog post, 2023-09-29
16:33:32 <dcf1> that was the first time they activated it, then they deactivated it again shortly after
16:33:46 <dcf1> just 1 week ago they re-activated it again, that's the new news
16:34:07 <dcf1> no blog post this time, as far as I know
16:34:48 <meskio> shelikhoo: your point on client behavior is a good question, will clients fail if an ECH connection is blocked or will users not notice? that will be the major difference in whether we can use ECH or not
16:35:48 <shelikhoo> sorry I couldn't remember where I saw it
16:35:50 <shelikhoo> but
16:35:52 <meskio> dcf1: I guess sending the extension lets the provider know if this client should have been communicating over ECH, but doesn't make the client experience fail
16:36:13 <shelikhoo> nslookup -q=https v2fly.org
16:36:13 <shelikhoo> Server:		127.0.0.53
16:36:13 <shelikhoo> Address:	127.0.0.53#53
16:36:13 <shelikhoo> Non-authoritative answer:
16:36:13 <shelikhoo> v2fly.org	rdata_65 = 1 . alpn="h3,h2" ipv4hint=104.21.41.217,172.67.152.9 ech=AEX+DQBBfwAgACDkVFhlmThFt985C15W9ie6x7s5KLh+9yLEbGJZNIJyJQAEAAEAAQASY2xvdWRmbGFyZS1lY2guY29tAAA= ipv6hint=2606:4700:3036::6815:29d9,2606:4700:3037::ac43:9809
16:36:38 <shelikhoo> but whether ech is enabled can be tested with a simple dns command
16:36:38 <dcf1> if the client is not really using ECH, it still sends an ECH extension with a random payload, and the handshake uses the SNI in the outer Client Hello, as I understand it
16:37:21 <meskio> mmm, do you mean we can domain front by using SNI and ECH?
16:37:21 <dcf1> Ah there is a BBS thread about the 2023 deployment that was shortly retracted
16:37:33 <meskio> as in using ECH as the real domain we want to talk to?
16:38:02 <dcf1> yes, there is an outer Client Hello with an overt SNI, and the ECH extension contains an inner Client Hello with the covert SNI
16:38:04 <shelikhoo> yes, that is what would be possible if ECH is widely used
16:38:21 <meskio> ohh, so censors don't see the difference between using ECH or not
16:38:30 <dcf1> But browsers are supposed to send a dummy, random ECH extension even when there is no covert SNI and they just want to use the overt SNI as usual
16:38:30 <meskio> as we can keep a fake SNI there
16:38:32 <meskio> wow, nice
16:38:35 <dcf1> https://github.com/net4people/bbs/issues/292
16:38:39 <dcf1> https://community.cloudflare.com/t/early-hints-and-encrypted-client-hello-ech-are-currently-disabled-globally/567730
16:38:50 <dcf1> (2023-10-12)
16:38:57 <dcf1> > We have sadly had to disable both of these features globally whilst we address a number of issues with them. These issues are unrelated. We are in the process of adding a label to each of the toggles in dashboard to alert that they are disabled.
16:39:11 <dcf1> So that was the first attempt at deployment, now we are in a second attempt a year later
16:40:57 <dcf1> So far, I am "wait and see" on this issue, but it is potentially exciting for us
16:41:32 <meskio> yeah
16:41:46 <shelikhoo> I don't think we need to do too much until this ech is widely used
16:42:13 <shelikhoo> although V2Ray is already receiving pull requests about it
16:42:19 <shelikhoo> as golang added ech support
16:42:52 <meskio> do we need uTLS to have support for it? or will golang having support for it bring it to uTLS?
16:42:59 <dcf1> my intuition says that circumvention developers should not move too quickly, it's not good if circumvention is the primary use of ECH in the early stage IMO
16:43:16 <shelikhoo> no, it is in the standard library
16:43:19 <dcf1> but of course there are a lot of projects and they will all make their own choices
16:43:38 <meskio> yes, I agree, let it be used more before jumping into it
16:44:08 <dcf1> https://github.com/golang/go/issues/63369 "crypto/tls: support Encrypted Client Hello in clients"
16:44:46 <dcf1> but yeah, we probably need uTLS support or similar as well
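(For reference, a minimal client sketch of that standard-library support, assuming Go 1.23+ where crypto/tls gained Config.EncryptedClientHelloConfigList; the ECH config list below is the one from the v2fly.org lookup earlier in the meeting and rotates, so in practice it would be fetched fresh from the HTTPS DNS record.)

    package main

    import (
        "crypto/tls"
        "encoding/base64"
        "log"
    )

    func main() {
        // ech= value from the HTTPS DNS record shown earlier for v2fly.org;
        // Cloudflare rotates these keys, so fetch it from DNS in real use.
        raw := "AEX+DQBBfwAgACDkVFhlmThFt985C15W9ie6x7s5KLh+9yLEbGJZNIJyJQAEAAEAAQASY2xvdWRmbGFyZS1lY2guY29tAAA="
        echConfigList, err := base64.StdEncoding.DecodeString(raw)
        if err != nil {
            log.Fatal(err)
        }

        cfg := &tls.Config{
            // The covert name travels in the encrypted inner ClientHello;
            // the outer ClientHello only exposes the config's public_name
            // (cloudflare-ech.com in this case).
            ServerName:                     "v2fly.org",
            MinVersion:                     tls.VersionTLS13,
            EncryptedClientHelloConfigList: echConfigList, // Go 1.23+
        }

        conn, err := tls.Dial("tcp", "v2fly.org:443", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        log.Println("TLS handshake completed with ECH offered")
    }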
16:46:12 <meskio> anything else for today's meeting?
16:46:45 <shelikhoo> eof from me
16:46:48 <dcf1> I tried to summarize the discussion about probetest and iptproxy in the notes
16:47:00 <dcf1> I'm not sure I have a handle on how all these issues are related
16:47:07 <meskio> thank you for keeping notes
16:47:10 <dcf1> But I wanted to quickly summarize the action items I saw
16:47:36 <dcf1> * cohosh is going to provide WofWca[m] with some past prometheus data on proxy NAT types
16:47:45 <meskio> #endmeeting