16:04:00 #startmeeting anti-censorship team
16:04:00 Meeting started Thu Jan 22 16:04:00 2026 UTC. The chair is cohosh. Information about MeetBot at https://wiki.debian.org/MeetBot.
16:04:00 Useful Commands: #action #agreed #help #info #idea #link #topic.
16:04:39 hi!
16:04:52 hi~
16:05:02 welcome to the anti-censorship team meeting
16:05:20 here is our meeting pad: https://pad.riseup.net/p/r.9574e996bb9c0266213d38b91b56c469
16:05:20 editable link available on request
16:06:48 i can start with an announcement
16:07:23 i wrote up a proposal for a change to proxy poll rates: https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40507
16:08:04 the goal is to increase fairness in proxy distribution (though a proxy's capacity will still apply)
16:08:20 please feel free to take a look and comment
16:09:04 so the idea is to implement a rate limit within the broker itself, the rate limit being keyed on proxy IP address
16:09:04 i'm working with some others on simulations to see whether this will advantage censors or clients more in the event of an enumeration attack
16:09:20 yes, a dynamic rate limit
16:10:36 which could look like a map[net.IP]time.Time, a "no-sooner-than" timestamp for each IP address
16:11:31 yeah, that's what i was thinking
16:11:49 and i think we should either evict entries or wipe the map every 24 hours
16:11:54 I suppose entries in the hash table will have to expire eventually. As long as we don't need longer history for an IP address, we could expire an entry as soon as its no-sooner-than timestamp is hit, since that IP is free to query again after that point
16:11:59 to avoid it growing too big
16:12:35 oh yeah, just using the next-poll timestamp as the expiry is a good idea
16:12:44 yes!
16:13:01 did we think about how to deal with proxies sharing the same IP address?
16:13:24 should we distribute the quota among them
16:13:44 or find a way to get them synced in some way
16:14:12 my feeling is to just let the rate limit apply, or have a practical rate limit that is a small multiple of the one we advertise
16:14:50 or it could even be like a priority queue: just check for an IP's presence in the queue to see if it should be allowed yet, and pop entries off the front of the queue by their date. It's especially easy to implement such a queue if the rate limit is the same for all proxies, since all you ever do is push to the back and pop from the front
16:15:08 yeah, my fear is if there are 3 proxies running at the same time
16:15:30 since they don't know each other, the next poll time will only work for one of them
16:15:34 the turbotunnel ClientMap does that kind of priority queue expiry thing https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/blob/7afd18e57fbb874a53a9452e9b84ac3f9e1b1e3a/common/turbotunnel/clientmap.go
16:15:49 dcf1: hmm, that's an interesting idea, and then if we *really* need the proxies we can use them even if they shouldn't be polling yet
16:17:04 as a result, with n proxies they will send (n-1) discarded polls for every n polls they send
16:17:04 ok i think i like that idea
16:17:50 you could have honest proxies add a short random delay to their poll interval (like add up to 20%), that would ensure one proxy on an IP address doesn't consistently starve out other proxies on that IP address. you would want to do that anyway, to smooth out resonance effects from e.g. sudden restarts
16:18:17 that's a good idea
16:18:49 oh yes, I think this priority queue expiry thing would work... I think it should work with a small timeout
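
As a concrete illustration of the jitter suggestion above, here is a small Go sketch. The names and the 2-minute nominal interval are made up for illustration; this is not the actual proxy code.

// A small sketch, with hypothetical names, of the suggestion above: honest
// proxies add up to 20% random delay to their nominal poll interval so that
// proxies sharing an IP address don't consistently starve each other out and
// so that sudden restarts don't synchronize their polls.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

const pollInterval = 2 * time.Minute // nominal poll interval (illustrative value)

// jitteredPollInterval returns the nominal interval plus a uniformly random
// extra delay of up to 20%.
func jitteredPollInterval() time.Duration {
	jitter := time.Duration(rand.Int63n(int64(pollInterval / 5)))
	return pollInterval + jitter
}

func main() {
	for i := 0; i < 3; i++ {
		fmt.Println(jitteredPollInterval())
	}
}
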
16:19:08 as the proxy also has webrtc's timeout
16:19:33 so the request from the client cannot wait in the queue for an extended period of time
16:19:54 if multiple proxies have the same IP, they will be distributed much less, but i don't think that's a problem. i think it's a realistic scenario: multiple proxies on the same IP are good if we experience too much load but don't improve censorship resistance
16:20:02 so the request from the proxy cannot wait in the queue for an extended period of time
16:20:52 yes... I agree we can distribute the proxies running at the same IP less...
16:21:18 oh i think i misunderstood what the priority queue is
16:21:43 cohosh see clientMapInner https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/blob/main/common/turbotunnel/clientmap.go?ref_type=heads#L65
16:22:11 it is not a bad thing, so long as we make them receive requests from clients in a "semi-fair" way
16:22:23 it has byAge (array) and byAddr (hash table) for quick indexing either by expiration date or by clientid (which in the rate limit case would be an IP address instead)
16:22:30 yeah, i was imagining that the pool of available proxies is a priority queue based on time to or since next allowed poll
16:22:57 but this is just for the map of last-seen timestamps
16:23:07 although i guess we could do both of those things
16:23:37 clientMapInner.removeExpired just pops entries off the front of the queue as long as they are in the past, because the array is a container/heap that keeps them sorted that way. It's how you avoid doing a full traversal of the map all the time.
16:23:53 yeah that is a really nice way to implement this
16:24:29 sorry, just some brainstorming, I don't want to get out too far ahead of requirements gathering
16:24:53 this is good, i've started doing some implementation and i like the ideas here
16:24:56 thanks!
16:24:58 do you think that's all we need? a single "no-sooner-than" timestamp per IP address, a uniform rate for all proxies?
16:25:18 for the rate limiting part, yes
16:25:41 for setting what the rate limit should be, i have some ideas on calculating it based on client poll rates or pool size
16:25:52 but i haven't tried those out yet
16:25:56 I think this should work for now...
16:26:10 ooh I suspect there must be some theorem from queuing theory that would inform such a policy
16:26:24 nice haha
16:26:57 * cohosh rubs hands together at the thought of reading and applying some cool theory
16:27:08 'cause that sort of thing is like "if you have people calling the support telephone line Poisson distributed at rate lambda, how many agents do you need in your call center"
16:27:27 totally
16:28:20 I'll update the ticket with my immediate impressions and ideas
16:28:26 thanks!
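
To make the broker-side idea concrete, here is a rough Go sketch of a per-IP "no sooner than" rate limit with heap-based expiry, modeled loosely on turbotunnel's clientMapInner. All names are illustrative, and this is not the broker's actual implementation. One practical detail: net.IP is a byte slice and cannot be a Go map key directly, so the sketch keys on the IP's string form.

// A rough sketch, modeled loosely on turbotunnel's clientMapInner (names here
// are illustrative, not the actual snowflake code), of the broker-side rate
// limit: a "no sooner than" timestamp per proxy IP, indexed both by a
// container/heap ordered by expiration time (byAge) and by a hash table keyed
// on the IP (byAddr). removeExpired pops entries off the front of the heap
// while they are in the past, so expiry never needs a full map traversal.
package main

import (
	"container/heap"
	"fmt"
	"time"
)

type entry struct {
	ip           string    // string form of the proxy IP (net.IP itself is not a valid map key)
	noSoonerThan time.Time // earliest time the next poll from this IP is allowed
}

type limiterInner struct {
	byAge  entryHeap         // ordered by noSoonerThan, soonest first
	byAddr map[string]*entry // indexed by IP for quick lookup
}

// removeExpired drops every entry whose timestamp is already in the past;
// such an IP is free to poll again, so no history needs to be kept for it.
func (inner *limiterInner) removeExpired(now time.Time) {
	for len(inner.byAge) > 0 && now.After(inner.byAge[0].noSoonerThan) {
		e := heap.Pop(&inner.byAge).(*entry)
		delete(inner.byAddr, e.ip)
	}
}

// entryHeap implements heap.Interface over *entry, ordered by noSoonerThan.
type entryHeap []*entry

func (h entryHeap) Len() int            { return len(h) }
func (h entryHeap) Less(i, j int) bool  { return h[i].noSoonerThan.Before(h[j].noSoonerThan) }
func (h entryHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *entryHeap) Push(x interface{}) { *h = append(*h, x.(*entry)) }
func (h *entryHeap) Pop() interface{} {
	old := *h
	e := old[len(old)-1]
	*h = old[:len(old)-1]
	return e
}

func main() {
	inner := &limiterInner{byAddr: make(map[string]*entry)}
	now := time.Now()
	// Three proxy IPs with different next-allowed-poll times.
	for i, ip := range []string{"192.0.2.1", "192.0.2.2", "192.0.2.3"} {
		e := &entry{ip: ip, noSoonerThan: now.Add(time.Duration(i-1) * time.Minute)}
		heap.Push(&inner.byAge, e)
		inner.byAddr[ip] = e
	}
	inner.removeExpired(now)       // drops only 192.0.2.1, whose timestamp is in the past
	fmt.Println(len(inner.byAddr)) // prints 2
}
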
16:28:37 "0, they will be handled with AI that couldn't understand or do anything" 16:28:52 i think that's it for now, this will take a while to implement and assess 16:29:03 i'll bring it up again later when we have some simulation results :) 16:29:07 Shelikhoo[mds]: lol 16:31:42 ok next thing on the agenda 16:31:49 i think it's from you Shelikhoo[mds] 16:32:04 this topic is from me 16:32:04 basiclly I have received a direct report from user that their network environment will block/restrict tls 1.2 traffic, 16:32:04 and when using randomlized fingerprint with utls, its connection will sometime get blocked because of the usage of tls1.2 16:32:04 I was thinking if we wants to fix this issue 16:32:15 Maybe support pinning tls1.3 when using uTLS random fingerprint? 16:32:15 https://gitlab.torproject.org/tpo/anti-censorship/team/-/issues/171 16:33:03 oh that's interesting 16:33:11 utls's random fingerprint setting has a probability 40% to set max tls version to 1.2 16:33:41 we could try to find a way to get around this issue 16:33:48 I don't know if the tls Config MinVersion is respected in utls randomization 16:34:06 https://github.com/refraction-networking/utls/blob/8fe0b08e9a0e7e2d08b268f451f2c79962e6acd0/common.go#L811 16:34:07 Shelikhoo[mds]: sorry it was 60% 16:35:54 theodorsm: The function call signature is 16:35:56 func generateRandomizedSpec(... (full message at ) 16:36:29 so I think it is very likely max or min version setting is not respected 16:38:17 it sounds like the easiest solution is to suggest a different fingerprint to use in this case 16:38:54 but i haven't been paying close attention to which fingerprints work well 16:39:11 have we been using random because it's difficult to keep the other ones up to date? 16:39:42 yeah, I think other fingerprints kind of get outdated really quickly 16:40:54 and many of these contributed fingerprint are problematic when get first introduced 16:41:13 like requires fix from time to time 16:42:05 ok so at least having an option to pin tls1.3 with a random fingerprint would be useful 16:42:44 yeah, if we wants to do that I can have a look into way to implement such a thing 16:43:28 the primary reason I think we wants to discuss this is that the bridgeline space situation is getting worse and worse 16:43:52 that is true 16:44:07 I wonder if I should implement this as a regular bridge line option 16:44:19 or we need something else 16:44:28 Seems like we could create a custom ClientHelloSpec @Shelikhoo: https://github.com/refraction-networking/utls/blob/8fe0b08e9a0e7e2d08b268f451f2c79962e6acd0/u_parrots.go#L2767 16:46:17 theodorsm: Yes! If we wants to get chrome’s fingerprint supported in this way, we might need to actually implement some of its tls extensions 16:46:46 which requires a lot of engineering effort 16:47:04 I can assist if needed, but do we want to support it? 16:47:51 We could perhaps upstream a change to utls to control min version too 16:48:41 Also, would be interesting to hear more about the report. Where did TLS 1.2 get blocked? 
16:49:18 If I recall correctly it was somewhere in Russia
16:50:07 I think the question being discussed is what is the best approach moving forward on this issue
16:50:34 i don't have a strong opinion about this, but i would try to find a different working fingerprint in the meantime and then see how much work it would be to implement a tls1.3-only random fingerprint
16:51:44 we can make space in the bridge line and we can ship some backup stun servers in the ClientPluginTransport line too if needed
16:52:16 Yes... I can make an estimate of the work required first
16:52:41 And don't think too much about the bridge line space issue for now
16:53:07 *ClientTransportPlugin line, heh
16:54:23 I think we can discuss this again when I have a better idea about the work required to implement the "always tls 1.3" work
16:54:28 ok sounds good
16:54:34 Nice!
16:54:36 thanks Shelikhoo[mds]
16:55:08 there's one more quick thing from me, which is that i want to make breaking changes to some of the APIs in our snowflake packages, which would require a major version bump
16:55:24 i know we've discussed this before, so i don't necessarily want a long discussion
16:55:48 but we've acquired some technical debt in, for example, the messages package
16:56:16 https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/blob/7afd18e57fbb874a53a9452e9b84ac3f9e1b1e3a/common/messages/proxy.go
16:56:35 and i just want to rewrite this
16:57:23 I think we can have an issue to discuss this asynchronously.
16:57:32 sure
16:57:39 We might want to bundle all the breaking changes in one go
16:57:44 Oh you mean the "doSomething" and "doSomethingWithThisOtherThing" pattern? Normalizing that to something more regular?
16:57:57 dcf1: yes
16:58:09 and to make it more like the client.go file in that directory
16:58:22 so we're not individually returning all the fields in the Decode functions
16:58:42 we can return the struct instead
16:58:55 oh, offhand I don't see a reason to break compatibility with those APIs, even if exposing a new more regular API
16:58:57 which is why we needed that pattern in the first place
16:59:23 but I think it's not a big deal, if you think it doesn't cause too much pain for our downstreams it's ok
16:59:23 dcf1: ah so you're saying just add new functions and keep the old ones?
16:59:33 ok
16:59:35 yeah
16:59:47 write the old ones in terms of the new ones
17:00:01 yeah that sounds fine too
17:00:10 I think most of the downstreams are not using these functions directly
17:00:17 but I haven't looked at this specifically to know if that's reasonable, use your own judgement I'd say
17:00:24 In theory we could search all public repos for references
17:00:28 i'll maybe comment some of these other functions as deprecated, and then if/when we do a major version bump we can remove them if we want to clean up the code
17:00:47 and it'll be easy to find the ones we're not using anymore
17:00:53 cohosh: Nice!
17:01:14 that's it from me, anything else for today?
17:01:31 We also have this kind of do…with… pattern in other parts of the codebase
17:01:52 It would be great if we can get rid of them as well, but maybe later
17:02:00 EOF from me
17:02:02 Thanks
17:03:10 awesome, thanks everyone!
17:03:12 #endmeeting
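
As a rough illustration of the API change discussed above, here is a Go sketch with hypothetical names (it is not the actual snowflake messages package): the new Decode function returns a struct, and the old field-by-field function is kept, deprecated, and written in terms of the new one so downstreams keep compiling.

// An illustrative sketch (hypothetical names, not the actual snowflake
// messages package) of the refactoring discussed above: a new Decode function
// that returns a struct instead of individual fields, with the old
// field-by-field function kept for compatibility, marked deprecated, and
// rewritten as a thin wrapper around the new one.
package main

import (
	"encoding/json"
	"fmt"
)

// ProxyPoll is a hypothetical stand-in for a decoded proxy poll message.
type ProxyPoll struct {
	Sid     string `json:"sid"`
	Version string `json:"version"`
	Type    string `json:"type"`
}

// DecodeProxyPoll is the newer-style API: parse once, return the whole struct.
func DecodeProxyPoll(data []byte) (*ProxyPoll, error) {
	var p ProxyPoll
	if err := json.Unmarshal(data, &p); err != nil {
		return nil, fmt.Errorf("error decoding proxy poll: %w", err)
	}
	return &p, nil
}

// DecodeProxyPollFields mimics the older style that returns each field
// individually. Keeping it (written in terms of the new function) means
// downstream callers keep compiling across the change.
//
// Deprecated: use DecodeProxyPoll instead.
func DecodeProxyPollFields(data []byte) (sid, version, msgType string, err error) {
	p, err := DecodeProxyPoll(data)
	if err != nil {
		return "", "", "", err
	}
	return p.Sid, p.Version, p.Type, nil
}

func main() {
	p, err := DecodeProxyPoll([]byte(`{"sid":"abc","version":"1.3","type":"proxy-poll"}`))
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(p.Sid, p.Version, p.Type) // abc 1.3 proxy-poll
}
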