This event should work similarly to the Trading.OnNewTrade() event - we can call it many times, especially once per each game being added, and if we're adding many games through a bundle or app_license_request, we might quickly end up with 30+ badge parsing tasks queued, which doesn't make much sense considering that just 2 are sufficient for an update.
Correct it in a way that makes both me and Steam servers happy.
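A minimal sketch of the coalescing idea, with entirely made-up names (OnNewItemsNotification(), ParseBadges()) rather than ASF's actual code - no matter how many games trigger the event, at most one parse runs and at most one more gets queued:

using System.Threading;
using System.Threading.Tasks;

internal static class BadgeParser {
	// Everything here is illustrative: at most one parsing task runs at a time,
	// and at most one more is kept queued - any further triggers are dropped,
	// since the queued run will already see the newest state, so N triggers
	// collapse into at most 2 parses
	private static readonly SemaphoreSlim RunningSemaphore = new SemaphoreSlim(1, 1);
	private static int QueuedRuns; // 0 or 1

	internal static async Task OnNewItemsNotification() {
		// If a run is already queued, this trigger adds nothing new
		if (Interlocked.CompareExchange(ref QueuedRuns, 1, 0) != 0) {
			return;
		}

		await RunningSemaphore.WaitAsync().ConfigureAwait(false);

		try {
			Interlocked.Exchange(ref QueuedRuns, 0); // we're the active run now
			await ParseBadges().ConfigureAwait(false);
		} finally {
			RunningSemaphore.Release();
		}
	}

	private static Task ParseBadges() => Task.Delay(1000); // placeholder for the real work
}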
The Gifts/Login limiter is actually working decently, because the response is nearly instant and fast enough not to worry about in the long run.
With inventory, things are entirely different, as inventory fetching might take a very long time, and while fetching one inventory, we might already run out of our delay and start fetching another one.
This is not a big deal functionality-wise, as parsing multiple inventories concurrently is nothing new for ASF, but Steam Community actually counts the number of requests, and our inventory function might ask for multiple pages during execution. This could quickly lead to 10+ inventory requests being sent concurrently for too many accounts at once, as we can predict neither how long a request will take, nor how many sub-requests we will make during a single fetch.
This means that for optimal performance in terms of rate-limiting, we must limit ASF to one inventory request at a time, with a mandatory InventoryLimiterDelay before asking for another one.
This can degrade performance of previously fast !loot requests on multiple accounts at once (especially with bigger inventories), but it will also significantly decrease the chance of getting rate-limited and requests failing.
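One possible shape of such a limiter, as a sketch with assumed names (only InventoryLimiterDelay refers to the real config property, everything else is made up for illustration):

using System;
using System.Threading;
using System.Threading.Tasks;

internal static class InventoryLimiter {
	// A single global semaphore shared by all bots, so only one inventory
	// fetch is in-flight at any given moment
	private static readonly SemaphoreSlim InventorySemaphore = new SemaphoreSlim(1, 1);

	// Stands in for the InventoryLimiterDelay config property
	private static readonly TimeSpan InventoryLimiterDelay = TimeSpan.FromSeconds(3);

	internal static async Task<T> FetchInventory<T>(Func<Task<T>> fetchFunction) {
		await InventorySemaphore.WaitAsync().ConfigureAwait(false);

		try {
			return await fetchFunction().ConfigureAwait(false);
		} finally {
			// Don't release right away - hold the slot for InventoryLimiterDelay,
			// so the next fetch starts only after the mandatory pause
			_ = Task.Run(async () => {
				await Task.Delay(InventoryLimiterDelay).ConfigureAwait(false);
				InventorySemaphore.Release();
			});
		}
	}
}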
Previously we calculated amounts (and therefore, differences) based on cards only, which means that a user could earn some extra 'points' in our calculation for giving us many trash cards that we miss, and in exchange for that swap cards from some other game that would normally result in a non-neutral difference on that game's basis, but a neutral+ one on the overall basis (which we operated on).
This would not cause any serious problem, because ASF ensures that the trade is fair before calculating actual neutrality, but it could result in accepting a trade that is neutral+ only at first sight, and non-neutral+ when we take a closer look at what we're actually swapping.
The logic was enhanced to handle differences on a per-game basis now, taking the minimum into account, so all swaps now have to be neutral+ for each game separately.
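A simplified sketch of what per-game checking could look like - types and the 'score' heuristic are assumptions for illustration, not ASF's actual trade logic:

using System.Collections.Generic;
using System.Linq;

internal static class TradeLogic {
	// Items are represented as (AppID, ClassID) pairs; the per-game 'score' is
	// the minimum amount across card classes we hold for that game, roughly
	// reflecting how many full sets we can still complete
	internal static bool IsTradeNeutralOrBetter(
		IReadOnlyCollection<(uint AppID, ulong ClassID)> itemsBefore,
		IReadOnlyCollection<(uint AppID, ulong ClassID)> itemsAfter
	) {
		foreach (uint appID in itemsBefore.Select(item => item.AppID).Distinct()) {
			// Every game involved must end up neutral+ on its own,
			// instead of only the overall sum being neutral+
			if (GetGameScore(itemsAfter, appID) < GetGameScore(itemsBefore, appID)) {
				return false;
			}
		}

		return true;
	}

	private static int GetGameScore(IEnumerable<(uint AppID, ulong ClassID)> items, uint appID) {
		List<int> amounts = items.Where(item => item.AppID == appID)
			.GroupBy(item => item.ClassID)
			.Select(group => group.Count())
			.ToList();

		return amounts.Count > 0 ? amounts.Min() : 0;
	}
}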
Now this is a nice bug that was found accidentally by ArchiBoT...
ReaderWriterLockSlim() is a very decent solution, but it's thread-based, and we're using our ConcurrentHashSet in a mixed async/sync context. This means that if we use something like:
foreach (var item in concHashSet) {
	await AnythingAsync().ConfigureAwait(false);
}
It's totally possible that we'll acquire the read lock on thread 1 and release it on thread 2, which will lead to an RWLock exception => System.Threading.SynchronizationLockException: The read lock is being released without being held.
Fortunately it looks like we didn't have any scenario like this in ASF, as it was possible only when we async/await while enumerating over ConcurrentHashSet, so that specific bug didn't affect the ASF codebase (yet). Still, I must fix this, as the current implementation is not thread-safe, so our HashSet is in fact not concurrent in the first place.
I analyzed possible solutions and there are basically 3: wrapping around ConcurrentDictionary, replacing the lock with SemaphoreSlim, or using the third-party AsyncReaderWriterLock from StephenCleary. SemaphoreSlim entirely kills the concept of multiple readers / one writer and could affect performance negatively; moreover, it doesn't support the upgradeable lock scenario we have with ReplaceIfNeededWith(). ConcurrentDictionary would be nice if it didn't come with that awful memory hit from storing a mandatory, pointless value, plus I don't really like the concept of wrapping around a concurrent dictionary if I could simply use it right away and drop my ConcurrentHashSet entirely. AsyncReaderWriterLock seems to be really well written, works on Mono and should be compatible with .NET Core in the future, so we should go for it, as it's the best bet both performance-wise and memory-wise.
This brings another package dependency and changes the backend of ConcurrentHashSet a bit.
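A rough sketch of the idea behind the new backend (assuming the Nito.AsyncEx package for StephenCleary's AsyncReaderWriterLock); this is simplified for illustration, not the actual ASF implementation:

using System.Collections.Generic;
using System.Threading.Tasks;
using Nito.AsyncEx;

// The backing HashSet<T> is guarded by AsyncReaderWriterLock, which can be
// safely held across await points, unlike the thread-based ReaderWriterLockSlim
internal sealed class ConcurrentHashSet<T> {
	private readonly HashSet<T> BackingSet = new HashSet<T>();
	private readonly AsyncReaderWriterLock Lock = new AsyncReaderWriterLock();

	internal async Task<bool> AddAsync(T item) {
		using (await Lock.WriterLockAsync().ConfigureAwait(false)) {
			return BackingSet.Add(item);
		}
	}

	internal async Task<bool> ContainsAsync(T item) {
		using (await Lock.ReaderLockAsync().ConfigureAwait(false)) {
			return BackingSet.Contains(item);
		}
	}

	internal async Task<HashSet<T>> SnapshotAsync() {
		// Enumeration happens over a copy, so callers can freely await
		// inside their foreach without holding the lock at all
		using (await Lock.ReaderLockAsync().ConfigureAwait(false)) {
			return new HashSet<T>(BackingSet);
		}
	}
}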
Time to enforce some common file layout, as the general mess started to annoy me. Sorry in advance for people using custom forks and having merge conflicts, this will help everybody in the long run.
It seems that even if Steam responds with e.g. an internal server error (500), the trade gets accepted 20-30 seconds later, which doesn't make ANY sense - but then, does anything in Steam?
Let's improve the logic a bit by returning a result even if we in fact failed in the Accept/Decline function - this will allow us to deal with confirmations even if a 'failed' trade in fact succeeded.
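A rough sketch of the pattern with assumed names - even on a reported failure we still return enough information for confirmation handling:

using System.Threading.Tasks;

internal static class TradeHandler {
	// Even when Steam reports a failure, we still return the trade details to
	// the caller, because the trade may have gone through on Steam's side
	// despite the error response
	internal static async Task<(bool Success, ulong TradeID)> AcceptTrade(ulong tradeID) {
		bool success = await SendAcceptRequest(tradeID).ConfigureAwait(false);

		// Returning the trade ID in both cases lets the caller still handle
		// the 2FA confirmation if the 'failed' trade actually succeeded
		return (success, tradeID);
	}

	private static Task<bool> SendAcceptRequest(ulong tradeID) => Task.FromResult(false); // placeholder
}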
It's really good for keeping memory low, but due to the limited lifespan of those objects and the fact that we should focus on performance and not memory, it's better to avoid the extra overhead and let GC do its job, by dropping references to those objects ASAP.
TrimExcess() is still being used for global objects, as those have a longer (possibly infinite) lifespan and can benefit from it.
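Illustrative only, with made-up names - the same rule in code form: trim long-lived collections, leave short-lived ones to the GC:

using System.Collections.Generic;

internal static class TrimExample {
	// Long-lived, global collection - trimming unused capacity pays off,
	// since the object lives for the whole process lifetime
	private static readonly List<string> GlobalCache = new List<string>(1024);

	internal static void CompactGlobalCache() {
		GlobalCache.TrimExcess();
	}

	// Short-lived, local collection - no TrimExcess() here; the set is dropped
	// right after use, so GC reclaims it soon enough and trimming would only
	// add overhead on the hot path
	internal static int CountDistinct(IEnumerable<string> input) {
		HashSet<string> seen = new HashSet<string>(input);
		return seen.Count;
	}
}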