A research paper with implications for IOTA's future implementation, entitled "On Congestion Control for Distributed Ledgers in Adversarial IoT Networks", was released as a pre-print seven months ago. The authors listed on the paper include three members of the Dyson School of Design Engineering at Imperial College London and two researchers at the IOTA Foundation.
The three members from Imperial are Andrew Cullen (a 2020 IBM PhD fellow), Pietro Ferraro (research associate), and Robert Shorten (professor and highly accomplished researcher). This group has been extremely active in the mobility space for the past two years; the titles of their papers speak for themselves:
1) Distributed Ledger Technology for Smart Mobility: Variable Delay Models (2019)
2) Ad-hocChain: Cooperative Sharing and Trading Infrastructure for Electric Vehicle Charging Networks (2019)
3) Spatial Positioning Token (SPToken) for Smart Mobility (2019)
4) On the Resilience of DAG-based Distributed Ledgers in IoT Applications (2020)
5) On Congestion Control for Distributed Ledgers in Adversarial IoT Networks (2020)
Professor Shorten himself has a resume dripping with experience and exuding gravitas. He was co-founder and director of the Hamilton Institute, a department head at IBM Research (leading the Control and Optimization Team at the Smart City Research lab), and is now a professor of cyber-physical systems in the engineering school (with affiliations at both Imperial College London and University College Dublin). He lists his interests as: "Smart Mobility and Smart Cities; Control Theory and Dynamics; Hybrid Dynamical Systems; Networking; Linear Algebra; Sharing Economy; DAG based distributed ledgers."
If that's not enough, he has written three textbooks: one on distributed resource allocation with a focus on the AIMD algorithm, another on electric vehicle networks, and most recently a book about the mathematics of the sharing economy.
He's such a broad thinker that his engineering expertise was sought out just days ago for a Bloomberg article on the Covid-19 pandemic. Readers might also be familiar with his appearances in other mainstream outlets, which have elicited his thoughts on topics as diverse as augmented reality, digital twins, vehicle startups, climate, parked-cars-as-a-service, and many more.
On the IOTA Foundation side of this paper are Dr. Sanders and Dr. Vigneri.
Dr. Sanders is a mathematician who has researched commutative algebra and category theory, and now translates his skills in pure mathematics into applied mathematics as a researcher for IOTA. Dr. Vigneri holds a PhD in mobile communications, previously worked as a researcher at Huawei, and focuses on network optimization, modeling, and performance analysis at IOTA. Both are extremely well published in their fields of expertise.
With that background, let's delve into their recent Congestion Control paper to get a glimpse into the future of IOTA's implementation.
The authors waste no time in identifying a key challenge for directed acyclic graphs (and all DLTs): every node must receive the same information, regardless of its environment, even amidst dishonest nodes. The topic of this paper is an algorithm that ensures the dissemination of transactions among nodes is fair and evenly distributed.
While the algorithm borrows from concepts in QoS (quality of service, the networking discipline concerned with bandwidth, latency, jitter, and error rate) and TCP (one of the main internet protocols), IoT networks are unique in that nodes cannot rely upon packet acknowledgments or congestion notifications the way nodes on the traditional internet can.
Therefore, there must be a way to control for congestion in a decentralized and trustless manner in order to achieve a fully functioning IoT network.

IOTA enthusiasts know by now just how large the IoT sector is expected to get. Security, data transfer, speed, and scalability are a suite of issues that other DLTs have trouble solving. Because of this, and without a viable alternative, IoT architectures have settled for centralized solutions. The authors are wise to address this issue immediately.
They proceed to give a brief definition of DLT, in which they stress the trustless aspect of such a network and the fact that consensus among nodes must be reached without a centralized entity. While this may be obvious to those who have followed the development of various DLTs, it's helpful to the scientific community (the paper's audience) to avoid taking any explanatory shortcuts.
In their paragraph discussing the shortcomings of existing blockchains in terms of suitability for the IoT, one of the conclusions is that, due to the nature of blockchain consensus mechanisms, blockchain networks tend toward larger "centralized" nodes, which prevents small IoT devices from joining the network.

The tradeoff between blockchains and DAGs: DAGs allow for much greater throughput, but at the cost of needing a new mechanism to control network congestion. The hunt is on for a resource other than computing power by which to allocate access (in Bitcoin it's computing power; in ETH 2.0 it will be economic stake). This paper proposes reputation as that resource.

The term given to the bottleneck here is "writing", which might be reminiscent of the block-size bottleneck in blockchains. Nothing changes between the two architectures in terms of every node needing to validate every transaction and store the global ledger; but because throughput in a DAG isn't arbitrarily limited by block size (itself secondary to the compute-power constraints that determine when the next block is mined), an algorithm is proposed as the constraint instead.
This algorithmic solution seeks to balance maximum possible throughput with minimum possible delays. Specifically, the algorithm aims to achieve three things: consistency, fairness, and security.

From there, the reader is given a tour of related work in the field of networking before getting into a node model and congestion control algorithm. The authors consider placing this work in the context of existing literature to be a challenge, since DLT congestion control sits somewhere between established topics like QoS, TCP, network security, and gossip protocols.
For example, van Renesse et al published a 2008 paper in which a "flow algorithm" was proposed for replicated database nodes to adaptively control update rates so as to avoid backlogs. But this algorithm is unsuited to a trustless network since 1) congestion is detected by "overflows of updates in the buffer" (which could result in inconsistency across nodes in a trustless network), and 2) the "flow algorithm" assumes that fairness is achieved through communication with neighboring nodes (again, not a reasonable assumption in a trustless network).
Another example is the 2002 paper by Grossman et al on the widely known DiffServ, alongside the 2000 Bernet et al paper about IntServ. These are two great traffic control mechanisms; however, they rely on "a backbone of trusted routers". WFQ, DRR, and FQ-CoDel are some other queuing/packet schedulers that get mentioned. These work well with TCP, where congested routers are able to announce their congestion through acknowledgments or ECNs (explicit congestion notifications). None of these methods can be used in a trustless network.

Now we get into the proposed congestion control solution for IOTA. Congestion will be controlled with a two-pronged approach:
First, nodes will only be able to write a quantity of transactions into the network that's proportional to the amount of reputation they have built: higher-reputation nodes will be capable of sending a higher proportion of total network transactions than lower-reputation nodes (see the toy illustration below). The assumption here is that malicious nodes will be prevented from earning significant reputation.
Second, rate setting allows a node to modulate its transaction throughput, on the condition that it doesn't create congestion anywhere else in the network.
Let's take these two components one at a time.
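
First, though, a quick toy illustration of what the reputation-proportional writing quota amounts to. This is a minimal sketch in Python, not the paper's algorithm; the function and parameter names are invented for this example:

```python
def allowed_rates(reputations, network_capacity):
    """Split total network throughput among nodes in proportion to
    reputation. A hypothetical helper, not the paper's notation."""
    total_rep = sum(reputations.values())
    return {node: network_capacity * rep / total_rep
            for node, rep in reputations.items()}

# Example: a node with 3x the reputation may write 3x the transactions.
print(allowed_rates({"node_a": 3.0, "node_b": 1.0}, network_capacity=100.0))
# {'node_a': 75.0, 'node_b': 25.0}
```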

This proposed scheduling component adopts an algorithm akin to the aforementioned DRR, and looks similar to the FQ-CoDel implementation. The notation of the algorithm itself is probably too complicated to include here, so we'll point the reader to the full listing in the paper and offer a simplified sketch instead.

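For readers who prefer code to notation, here is a minimal sketch of a DRR-style scheduler whose per-node quantum is proportional to reputation. This is an illustrative simplification in Python, not the authors' implementation; all class, method, and parameter names are invented for this example:

```python
import collections

class DRRScheduler:
    """Deficit Round Robin over per-node transaction queues, with each
    node's quantum proportional to its reputation (hypothetical sketch)."""

    def __init__(self, reputations, quantum_per_rep=1.0):
        self.reputations = reputations          # node_id -> reputation score
        self.quantum_per_rep = quantum_per_rep  # work units granted per unit of reputation
        self.queues = {n: collections.deque() for n in reputations}
        self.deficits = {n: 0.0 for n in reputations}

    def enqueue(self, node_id, tx, work=1.0):
        """Queue a transaction issued by node_id, with its processing cost."""
        self.queues[node_id].append((tx, work))

    def schedule_round(self):
        """One DRR round: each backlogged node's deficit grows with its
        reputation, and its transactions are forwarded while the deficit
        covers their work. Higher reputation => more throughput per round."""
        scheduled = []
        for node_id, queue in self.queues.items():
            if not queue:
                continue
            self.deficits[node_id] += self.quantum_per_rep * self.reputations[node_id]
            while queue and self.deficits[node_id] >= queue[0][1]:
                tx, work = queue.popleft()
                self.deficits[node_id] -= work
                scheduled.append(tx)
            if not queue:
                self.deficits[node_id] = 0.0  # classic DRR resets emptied queues
        return scheduled
```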
The next component, the Rate Setting Algorithm, is much more difficult to understand. If the earlier summary made it hard to follow, here's an easier way of thinking about it. When the network is fully saturated, the first algorithm (the scheduling algorithm) determines which nodes get to issue their transactions. However, when the network isn't fully saturated with transactions, as is expected to be the case at least some of the time, nodes with low reputation should be able to issue more than their typical allotment of transactions, since they won't be causing undue network congestion. The rate-setting algorithm for what the authors call "best-effort" nodes borrows concepts from TCP and follows the rules of AIMD (remember from above that Professor Shorten wrote a textbook on AIMD!). The crucial insight here is that "local congestion at a node is all that is required to indicate congestion elsewhere in the network." The full algorithm is laid out in the paper; a simplified sketch follows.

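Again, as an illustrative sketch rather than the paper's exact rule set: an AIMD rate setter driven purely by local congestion might look like this in Python (parameter names and default values are invented here):

```python
class AIMDRateSetter:
    """Additive-increase / multiplicative-decrease rate setting, in the
    spirit of the TCP-inspired rule described above. A hypothetical
    sketch, not the paper's notation."""

    def __init__(self, rate=1.0, alpha=0.1, beta=0.5, min_rate=0.01):
        self.rate = rate          # transactions per second this node issues
        self.alpha = alpha        # additive increase step
        self.beta = beta          # multiplicative decrease factor
        self.min_rate = min_rate  # floor so a node is never silenced entirely

    def update(self, local_congestion):
        """Local congestion (e.g. a growing backlog in this node's own
        inbox) is the only signal needed: back off multiplicatively when
        congested, otherwise probe for more bandwidth additively."""
        if local_congestion:
            self.rate = max(self.min_rate, self.rate * self.beta)
        else:
            self.rate += self.alpha
        return self.rate
```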
To validate the algorithms, the team wrote a Python script to conduct a Monte Carlo analysis. A number of graphs are included in the paper; the most striking of them is worth viewing in the original.
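
To give a flavor of what such a validation might look like, here is a toy Monte Carlo driver (a reconstruction for illustration only, not the authors' script) that reuses the AIMDRateSetter sketch from above:

```python
import random
import statistics

def run_trial(num_nodes, steps=500, capacity=20.0):
    """One randomized trial: nodes issue transactions at AIMD-controlled
    rates, and congestion is flagged whenever total demand exceeds the
    (assumed) network capacity."""
    setters = [AIMDRateSetter(rate=random.uniform(0.5, 2.0))
               for _ in range(num_nodes)]
    served = 0.0
    for _ in range(steps):
        demand = sum(s.rate for s in setters)
        congested = demand > capacity
        served += min(demand, capacity)
        for s in setters:
            s.update(congested)
    return served / steps  # average network throughput per step

# Monte Carlo: average utilization over many independent random trials.
trials = [run_trial(num_nodes=10) for _ in range(100)]
print(f"mean throughput: {statistics.mean(trials):.2f} of capacity 20.0")
```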
And with that, this team of researchers has identified a solution to the network congestion problem inherent in DAG architectures that don't depend on compute resources to limit throughput.
It is likely that few people in the general cryptocurrency community understand just how rigorous the math, algorithms, network protocols, and programming are for projects that are attempting to build something new. IOTA is decidedly not a dApp token instantiated as a financial engineering tool, like so many of the others it's probably lumped in with.
IOTA has set out to pave a brand new path in the world of computer science, and as constant drama engulfs this entire space, the scientists are quietly hard at work. Momentum is building.