Finalized all exercises.

Added final report.
This commit is contained in:
Filipe Rodrigues 2023-12-17 18:10:44 +00:00
parent 2a7520ae80
commit cd7e47ef6c
22 changed files with 115 additions and 111 deletions

View File

@ -15,6 +15,8 @@
"DTMC",
"ggplot",
"ggsave",
"kbit",
"kbits",
"Khinchine",
"kleinrock",
"linebreak",
@ -45,5 +47,11 @@
"ylim",
"ymax",
"ymin"
],
"grammarly.selectors": [
{
"language": "typst",
"scheme": "file"
}
]
}

BIN
DDRS_L2_G2.pdf (Stored with Git LFS) Normal file

Binary file not shown.

View File

@ -14,7 +14,7 @@ calc_data <- function(sourcetype, arrival_rates, packet_size, priorities) {
)
min_arrival_rate <- min(sapply(Flows, function(flow) flow$arrivalrate))
endTime <<- 10000 * (1 / min_arrival_rate)
sim_res <- lapply(1:20, function(run_idx) {
sim_res <- lapply(1:10, function(run_idx) {
cat(sprintf("Run #%d\n", run_idx))
res <- ppl()

View File

@ -1,7 +1,7 @@
source("code/7.R")
source("code/ppl/ppl.R")
arrival_rates1 <- c(60, 80)
arrival_rates1 <- c(20, 80)
set.seed(0)
data <- lapply(arrival_rates1, function(arrival_rate1) {

View File

@ -1,7 +1,7 @@
source("code/7.R")
source("code/ppl/ppl.R")
arrival_rates1 <- c(60, 80)
arrival_rates1 <- c(20, 80)
set.seed(0)
data <- lapply(arrival_rates1, function(arrival_rate1) {

View File

@ -3,6 +3,7 @@ source("code/util.R")
stopping_condition <- 10000
set.seed(0)
mm1_stats <- lapply(1:25, \(...) calc_stats_mm1(0.5, 1, 0.99, 101, 0.01, stopping_condition))
mm1_avg_delay <- calc_ci(0.95, sapply(mm1_stats, \(stat) stat$avg_delay))

View File

@ -1,26 +1,26 @@
#import "/typst/util.typ" as util: indent_par
#indent_par[The 2-DTMC process is capable of performing both what the Bernoulli process can, as well as another interesting behavior. The following figure 1 illustrates this:]
#indent_par[The 2-DTMC process, illustrated in figure 1, is capable of reproducing everything the Bernoulli process can, as well as exhibiting another interesting behavior.]
#figure(
image("/output/1.svg", width: 50%),
caption: [2-DTMC process]
)
#indent_par[Where α, β are the state transition probabilities. For example, state 0 has probability $α$ of staying on state 0, and probability $1 - α$ of moving to state 1. Meanwhile state 1 has probability $1 - β$ of staying on state 1 and probability $β$ of moving to state 0.]
#indent_par[Here, α and β are the state transition probabilities. For example, state 0 has probability $α$ of staying in state 0, and probability $1 - α$ of moving to state 1. Meanwhile, state 1 has probability $1 - β$ of staying in state 1 and probability $β$ of moving to state 0.]
#indent_par[Based on the professor's code in `dtmc_bernoulli_plot.R`, we've created graphs to illustrate the different behaviors exhibited by the 2-DTMC.]
==== a. Interesting behavior
#indent_par[When the α and β parameters are on opposite sides of the spectrum (such as α close to 0.0 and β close to 1.0, or vice-versa), the 2-DTMC process exhibits an interesting behavior, such as shown in figure 2:]
#indent_par[When the α and β parameters are on opposite sides of the spectrum (such as α close to 0.0 and β close to 1.0, or vice-versa), the 2-DTMC process exhibits an interesting behavior, shown in figure 2:]
#figure(
image("/output/1b (α=0.9, β=0.1).svg", width: 50%),
image("/output/1b (α=0.9, β=0.1).svg", width: 80%),
caption: [2-DTMC and Bernoulli processes (α=0.9, β=0.1)]
)
#indent_par[Unlike the Bernoulli process, for which each event is independent from the previous, the 2-DTMC "remembers" it's previous state, ensuring that both states are very stable, not wanting to transition to the other one.]
#indent_par[Unlike the Bernoulli process, for which each event is independent of the previous, the 2-DTMC "remembers" its previous state, ensuring that both states are very stable, not wanting to transition to the other one, leading to this interesting behavior.]
#pagebreak()
@ -30,24 +30,22 @@
#grid(
columns: (1fr, 1fr, 1fr),
figure(
image("/output/1a (α=0.1, β=0.1).svg", width: 80%),
pad(1em, figure(
image("/output/1a (α=0.1, β=0.1).svg", width: 100%),
caption: [2-DTMC and Bernoulli processes (α=0.1, β=0.1)]
),
figure(
image("/output/1a (α=0.5, β=0.5).svg", width: 80%),
)),
pad(1em, figure(
image("/output/1a (α=0.5, β=0.5).svg", width: 100%),
caption: [2-DTMC and Bernoulli processes (α=0.5, β=0.5)]
),
figure(
image("/output/1a (α=0.9, β=0.9).svg", width: 80%),
)),
pad(1em, figure(
image("/output/1a (α=0.9, β=0.9).svg", width: 100%),
caption: [2-DTMC and Bernoulli processes (α=0.9, β=0.9)]
)
))
)
#indent_par[When α and β are close to 0.0 or 1.0, one of the states will become very stable while the other state will become very unstable, quickly wanting to transition to the other state.]
#indent_par[When α and β are close to 0.5, both states are very unstable.]
#indent_par[The existence of one or more unstable states imply that the system can no longer as easily "remember" it's previous state and thus the probability of finding the system in a given state can now be approximated by a bernoulli process]
#pagebreak()
#indent_par[The existence of one or more unstable states implies that the system can no longer "remember" its previous state as easily, and thus the probability of finding the system in a given state can now be approximated by a Bernoulli process.]
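#indent_par[As an illustration of how such a trace can be generated, the following is a minimal R sketch of a 2-DTMC sample path. The report's actual plots come from the professor's `dtmc_bernoulli_plot.R`, which is not shown in this diff:]
```R
# Minimal sketch of a 2-DTMC sample path; alpha and beta as in figure 1.
# State 0 stays with probability alpha; state 1 stays with probability 1 - beta.
sim_2dtmc <- function(alpha, beta, n = 100) {
  states <- integer(n)
  state <- 0
  for (i in seq_len(n)) {
    states[i] <- state
    stay <- if (state == 0) alpha else 1 - beta
    if (runif(1) >= stay) state <- 1 - state  # flip to the other state
  }
  states
}
```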

View File

@ -1,6 +1,6 @@
#import "/typst/util.typ" as util: indent_par, code_figure
#indent_par[*NGINX* is a web server that is capable, among many other things, of being a load balancer. To achieve this, it employs several balancing algorithms, which will be detailed next.]
#indent_par[*NGINX* is a web server that is capable, among many other things, of being a load balancer. To achieve this, it employs several load-balancing algorithms, which will be detailed next.]
#indent_par[Each of these algorithms performs a tradeoff to achieve better average response times under different scenarios. To be able to compare them, we'll enumerate the situation each algorithm is best suited for.]
@ -10,13 +10,13 @@
- $S_1$, $S_2$, $S_3$, $S_1$, $S_2$, $S_3$, $S_1$, ...
#indent_par[This means that each request is treated equally, and thus the algorithm is best suited for when requests have a low variability in terms of workload.]
#indent_par[This means that each request is treated equally, and thus the algorithm is best suited for when requests have low variability in terms of workload.]
==== 2. Least connections
#indent_par[When using least connections, a request will be sent towards the server with the lowest number of active connections. However, *NGINX* also takes into account the relative computing capacity of each server to determine which server to route the request to. This means the algorithm is better described as weighted least connections.]
#indent_par[When using the least connections algorithm, a request will be sent to the server with the lowest number of active connections. However, *NGINX* also takes into account the relative computing capacity of each server to determine which server to route the request to. This means the algorithm is better described as weighted least connections.]
#indent_par[This algorithm is best suited for applications where requests are distributed with a relatively high variance, since the balancer will attempt to distribute new packets to servers which aren't already highly loaded.]
#indent_par[This algorithm is best suited for applications where requests are distributed with relatively high variance since the balancer will attempt to distribute new packets to servers that aren't already highly loaded.]
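#indent_par[Conceptually, the weighted selection can be sketched in R as picking the server whose active-connection count is smallest relative to its capacity weight. The values below are made up for illustration; this is not NGINX's actual implementation:]
```R
# Illustrative sketch of weighted least connections (values are made up).
active_connections <- c(4, 7, 3)  # current connections per server
weights <- c(1, 2, 1)             # relative computing capacity
target <- which.min(active_connections / weights)  # server 3 wins here
```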
==== 3. Least time
@ -26,9 +26,9 @@
==== 4. Hash
#indent_par[This algorithm routes packets deterministically based on some it's properties, such as the ip address of the client who sent it.]
#indent_par[This algorithm routes packets deterministically based on some of its properties, such as the IP address of the client who sent it.]
#indent_par[It is closely related to a random algorithm, but it ensures that if a packet has the same property, it can be routed to the same server. This means that if using the client ip address as the hash, for example, a client who is sending many heavy packets will consistently get the same server, and so other clients which aren't associated to the same server won't have any response time impact.]
#indent_par[It is closely related to a random algorithm, but it ensures that packets with the same property are routed to the same server. This means that, if using the client IP address as the hash, for example, a client who is sending many heavy packets will consistently get the same server, and so other clients who aren't associated with the same server won't see any response time impact.]
==== 5. Random with two choices

View File

@ -4,10 +4,10 @@
- Added a `queues_quantum` vector to hold the quantum of each queue
- Added a `queues_credit` vector to hold the current credits of each queue.
- Whenever a queue becomes empty, we reset it's credits to 0.
- When receiving a departure event, if all servers are full, we check if any of the left queues still has enough credits for the next packet, and if so, we serve it. Otherwise, we give all queues credits based on their quantum and find the first queue that can serve the next packet. If none are able to, we give them credits again and keep repeating until there is a suitable queue.
- Whenever a queue becomes empty, we reset its credits to 0.
- When receiving a departure event, if all servers are full, we check if any of the queues left in the cycle still have enough credits for the next packet, and if so, we serve it. Otherwise, we give all queues credits based on their quantum and find the first queue that can serve the next packet. If none are able to, we give them credits again and keep repeating until there is a suitable queue.
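#indent_par[Before the full implementation below, the credit logic can be summarized in a simplified R sketch. This is not the full simulator in `code/11.R`; `head_sizes` is an assumed vector holding each queue's next packet size:]
```R
# Pick the next queue to serve: top up credits by each queue's quantum
# until some queue can "pay" for its head-of-line packet.
pick_queue <- function(credits, quanta, head_sizes) {
  repeat {
    ok <- which(credits >= head_sizes)  # queues with enough credit
    if (length(ok) > 0)
      return(list(queue = ok[1], credits = credits))
    credits <- credits + quanta         # no suitable queue: give credits again
  }
}
```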
#indent_par[The following code 9 is our implementation:]
#indent_par[The following code 10 is our implementation:]
#code_figure(
columns(1, text(size: 0.7em, raw(read("/code/11.R"), lang: "R", block: true))),
@ -16,7 +16,7 @@
#pagebreak()
#indent_par[We've chosen to simulate several scenarios in which:]
#indent_par[We've chosen to simulate several scenarios with the following values:]
- Arrival rates: $0.75$, $1.50$ and $2.25$.
- Link capacity: $1000 "bits" dot s^(-1)$
@ -53,7 +53,7 @@
)),
)
#indent_par[In all images we see a (mostly) vertical gradient, implying that the throughput values don't change much across the x axis (average packet sizes). From this, we conclude that the relative quantum values seem to mostly determine the throughput of each queue, regardless of the queue's relative average packet size.]
#indent_par[In all images we see a (mostly) vertical gradient, implying that the throughput values don't change much across the x-axis (average packet sizes). From this, we conclude that the relative quantum values seem to mostly determine the throughput of each queue, regardless of the queue's relative average packet size.]
#indent_par[When the arrival rates are different, the vertical gradient starts to break at the extremes. However, outside of these parts, the pattern still fits extremely well.]

View File

@ -16,7 +16,7 @@
==== c. Scheduling
#indent_par[Scheduling, also called dequeuing, is the third step in the Qos chain. After entering it's assigned queue, a packet will be scheduled for processing. This procedure will employ a policy that will determine when each packet will be served.]
#indent_par[Scheduling, also called dequeuing, is the third step in the QoS chain. After entering its assigned queue, a packet will be scheduled for processing. This procedure employs a policy that determines when each packet will be served.]
#indent_par[In detail, the 7x50 _Nokia_ routers use a policy of strict priority, with two different levels of priorities. However, there can be multiple queues with the same priority. In this case, once a packet is assigned a priority, it will be distributed according to a round-robin policy to the respective queues.]

View File

@ -1,14 +1,14 @@
#import "@preview/tablex:0.0.6": tablex, rowspanx, colspanx
#import "/typst/util.typ" as util: indent_par, code_figure
#indent_par[The following code 10 is our implementation of the Kleinrock approximation. It takes the link capacities, flows and packet size similarly to the `pnet` simulator.]
#indent_par[The following code 11 is our implementation of the Kleinrock approximation. It takes the link capacities, flows and packet size similarly to the `pnet` simulator.]
#code_figure(
text(size: 0.8em, raw(read("/code/kleinrock.R"), lang: "R", block: true)),
caption: "Code for Kleinrock approximation",
)
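#indent_par[Since the implementation is read from `/code/kleinrock.R` and not reproduced in this diff, the following is a minimal sketch of what such a computation can look like, assuming M/M/1 links and the same `Flows`/`LinkCapacities` format as the `pnet` examples later in the report:]
```R
# Sketch of the Kleinrock independence approximation: each link is treated
# as an independent M/M/1 queue. Capacities in bit/s; flows are lists of
# the form list(rate, packetsize, route).
kleinrock_sketch <- function(link_capacities, flows, packet_size) {
  mu <- 1 / packet_size  # service rate per unit of capacity (packets per bit)
  link_rates <- rep(0, length(link_capacities))
  for (flow in flows)
    link_rates[flow$route] <- link_rates[flow$route] + flow$rate
  link_delay <- 1 / (mu * link_capacities - link_rates)  # M/M/1 delay per link
  flow_delays <- sapply(flows, function(f) sum(link_delay[f$route]))
  total_rate <- sum(sapply(flows, function(f) f$rate))
  network_delay <- sum(link_rates * link_delay) / total_rate
  list(flow_delays = flow_delays, network_delay = network_delay)
}
```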
#indent_par[To test the simulator, we've used the following network from the course slides (`pktnet`, page 14).]
#indent_par[To test the simulator, we've used the following network, in figure 21, from the course slides (`pktnet`, page 14).]
#figure(
image("/images/13-diagram.png", width: 75%),
@ -17,7 +17,7 @@
#pagebreak()
#indent_par[We developed the following code 11 to test our implementation and obtained the results in table 18:]
#indent_par[We developed the following code 12 to test our implementation and obtained the results in table 18:]
#code_figure(
text(size: 0.8em, raw(read("/code/13.R"), lang: "R", block: true)),
@ -34,7 +34,7 @@
colspanx(2)[ Average packet delay ($"μs"$) ],
rowspanx(2)[ Average packets (network) ],
[ Per flow ],
[ Per-flow ],
[ Network ],
[1],
@ -53,6 +53,4 @@
caption: [Results]
)
#indent_par[Our values are the same as calculated manually, and thus we conclude our script is valid.]
#pagebreak()
#indent_par[Our values are the same as when calculated manually, and thus we conclude our script is valid.]

View File

@ -1,7 +1,7 @@
#import "@preview/tablex:0.0.6": tablex, rowspanx, colspanx
#import "/typst/util.typ" as util: indent_par, code_figure
#indent_par[We used our previously developed Kleinrock script and the `pnet` simulator, running it 10 times and calculating 95% confidence intervals, acquiring the following results in table 18:]
#indent_par[We used our previously developed Kleinrock script and the `pnet` simulator, running it 10 times and calculating 95% confidence intervals, acquiring the following results in table 19:]
#figure(
pad(1em, tablex(
@ -41,6 +41,6 @@
caption: [Results]
)
#indent_par[Although the Kleinrock approximation never enters the confidence intervals obtained from the `pnet` simulator, for lower values of $ρ$, it is close to the confidence intervals. However for larger values of $ρ$, it starts to drift apart very noticeably.]
#indent_par[Although the Kleinrock approximation never enters the confidence intervals obtained from the `pnet` simulator, for lower values of $ρ$, it is close to the confidence intervals. However, for larger values of $ρ$, it starts to drift apart very noticeably.]
#pagebreak()

View File

@ -51,9 +51,9 @@
==== b.
#indent_par[We ran the `pnet` simulator 10 times, calculating 95% confidence intervals, and obtained the results in table 21:]
#indent_par[We ran the `pnet` simulator 10 times, calculating 95% confidence intervals, and obtained the results in table 21.]
#indent_par[The network average packet delay was calculated from each flow's average delay via the following formula:]
#indent_par[The network average packet delay was calculated from each flow's average delay via the following formula 16:]
$ W = (sum_i λ_i W_i) / (sum_j λ_j) $
@ -65,7 +65,7 @@ LinkCapacities <- replicate(7, 256 * 1000)
- ```R
Flows <- list(
list(rate = 215, packetsize = packet_size, route = c(1, 3, 6)),
list(rate = 64, packetsize = packet_size, route = c(2, 5)),
list(rate = 64 , packetsize = packet_size, route = c(2, 5)),
list(rate = 128, packetsize = packet_size, route = c(2, 5, 7)),
list(rate = 128, packetsize = packet_size, route = c(4))
)
@ -79,7 +79,6 @@ endTime <- 10000 * (1 / 64) # 156.25
columns: (auto, 1fr, 1fr),
align: center + horizon,
rowspanx(2)[ Flow ],
colspanx(2)[ Average packet delay ($"ms"$) ],
@ -103,19 +102,21 @@ endTime <- 10000 * (1 / 64) # 156.25
caption: [Results]
)
#indent_par[The results are quite different from the Kleinrock approximation calculated in 15.a. This is expected, as this is a more complex network than the Kleinrock approximation can handle.]
#pagebreak()
==== c.
#indent_par[In order to determine the optimal bifurcation path, we first determine that the flow through the link $2 -> 4$ and $2 -> 5$ must be equal. Given that the flow 4 already uses the link $2 -> 5$, we must account for it in the calculations.]
#indent_par[In order to determine the optimal bifurcation path, we first determine that the flows through the links $2 -> 4$ and $2 -> 5$ must be equal. Given that flow 4 already uses the link $2 -> 5$, we must account for it in the calculations.]
#indent_par[With this in mind, we reach the following equation system 17, where $l_24$ is the flow through link $2 -> 4$ and $l_25$ is the flow through link $2 -> 5$]
#indent_par[With this in mind, we reach the following system of equations in equation 17, where $l_"xy"$ is the flow through link $x -> y$.]
$ cases( l_24 + l_25 = 215, l_24 = l_25 + 128 ) $
#indent_par[Solving these, we reach $l_24 = 171.5$ and $l_25 = 43.5$]
#indent_par[Solving these, we reach $l_24 = 171.5$ and $l_25 = 43.5$.]
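#indent_par[For clarity, substituting the second equation into the first gives the intermediate step:]
$ (l_25 + 128) + l_25 = 215 => 2 l_25 = 87 => l_25 = 43.5 $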
#indent_par[In order to compare against our results in exercise 15.a, we used the Kleinrock script and got the following results in table 22.]
#indent_par[To compare with our results in exercise 15.a, we used the Kleinrock script again and got the following results in table 22.]
#figure(
pad(1em, tablex(
@ -152,7 +153,7 @@ $ cases( l_24 + l_25 = 215, l_24 = l_25 + 128 ) $
caption: [Results]
)
#indent_par[We can be sure that we have obtained the optimal bifurcation, because the average packet delay for each bifurcated flow has the same value. We also see that flow 2 was not affected at all, since flow 1 does not cross it on any of it's path or bifurcations. However, flow 3 and flow 4 have both been slightly affected due to the 2nd bifurcation of flow 1 passing through them. This is to be expected, as we are pushing more packets through the same links. Overall, the network average packet delay is lower, despite the increases to flow 3 and 4, due to flow 1 now having much lower delays.]
#indent_par[We can be sure that we have obtained the optimal bifurcation because the average packet delay for each bifurcated flow has the same value. We also see that flow 2 was not affected at all, since flow 1 does not cross it on any of its paths or bifurcations. However, flow 3 and flow 4 have both been slightly affected due to the 2nd bifurcation of flow 1 passing through them. This is to be expected, as we are pushing more packets through the same links. Overall, the network average packet delay is lower, despite the increases to flow 3 and 4, due to flow 1 now having much lower delays.]
#pagebreak()
@ -237,4 +238,4 @@ Flows <<- matrix(
caption: [Results]
)
#indent_par[Comparing with our previous attempt at bifurcation in exercise 15.c, we see an small overall improvement to the network average packet delay. The gradient projection algorithm discovered 2 new flows we hadn't used, namely $1 -> 3 -> 5 -> 6$ and $1 -> 2 -> 5$. Despite outputting the flow $1 -> 2 -> 5 -> 6$, it has no rate, so we did not include it in the kleinrock script.]
#indent_par[Comparing with our previous attempt at the bifurcation in exercise 15.c, we see a small overall improvement to the network average packet delay. The gradient projection algorithm discovered 2 new flows we hadn't used, namely $1 -> 3 -> 5 -> 6$ and $1 -> 2 -> 5$. Despite outputting the flow $1 -> 2 -> 5 -> 6$, it has no rate, so we did not include it in the Kleinrock script.]

View File

@ -10,7 +10,7 @@
==== a.
#indent_par[Our first approach consisted of a simple approach using the smallest number of links per flow, and then increasing the link capacities one by one until we achieved the desired blocking probability. We have detailed it in the following tables 25 and 26, and the results in table 27:]
#indent_par[Our first approach was a simple one: using the smallest number of links per flow, and then increasing the link capacities one at a time until we achieved the desired blocking probabilities. We have detailed it in the following tables 25 and 26, and the results in table 27:]
#figure(
pad(1em, tablex(
@ -66,7 +66,7 @@
#pagebreak()
#indent_par[In our second approach, we determined that having the 3rd flow not share the circuits of the 1st and 2nd flows was desireable, especially since the links $1 -> 4$ and $4 -> 3$ are the cheapest, at only $100$ per circuit. Again, we started from 0 circuits in each link, and increased them until we had the desired blocking percentages. We've detailed the approach in tables 28 and 29, and the results in table 30.]
#indent_par[In our second approach, we determined that having the 3rd flow not share the circuits of the 1st and 2nd flows was desirable, especially since the links $1 -> 4$ and $4 -> 3$ are the cheapest, at only $100$ per circuit. Again, we started from 0 circuits in each link and increased them until we had the desired blocking percentages. We've detailed the approach in tables 28 and 29, and the results in table 30.]
#figure(
pad(1em, tablex(
@ -122,7 +122,7 @@
#pagebreak()
#indent_par[In our third and final approach, we determined that the link $1 -> 2$ was very expensive, and considered whether it'd be worth to instead use the links $1 -> 4$, $4 -> 3$ and $3 -> 2$ instead. We briefly considered $1 -> 4$ and $4 -> 2$, but this link is very expensive and not used by any other flow, so it was likely not worth it. Again, we started from 0 circuits in each link, and increased them until we had the desired blocking percentages. We've detailed the approach in tables 31 and 32, and the results in table 33.]
#indent_par[In our third and final approach, we determined that the link $1 -> 2$ was very expensive, and considered whether it would be worth it to use the links $1 -> 4$, $4 -> 3$ and $3 -> 2$ instead. We briefly considered $1 -> 4$ and $4 -> 2$, but the latter link is very expensive and not used by any other flow, so it was likely not worth it. Again, we started from 0 circuits in each link and increased them until we had the desired blocking percentages. We've detailed the approach in tables 31 and 32, and the results in table 33.]
#figure(
pad(1em, tablex(
@ -242,4 +242,4 @@ endTime <<- 10000 * (1 / 0.5)
caption: [Results]
)
#indent_par[From the results, we see that the results are higher than the `cnet` simulated results. This is to be expected, since the product bound is an estimate of the upper bound of the expected results. Since both the number of links our flows cross and our blocking probabilities are small, the product bound still produces a similar result to the simulation.]
#indent_par[From the results, we see that the product bound values are higher than the `cnet` simulated results. This is to be expected since the product bound is an estimate of the upper bound of the expected results. Since both the number of links our flows cross and our blocking probabilities are small, the product bound still produces a similar result to the simulation.]

View File

@ -19,10 +19,12 @@
caption: "Data traversal"
)
#pagebreak()
#indent_par[Afterwards, we can divide each row by the number of occurrences in that row to obtain the transition probability matrix. The following code 1 is the code we developed to accomplish this:]
#code_figure(
text(size: 0.8em, raw(read("/code/2.R"), lang: "R", block: true)),
text(size: 1.0em, raw(read("/code/2.R"), lang: "R", block: true)),
caption: "Developed code",
)

View File

@ -2,21 +2,21 @@
==== a.
#indent_par[The following figures 7 and 8 show the throughput of slotted ALOHA for various values of $N$, $p$ and $σ$ on the same scale.]
#indent_par[The following figures 7 and 8 show the throughput of slotted ALOHA for various values of $N$, $p$ and $σ$, on the same scale.]
#grid(
columns: (1fr, 1fr),
figure(
image("/output/3a-aloha10.svg", width: 80%),
pad(1em, figure(
image("/output/3a-aloha10.svg", width: 100%),
caption: [Theoretical performance of slotted ALOHA (N = 10)]
),
figure(
image("/output/3a-aloha25.svg", width: 80%),
)),
pad(1em, figure(
image("/output/3a-aloha25.svg", width: 100%),
caption: [Theoretical performance of slotted ALOHA (N = 25)]
),
)),
)
#indent_par[For small values of $σ$, most users are stuck in the thinking state, without sending packets, while for larger values of $σ$, most users are simultaneously attempting to send packets, thus increasing the probability of a collision. In both cases, the throughput is low, regardless of the values of $p$]
#indent_par[For small values of $σ$, most users are stuck in the thinking state, without sending packets, while for larger values of $σ$, most users are simultaneously attempting to send packets, thus increasing the probability of a collision. In both cases, the throughput is low, regardless of the values of $p$.]
#indent_par[For values of $σ$ between 0.01 and 0.1, we see a sharp increase in throughput, given that this is a "sweet spot", where users will try to transmit more often, but not so often as to collide. However, in this range, the throughput depends sharply on the values of $N$ and $p$.]
@ -28,7 +28,7 @@
==== b.
#indent_par[We developed the following script in code 2 to simulate slotted ALOHA.]
#indent_par[We developed the following script in code 2 to simulate slotted ALOHA:]
#code_figure(
text(size: 0.8em, raw(read("/code/3b.R"), lang: "R", block: true)),
@ -51,7 +51,7 @@
)),
)
#indent_par[As we use the same scale for both the theoretical graphs (Figures 7 and 8) and simulated graphs (Figures 9 and 10, respectively), we can compare them side by side to get an idea of whether or not they are similar. By performing this comparison, we reach the conclusion that all graphs are very similar, with the exception of the graph with $N = 25$ and $p = 0.3$, which has the drop-off occur a fair bit later, and is quite noisy compared to the others.]
#indent_par[As we use the same scale for both the theoretical graphs (Figures 7 and 8) and simulated graphs (Figures 9 and 10, respectively), we can compare them side by side to get an idea of whether or not they are similar. By performing this comparison, we conclude that all graphs are very similar, except for the graph with $N = 25$ and $p = 0.3$, which has the drop-off occur a fair bit later, and is quite noisy compared to the others.]
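#indent_par[As a compact illustration of the dynamics being simulated, one run of the model described above can be sketched in R as follows. This is a simplification for illustration, not the actual `3b.R` script:]
```R
# Minimal sketch of slotted ALOHA: thinking users transmit a new packet with
# probability sigma; backlogged users retry with probability p. A slot
# succeeds only when exactly one user transmits.
sim_aloha <- function(N, p, sigma, slots = 10000) {
  backlogged <- rep(FALSE, N)
  successes <- 0
  for (slot in seq_len(slots)) {
    tx <- ifelse(backlogged, runif(N) < p, runif(N) < sigma)
    if (sum(tx) == 1) {
      successes <- successes + 1
      backlogged[tx] <- FALSE  # successful sender returns to thinking
    } else if (sum(tx) > 1) {
      backlogged[tx] <- TRUE   # collision: all senders become backlogged
    }
  }
  successes / slots            # throughput in packets per slot
}
```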
==== c.

View File

@ -3,7 +3,7 @@
#indent_par[Figure 11 represents the 3-DTMC we'll be using for this exercise.]
#figure(
image("/images/4-diagram.png", width: 50%),
image("/images/4-diagram.png", width: 70%),
caption: "3-DTMC"
)
@ -30,10 +30,10 @@ $
#pagebreak()
#indent_par[We can then model this in R using the following code 2:]
#indent_par[We can then model this in R using the following code 3:]
#code_figure(
text(size: 0.8em, raw(read("/code/4-solve.R"), lang: "R", block: true)),
text(size: 1.0em, raw(read("/code/4-solve.R"), lang: "R", block: true)),
caption: "Code for solving the balance equations",
)
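#indent_par[The underlying linear algebra can be sketched as follows. The transition matrix here is a placeholder, not the one from the exercise:]
```R
# Sketch: limiting probabilities from the balance equations pi P = pi,
# plus the normalization sum(pi) = 1. P below is a placeholder matrix.
P <- matrix(c(0.5, 0.3, 0.2,
              0.2, 0.6, 0.2,
              0.1, 0.4, 0.5), nrow = 3, byrow = TRUE)
A <- rbind(t(P) - diag(3), rep(1, 3))  # stack balance and normalization rows
b <- c(0, 0, 0, 1)
pi_hat <- qr.solve(A, b)               # least-squares solve of the system
```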
@ -53,7 +53,7 @@ $
==== (ii). Matrix multiplication
#indent_par[The following code 3 contains our approach to obtain the limiting state probabilities via matrix multiplication.]
#indent_par[The following code 4 contains our approach to obtain the limiting state probabilities via matrix multiplication.]
#code_figure(
text(size: 0.8em, raw(read("/code/4-matrix.R"), lang: "R", block: true)),
@ -80,7 +80,7 @@ $
==== (iii). Simulation
#indent_par[The following code 4 contains our approach to obtain the limiting state probabilities via simulation.]
#indent_par[The following code 5 contains our approach to obtain the limiting state probabilities via simulation.]
#code_figure(
text(size: 0.8em, raw(read("/code/4-sim.R"), lang: "R", block: true)),
@ -89,9 +89,9 @@ $
#indent_par[We initialize our current state to 1; then, for 100000 rounds, we save the current state, calculate the next state, and make it the current one.]
#indent_par[In order to calculate the next state, we generate a uniformly random number in the $[0.0, 1.0]$ interval, and then choose the first index of the cumulative sum of the probabilities that is higher than the number we generated.]
#indent_par[To calculate the next state, we generate a uniformly random number in the $[0.0, 1.0]$ interval, and then choose the first index of the cumulative sum of the probabilities that is higher than the number we generated.]
#indent_par[This works because by calculating the cumulative sum of the probabilities, we're calculating it's cumulative density function. Then by definition, finding the input for which this function has value $<= x$, for $x [0.0, 1.0]$ is equal to sampling the original distribution.]
#indent_par[This works because the cumulative sum of the probabilities is the distribution's cumulative distribution function. By definition, finding the first input for which this function has a value $>= x$, for $x$ uniformly random in $[0.0, 1.0]$, is equivalent to sampling the original distribution.]
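#indent_par[In R, that sampling step amounts to a couple of lines, assuming `P` is the transition matrix and `state` the current state:]
```R
# Inverse-CDF sampling of the next state from row `state` of P.
u <- runif(1)
state <- which(cumsum(P[state, ]) >= u)[1]
```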
#indent_par[Finally, we get the limiting state probabilities by counting, for each state, the fraction of visits to it out of all the states we've visited. The output of this process can be seen in table 5:]

View File

@ -1,9 +1,9 @@
#import "/typst/util.typ" as util: indent_par, code_figure
#indent_par[Figure 12 represents the 3-DTMC we'll be using for this exercise.]
#indent_par[Figure 12, from the guide, represents the 3-DTMC we'll be using for this exercise.]
#figure(
image("/images/5-diagram.png", width: 50%),
image("/images/5-diagram.png", width: 80%),
caption: "3-DTMC"
)
@ -31,10 +31,10 @@ $
#pagebreak()
#indent_par[We can then model this in R using the following code 2:]
#indent_par[We can then model this in R using the following code 6:]
#code_figure(
text(size: 0.8em, raw(read("/code/5-solve.R"), lang: "R", block: true)),
text(size: 1.0em, raw(read("/code/5-solve.R"), lang: "R", block: true)),
caption: "Code for solving the balance equations",
)
@ -54,14 +54,14 @@ $
==== (ii). Simulation using view 1
#indent_par[The following code 4 contains our approach to obtain the limiting state probabilities via simulation implementing the 1st view.]
#indent_par[The following code 7 contains our approach to obtain the limiting state probabilities via simulation implementing the 1st view.]
#code_figure(
text(size: 0.8em, raw(read("/code/5-sim1.R"), lang: "R", block: true)),
caption: "Code using simulation view 1",
)
#indent_par[View 1 is similar to how a DTMC works, but before each jump, a state will wait an exponentially distributed amount of time, with rate given by it the transition rate out of it.]
#indent_par[View 1 is similar to how a DTMC works, but before each jump, the process waits in the current state for an exponentially distributed amount of time, with the rate given by the total transition rate out of that state.]
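#indent_par[One step of this view can be sketched in R as follows, assuming a generator matrix `Q` with negative diagonal entries. This is an illustration, not our `5-sim1.R`:]
```R
# One view-1 step: wait an Exp(-Q[s,s]) holding time, then jump according
# to the embedded DTMC probabilities Q[s,j] / -Q[s,s] for j != s.
step_view1 <- function(Q, state, time) {
  hold <- rexp(1, rate = -Q[state, state])   # exponential holding time
  others <- setdiff(seq_len(nrow(Q)), state)
  jump_probs <- Q[state, others] / -Q[state, state]
  state <- others[sample.int(length(others), 1, prob = jump_probs)]
  list(state = state, time = time + hold)
}
```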
#indent_par[After running the code, we ended up with the results in table 7:]
@ -79,7 +79,7 @@ $
==== (iii). Simulation using view 2
#indent_par[The following code 5 contains our approach to obtain the limiting state probabilities via simulation implementing the 2nd view.]
#indent_par[The following code 8 contains our approach to obtain the limiting state probabilities via simulation implementing the 2nd view.]
#code_figure(
text(size: 0.8em, raw(read("/code/5-sim2.R"), lang: "R", block: true)),
@ -104,6 +104,6 @@ $
#indent_par[Both of the results obtained in tables 7 and 8 are very close to the theoretical values presented in table 6.]
#indent_par[This is, in part, because we used a high limit for the maximum time of 100000. This limit impacts the accuracy of the results greatly, with low maximum times have limiting state probabilities that are very far from the theoretical values.]
#indent_par[This is, in part, because we used a high limit for the maximum time of 100000. This limit impacts the accuracy of the results greatly, with low maximum times having limiting state probabilities that are very far from the theoretical values.]
#pagebreak()

View File

@ -2,14 +2,14 @@
#indent_par[Figure 13 shows the average delay as a function of the stopping condition for system loads $ρ$ equal to 0.5, 1 and 2.]
#indent_par[We've chosen to include $ρ = 0.5$ to serve as a comparison against the other values]
#indent_par[We've chosen to include $ρ = 0.5$ to serve as a comparison against the other values.]
#figure(
image("/output/6.svg", width: 50%),
image("/output/6.svg", width: 80%),
caption: "Average delay as a function of the stopping condition"
)
#indent_par[The y axis is on a log-scale, because the function grows very rapidly, for certain values we'll discuss shortly. There also exists a ribbon around each value, representing the confidence intervals.]
#indent_par[The y-axis is on a log scale because the function grows very rapidly for certain values, which we'll discuss shortly. There is also a ribbon around each value, representing 95% confidence intervals for the average delay.]
#indent_par[To calculate the confidence intervals, we perform 50 rounds of the simulator.]
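#indent_par[The helper used for this, `calc_ci`, is not shown in this diff; a Student-t based version consistent with how we call it (`calc_ci(0.95, samples)`) could look like:]
```R
# t-based confidence interval for the mean of a sample vector (sketch).
calc_ci <- function(level, samples) {
  n <- length(samples)
  m <- mean(samples)
  half <- qt(1 - (1 - level) / 2, df = n - 1) * sd(samples) / sqrt(n)
  list(mean = m, lower = m - half, upper = m + half)
}
```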
@ -17,6 +17,6 @@
#indent_par[For $ρ = 1$ and $ρ = 2$, the average delay increases without bound as the stopping condition grows. This indicates these systems are unstable, with $ρ = 2$ being more unstable than $ρ = 1$.]
#indent_par[As a fun note, another difference between $ρ = 1$ and $ρ = 2$ is that the former has a huge confidence interval, while the latter's is very tight around the mean. This implies that as the system becomes more unstable, the run-to-run stability increases.]
#indent_par[As a fun note, another difference between $ρ = 1$ and $ρ = 2$ is that the former has a huge confidence interval, while the latter's is very tight around the mean. This implies that as the system becomes more unstable, the run-to-run variance decreases.]
#pagebreak()

View File

@ -1,18 +1,19 @@
#import "@preview/tablex:0.0.6": tablex, rowspanx, colspanx
#import "/typst/util.typ" as util: indent_par, code_figure
#indent_par[For all calculations and tests, we set the link capacity at $100 "kbit" s^(-1)$ and the average packet size at $800 "bits"$.]
==== a.
#let solution = csv("/output/7a.csv", delimiter: "\t")
#indent_par[We first calculated all theoretical values and arrived at the following, in table 10:]
#indent_par[We first calculated all theoretical values and arrived at the following results, in table 9:]
#figure(
pad(1em, tablex(
columns: (auto, auto, 1.5fr, 2fr, 2fr),
align: center + horizon,
rowspanx(2)[ $λ$ ],
rowspanx(2)[ $μ$ ],
rowspanx(2)[ Average delay ($s$) ],
@ -35,14 +36,13 @@
caption: [Theoretical calculations]
)
#indent_par[Finally, for each value, we ran the simulator 20 times, and obtained the following confidence intervals, in table 11:]
#indent_par[Finally, for each value, we ran the simulator 10 times, and obtained the following results, with 95% confidence intervals, in table 10:]
#figure(
pad(1em, tablex(
columns: (auto, auto, 1fr, 1fr, 1fr, 1fr),
align: center + horizon,
rowspanx(2)[ $λ$ ],
rowspanx(2)[ $μ$ ],
colspanx(2)[ Average delay ($s$) ],
@ -72,15 +72,13 @@
#indent_par[However, for $λ = #solution.at(2).at(0)$, although the results still line up with the throughput, the average delay has a much bigger confidence interval, despite the mean still being spot-on.]
#pagebreak()
==== b.
#let solution = csv("/output/7b.csv", delimiter: "\t")
#indent_par[The values chosen for $𝜆$ and $𝜇$ are the same as in the previous exercise, to enable us to compare the results:]
#indent_par[The following table 12 are the updated theoretical results:]
#indent_par[The following table 11 contains the updated theoretical results:]
#figure(
pad(1em, tablex(
@ -110,7 +108,9 @@
caption: [Theoretical calculations]
)
#indent_par[Finally, just like in the previous exercise, for each value, we ran the simulator 20 times, and obtained the following confidence intervals, in table 13:]
#pagebreak()
#indent_par[Finally, just like in the previous exercise, for each value, we ran the simulator 10 times, and obtained the results, with 95% confidence intervals, in table 12:]
#figure(
pad(1em, tablex(
@ -143,19 +143,15 @@
caption: [Simulation results]
)
#indent_par[Similarly to the previous exercise, for $λ = #solution.at(1).at(0)$, the values line up quite well, but for $λ = #solution.at(2).at(0)$, we see some deviation on the average delay. In this case the deviation is so strong the confidence intervals don't even include the theoretical value.]
#indent_par[Similarly to the previous exercise, for $λ = #solution.at(1).at(0)$, the values line up quite well, but for $λ = #solution.at(2).at(0)$, we see some large deviations in the average delay. In this case, the mean is no longer spot-on with the theoretical values.]
#indent_par[When comparing this approach, using fixed packet sizes, against the previous, with exponentially distributed packet size, we see that the average delays are much smaller. This implies that, despite the mean being the same, the net negative effect of the larger packets outweighs the positive effect of the smaller packets that the exponential distribution yields.]
#pagebreak()
#indent_par[When comparing this approach, using fixed packet sizes, against the previous, with exponentially distributed packet sizes, we see that the average delays are smaller. This implies that, despite the mean being the same, the net negative effect of the larger packets outweighs the positive effect of the smaller packets that the exponential distribution yields.]
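#indent_par[As a sanity check on this claim, the two theoretical delays can be compared directly. The sketch below uses $μ = 125$ packets/s, which follows from the $100 "kbit" s^(-1)$ link and $800 "bits"$ packets; the value of $λ$ is illustrative:]
```R
# Theoretical average delays: M/M/1 (exponential sizes) vs M/D/1 (fixed sizes).
mu <- (100 * 1000) / 800  # 125 packets/s from link capacity and packet size
lambda <- 100             # illustrative arrival rate
w_mm1 <- 1 / (mu - lambda)                           # 0.040 s
w_md1 <- 1 / mu + lambda / (2 * mu * (mu - lambda))  # 0.024 s, smaller
```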
==== c.
#let solution = csv("/output/7c.csv", delimiter: "\t")
#indent_par[]
#indent_par[The following table 14 are the updated theoretical results:]
#indent_par[The following table 13 contains the updated theoretical results:]
#figure(
pad(1em, tablex(
@ -185,14 +181,13 @@
caption: [Theoretical calculations]
)
#indent_par[Finally, just like in the previous 2 exercises, for each value, we ran the simulator 20 times, and obtained the following confidence intervals, in table 15:]
#indent_par[Finally, just like in the previous 2 exercises, for each value, we ran the simulator 10 times, and obtained the following results, with 95% confidence intervals, in table 14:]
#figure(
pad(1em, tablex(
columns: (auto, auto, auto, 1fr, 1fr, 1fr, 1fr),
align: center + horizon,
rowspanx(2)[ $λ_1$ ],
rowspanx(2)[ $λ_2$ ],
rowspanx(2)[ $μ$ ],
@ -214,6 +209,6 @@
caption: [Simulation results]
)
// TODO: Compare with a. and b.
#indent_par[Comparing the simulated results with the theoretical, we see that the values of the latter are included in the confidence intervals of the former.]
#pagebreak()

View File

@ -56,7 +56,7 @@
#let results_b = csv("/output/8b.csv", delimiter: "\t")
#indent_par[The following code 8 contains out approach to calculate the variance. The functions `calc_stats_mm1` and `calc_stats_mg1` simulate the corresponding systems and return a list where `$avg_delay` contains the simulated average delay.]
#indent_par[The following code 9 contains our approach to calculate the variance. The functions `calc_stats_mm1` and `calc_stats_mg1` simulate the corresponding systems and return a list where `$avg_delay` contains the simulated average delay.]
#code_figure(
text(size: 0.8em, raw(read("/code/8b-report.R"), lang: "R", block: true)),
@ -100,9 +100,7 @@
#indent_par[We've included the workload variability from the previous exercise as the column $C^2$ to compare against. We can thus conclude that the variance and variability are correlated.]
#indent_par[This makes sense, as despite our _elephants_ occurring less often, their larger size ensures that the users that come after them have a much higher average queue delay, which in turn increases the variance of the system.]
#pagebreak()
#indent_par[This makes sense, as, despite our _elephants_ occurring less often, their larger size ensures that the users that come after them have a much higher average queue delay, which in turn increases the variance of the system.]
==== c.
@ -110,7 +108,7 @@
#indent_par[In our current implementation, we treat both the _elephants_ and _mice_ the same. _Elephants_ will always have a large size and thus need more time in queue, but this shouldn't affect the _mice_ that can be dispatched quickly.]
#indent_par[To remedy this, we can treat both categories of users differently, by performing *packet scheduling*. In specific, we can use a *strict priority* with _mice_ having a higher priority than the _elephants_. This leads to a higher average delay for the _elephants_, but with the tradeoff of the _mice_ having much lower average delay.]
#indent_par[To remedy this, we can treat both categories of users differently by performing *packet scheduling*. Specifically, we can use *strict priority*, with _mice_ having a higher priority than the _elephants_. This leads to a higher average delay for the _elephants_, but with the tradeoff of the _mice_ having a much lower average delay.]
#indent_par[To determine exactly whether this tradeoff is worthwhile, we can use the following formulas 11 through 15 to calculate the average queueing delay for each type of user:]
@ -122,7 +120,7 @@ $ W_"q2" = (λ_1 s_1^2 + λ_2 s_2^2) / (2 (1 - λ_1 s_1) (1 - λ_1 s_1 - λ_2 s_
$ W_q = (λ_1 W_"q1" + λ_2 W_"q2") / (λ_1 + λ_2) $
#indent_par[Where $W_"q1"$ is the average queueing delay of _mice_, $W_"q2"$ is the average queueing delay of _elephants_ and $W_q$ is the total average queueing delay]
#indent_par[Where $W_"q1"$ is the average queueing delay of _mice_, $W_"q2"$ is the average queueing delay of _elephants_ and $W_q$ is the total average queueing delay.]
#indent_par[With these formulas in hand, we have computed the values in the following table 17:]

View File

@ -1,12 +1,12 @@
#import "/typst/util.typ" as util: indent_par, code_figure
#indent_par[We have run the simulator for a range of $ρ [0.1, 1.5]$, with an interval of $0.1$ and obtained the following graph in figure 14:]
#indent_par[We have run the simulator for a range of $ρ in [0.1, 1.5]$, with a step of $0.1$ between samples, and obtained the following graph in figure 14:]
#figure(
image("/output/9.svg", width: 100%),
caption: [Results]
)
#indent_par[For stables values of the system ($ρ < 1.0$), both policies are very comparable. However, when the system becomes unstable ($ρ >= 1.0$), the *JSQ* policy outperforms the random policy.]
#indent_par[For stable values of the system ($ρ < 1.0$), both policies are very comparable. However, when the system becomes unstable ($ρ >= 1.0$), the *_JSQ_* policy outperforms the random policy.]
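#indent_par[The dispatch rules themselves are tiny; as a sketch:]
```R
# The two dispatch policies compared in figure 14, as selection rules over
# the current queue lengths (sketch).
jsq_pick    <- function(queue_lengths) which.min(queue_lengths)
random_pick <- function(queue_lengths) sample.int(length(queue_lengths), 1)
```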
#pagebreak()