From 98df406b54af190db38b45a2d4c00e0e8825cf09 Mon Sep 17 00:00:00 2001
From: jaffala
Date: Wed, 2 Nov 2022 08:37:22 -0700
Subject: [PATCH] assignment submission

---
 7159CEM-Portfolio-main/ACO.ipynb | 87 +++++++++++++++++++++++---------
 7159CEM-Portfolio-main/DP.ipynb | 9 ++--
 7159CEM-Portfolio-main/DT.ipynb | 4 +-
 7159CEM-Portfolio-main/LP.ipynb | 3 +-
 7159CEM-Portfolio-main/RL.ipynb | 8 +--
 5 files changed, 76 insertions(+), 35 deletions(-)

diff --git a/7159CEM-Portfolio-main/ACO.ipynb b/7159CEM-Portfolio-main/ACO.ipynb
index 8865ed5b..ba028b97 100644
--- a/7159CEM-Portfolio-main/ACO.ipynb
+++ b/7159CEM-Portfolio-main/ACO.ipynb
@@ -411,7 +411,7 @@
 "\n",
 "$$ Euclidean \\space Plot $$\n",
 " \n",
- "![title](ACO/Eucplt100.png)\n",
+ "![title](ACO/Ecuplt100.png)\n",
 "\n",
 "\n",
 "$$ Asymmetric \\space graph $$\n",
@@ -452,7 +452,7 @@
 "\n",
 "$$ Euclidean \\space Plot $$\n",
 " \n",
- "![title](ACO/Eucplt150.png)\n",
+ "![title](ACO/Ecuplt150.png)\n",
 "\n",
 "\n",
 "$$ Asymmetric \\space graph $$\n",
@@ -473,6 +473,15 @@
 "![title](ACO/symplot150.png)\n",
 "\n",
 "\n",
+ "\n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
 "## For n = 200\n",
 "Results are as follows :- \n",
 "Cost of travel, Euclidean = 287.6057902102739\n",
@@ -505,9 +514,13 @@
 "\n",
 "$$ Symmetric \\space Plot $$\n",
 " \n",
- "![title](ACO/symplt200.png)\n",
- "\n",
- "\n",
+ "![title](ACO/symplt200.png)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
 "## For n = 250\n",
 "Results are as follows :- \n",
 "Cost of travel, Euclidean = 280.671643559593\n",
@@ -527,7 +540,7 @@
 "\n",
 "$$ Asymmetric \\space graph $$\n",
 "\n",
- "![title](ACO/Asygrf250.png)\n",
+ "![title](ACO/Asygrf.250.png)\n",
 "\n",
 "$$ Asymmetric \\space Plot $$\n",
 " \n",
@@ -540,25 +553,57 @@
 "\n",
 "$$ Symmetric \\space Plot $$\n",
 " \n",
- "![title](ACO/symplt250.png)\n",
- "\n"
+ "![title](ACO/symplt250.png)\n"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "# 
Conclusion"
+ "## For n = 300\n",
+ "Results are as follows :- \n",
+ "Cost of travel, Euclidean = 329.06640228446554\n",
+ "Cost of travel, Euclidean - Greedy= 378.66916482167045\n",
+ "Cost of travel for asymmetric graph is = 271\n",
+ "Cost of travel for asymmetric graph Greedy= 270\n",
+ "Cost of travel for symmetric graph is = 289\n",
+ "Cost of travel for symmetric graph Greedy= 263\n",
+ "\n",
+ "$$ Euclidean \\space graph $$\n",
+ "\n",
+ "![title](ACO/Ecugrf300.png)\n",
+ "\n",
+ "$$ Euclidean \\space Plot $$\n",
+ "\n",
+ "![title](ACO/Ecuplt300.png)\n",
+ "\n",
+ "$$ Asymmetric \\space graph $$\n",
+ "\n",
+ "![title](ACO/Asygrf300.png)\n",
+ "\n",
+ "$$ Asymmetric \\space Plot $$\n",
+ " \n",
+ "![title](ACO/Asyplt300.png)\n",
+ "\n",
+ "\n",
+ "$$ Symmetric \\space graph $$\n",
+ "\n",
+ "![title](ACO/symgrf300.png)\n",
+ "\n",
+ "$$ Symmetric \\space Plot $$\n",
+ " \n",
+ "![title](ACO/symplt300.png)\n"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "```\n",
- "............................................................\n",
- "............................................................\n",
- "```"
+ "# Conclusion\n",
+ "In conclusion, we successfully solved the traveling salesman problem using the ant colony optimization algorithm. As required, we created three different types of weighted graphs (Euclidean, asymmetric and symmetric). After studying scikit-opt we performed ant colony optimization on each graph using ACA_TSP() from that package. Each time we compared the cost of travel with the greedy neighbor approach, as required, and we ran the experiment for n = 50, 100, 150, 200, 250 and 300 vertices; the results are shown above. \n",
+ "It is important to note that the accuracy of the ant colony optimization algorithm depends directly on size_pop (the population size) and max_iter (the maximum number of iterations): increasing the maximum number of iterations from 200 to 1000 improves the accuracy of the results. 
As seen above, for n = 300 with max_iter=200 and size_pop=50, the greedy neighbor approach gives better results (a lower cost of travel) than ant colony optimization; therefore max_iter should be kept greater than n. \n",
+ "\n",
+ "Please refer to ACO.py for the Python code. "
 ]
 },
 {
@@ -572,18 +617,12 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "```\n",
- "............................................................\n",
- "............................................................\n",
- "```"
+ "\n",
+ "M. Alhanjouri and B. Alfarra (2013). Ant colony versus genetic algorithm based on travelling salesman\n",
+ "problem.\n",
+ "\n",
+ "scikit-opt. (n.d.). Retrieved 2 November 2022, from https://scikit-opt.github.io/scikit-opt/"
 ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": []
 }
 ],
 "metadata": {
diff --git a/7159CEM-Portfolio-main/DP.ipynb b/7159CEM-Portfolio-main/DP.ipynb
index 3cf15e7a..e738e93d 100644
--- a/7159CEM-Portfolio-main/DP.ipynb
+++ b/7159CEM-Portfolio-main/DP.ipynb
@@ -366,7 +366,9 @@
 "metadata": {},
 "source": [
 "# Conclusion\n",
- "Topdown and bottom up approaches where both successful as the results for this approach is identical. \n"
+ "The top-down and bottom-up approaches were both successful, as the two approaches produce identical results. \n",
+ "\n",
+ "Please refer to DP.py for the Python code.\n"
 ]
 },
 {
@@ -376,11 +378,6 @@
 "# List of references\n",
 "Ghadage, O. P. (2021, November 22). What is Dynamic Programming? Top-down vs Bottom-up Approach. Simplilearn.com. 
https://www.simplilearn.com/tutorials/data-structure-tutorial/what-is-dynamic-programming\n"
 ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": []
 }
 ],
 "metadata": {
diff --git a/7159CEM-Portfolio-main/DT.ipynb b/7159CEM-Portfolio-main/DT.ipynb
index 4b631c91..e20b9d80 100644
--- a/7159CEM-Portfolio-main/DT.ipynb
+++ b/7159CEM-Portfolio-main/DT.ipynb
@@ -305,7 +305,9 @@
 "# Conclusion\n",
 "\n",
 "By using scikit-learn we implemented decision trees with the features temperature, humidity and windy for \"Numeric\", and outlook, temperature, humidity and windy for \"Nominal\", with play as the target in both cases. Firstly, we store the data sets in the variables \"Data1\" and \"Data2\", then we store the features in variable X and the target in variable Y. Then we use the train_test_split function from the sklearn library to split each dataset into a training set and a test set. After that we create the decision tree classifiers d1tree and d2tree for the two data sets. Next we use the fit function to train each decision tree and make predictions using d1tree.predict. Finally we plot the decision trees and print the accuracy scores. \n",
- "From obeservation and experimentation we can say that if we increase the size of the train set, the accuracy of the decision tree also increases. \n"
+ "From observation and experimentation we can say that increasing the size of the training set also increases the accuracy of the decision tree. \n",
+ "\n",
+ "Please refer to DT.py for the Python code. 
\n"
 ]
 },
 {
diff --git a/7159CEM-Portfolio-main/LP.ipynb b/7159CEM-Portfolio-main/LP.ipynb
index b9b3b7be..a9384416 100644
--- a/7159CEM-Portfolio-main/LP.ipynb
+++ b/7159CEM-Portfolio-main/LP.ipynb
@@ -261,7 +261,8 @@
 "model += 1*T1 + 1 * T2 + 1 * T3 <= 20000, \"Molding_machine\"\n",
 "model += 0.3*T1 + 0.5 * T2 + 0.4 * T3 <= 900, \"Assembly_machine\"\n",
 "Objective value: 390.00000000\n",
- "\n"
+ "\n",
+ "Please refer to LP.py for the Python code. "
 ]
 },
 {
diff --git a/7159CEM-Portfolio-main/RL.ipynb b/7159CEM-Portfolio-main/RL.ipynb
index 3e64e46a..a438b62b 100644
--- a/7159CEM-Portfolio-main/RL.ipynb
+++ b/7159CEM-Portfolio-main/RL.ipynb
@@ -638,8 +638,9 @@
 "source": [
 "# Conclusion\n",
 "\n",
- "The various reinforcement learning application was studied using the bandit algorithms with various experiments \n",
- "\n"
+ "Various reinforcement learning applications were studied using bandit algorithms across a series of experiments. \n",
+ "\n",
+ "Please refer to RL.py for the Python code. "
 ]
 },
 {
@@ -648,7 +649,8 @@
 "# List of references\n",
 "\n",
- "Reinforcement Learing - An introduction (Sutton and Barto, 2018)\n"
+ "Sutton, R. S. and Barto, A. G. (2018). Reinforcement Learning: An Introduction.\n",
+ "Second edition. Cambridge, MA: The MIT Press. http://incompleteideas.net/book/RLbook2020trimmed.pdf\n"
 ]
 }
 ],
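As a companion to the ACO conclusion in this patch: a minimal sketch of how ACA_TSP from scikit-opt is typically driven. The coordinates, cost function and parameter values below are illustrative stand-ins, not the contents of the submitted ACO.py (which used size_pop=50 and max_iter=200 on n = 50 to 300 vertices).

```python
import numpy as np
from scipy import spatial
from sko.ACA import ACA_TSP

num_points = 20  # small demo size; the portfolio used n = 50 ... 300
rng = np.random.default_rng(0)
points = rng.random((num_points, 2))
# Pairwise Euclidean distances between the random city coordinates
distance_matrix = spatial.distance.cdist(points, points, metric='euclidean')

def total_distance(routine):
    # Cost of visiting the cities in the given order and returning to the start
    return sum(distance_matrix[routine[i], routine[(i + 1) % num_points]]
               for i in range(num_points))

# As the conclusion notes, solution quality depends on size_pop and max_iter
aca = ACA_TSP(func=total_distance, n_dim=num_points,
              size_pop=10, max_iter=20,
              distance_matrix=distance_matrix)
best_route, best_cost = aca.run()
```

The resulting cost can then be compared against a greedy nearest-neighbor tour, as the portfolio does for each graph type.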
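The DT conclusion's pipeline (features in X, target in y, train_test_split, a DecisionTreeClassifier, fit, predict, accuracy) can be sketched as follows. The weather/"play" data is not reproduced in this patch, so a bundled scikit-learn dataset stands in for Data1; variable names mirror the prose but are illustrative.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Stand-in for the portfolio's weather dataset: features in X, target in y
X, y = load_iris(return_X_y=True)

# Split into training and test sets, as described in the conclusion
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

d1tree = DecisionTreeClassifier(random_state=42)  # the decision tree classifier
d1tree.fit(X_train, y_train)                      # train on the training set
predictions = d1tree.predict(X_test)              # predict on the test set
score = accuracy_score(y_test, predictions)
```

Varying `test_size` here is one way to observe the effect the conclusion reports: a larger training set tends to raise test accuracy.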
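The RL conclusion mentions bandit experiments; a minimal epsilon-greedy loop in the style of Sutton and Barto's ten-armed testbed is sketched below. The arm means, seed, and parameter values are illustrative, not those of the submitted RL.py.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 10                           # ten-armed testbed
true_means = rng.normal(0.0, 1.0, k)
Q = np.zeros(k)                  # estimated value of each arm
N = np.zeros(k)                  # number of pulls per arm
epsilon = 0.1

for t in range(2000):
    # Explore with probability epsilon, otherwise exploit the best estimate
    if rng.random() < epsilon:
        a = int(rng.integers(k))
    else:
        a = int(np.argmax(Q))
    reward = rng.normal(true_means[a], 1.0)
    N[a] += 1
    Q[a] += (reward - Q[a]) / N[a]   # incremental sample-average update
```

Sweeping `epsilon` (e.g. 0, 0.01, 0.1) reproduces the classic exploration-exploitation comparison from the book.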