Probability in Electrical Engineering and Computer Science : An Application-Driven Course.

Bibliographic Details
Main Author: Walrand, Jean.
Format: eBook
Language: English
Published: Cham : Springer International Publishing AG, 2021.
Edition: 1st ed.
Online Access: Click to View
LEADER 09845nam a22004213i 4500
001 EBC6647753
003 MiAaPQ
005 20231204023217.0
006 m o d |
007 cr cnu||||||||
008 231204s2021 xx o ||||0 eng d
020 |a 9783030499952  |q (electronic bk.) 
020 |z 9783030499945 
035 |a (MiAaPQ)EBC6647753 
035 |a (Au-PeEL)EBL6647753 
035 |a (OCoLC)1319210231 
040 |a MiAaPQ  |b eng  |e rda  |e pn  |c MiAaPQ  |d MiAaPQ 
050 4 |a QA76.9.M35 
100 1 |a Walrand, Jean. 
245 1 0 |a Probability in Electrical Engineering and Computer Science :  |b An Application-Driven Course. 
250 |a 1st ed. 
264 1 |a Cham :  |b Springer International Publishing AG,  |c 2021. 
264 4 |c ©2021. 
300 |a 1 online resource (391 pages) 
336 |a text  |b txt  |2 rdacontent 
337 |a computer  |b c  |2 rdamedia 
338 |a online resource  |b cr  |2 rdacarrier 
505 0 |a Intro -- Preface -- Acknowledgements -- Introduction -- About This Second Edition -- Contents -- 1 PageRank: A -- 1.1 Model -- 1.2 Markov Chain -- 1.2.1 General Definition -- 1.2.2 Distribution After n Steps and Invariant Distribution -- 1.3 Analysis -- 1.3.1 Irreducibility and Aperiodicity -- 1.3.2 Big Theorem -- 1.3.3 Long-Term Fraction of Time -- 1.4 Illustrations -- 1.5 Hitting Time -- 1.5.1 Mean Hitting Time -- 1.5.2 Probability of Hitting a State Before Another -- 1.5.3 FSE for Markov Chain -- 1.6 Summary -- 1.6.1 Key Equations and Formulas -- 1.7 References -- 1.8 Problems -- 2 PageRank: B -- 2.1 Sample Space -- 2.2 Laws of Large Numbers for Coin Flips -- 2.2.1 Convergence in Probability -- 2.2.2 Almost Sure Convergence -- 2.3 Laws of Large Numbers for i.i.d. RVs -- 2.3.1 Weak Law of Large Numbers -- 2.3.2 Strong Law of Large Numbers -- 2.4 Law of Large Numbers for Markov Chains -- 2.5 Proof of Big Theorem -- 2.5.1 Proof of Theorem 1.1 (a) -- 2.5.2 Proof of Theorem 1.1 (b) -- 2.5.3 Periodicity -- 2.6 Summary -- 2.6.1 Key Equations and Formulas -- 2.7 References -- 2.8 Problems -- 3 Multiplexing: A -- 3.1 Sharing Links -- 3.2 Gaussian Random Variable and CLT -- 3.2.1 Binomial and Gaussian -- 3.2.2 Multiplexing and Gaussian -- 3.2.3 Confidence Intervals -- 3.3 Buffers -- 3.3.1 Markov Chain Model of Buffer -- 3.3.2 Invariant Distribution -- 3.3.3 Average Delay -- 3.3.4 A Note About Arrivals -- 3.3.5 Little's Law -- 3.4 Multiple Access -- 3.5 Summary -- 3.5.1 Key Equations and Formulas -- 3.6 References -- 3.7 Problems -- 4 Multiplexing: B -- 4.1 Characteristic Functions -- 4.2 Proof of CLT (Sketch) -- 4.3 Moments of N(0, 1) -- 4.4 Sum of Squares of 2 i.i.d. N(0, 1) -- 4.5 Two Applications of Characteristic Functions -- 4.5.1 Poisson as a Limit of Binomial -- 4.5.2 Exponential as Limit of Geometric -- 4.6 Error Function. 
505 8 |a 4.7 Adaptive Multiple Access -- 4.8 Summary -- 4.8.1 Key Equations and Formulas -- 4.9 References -- 4.10 Problems -- 5 Networks: A -- 5.1 Spreading Rumors -- 5.2 Cascades -- 5.3 Seeding the Market -- 5.4 Manufacturing of Consent -- 5.5 Polarization -- 5.6 M/M/1 Queue -- 5.7 Network of Queues -- 5.8 Optimizing Capacity -- 5.9 Internet and Network of Queues -- 5.10 Product-Form Networks -- 5.10.1 Example -- 5.11 References -- 5.12 Problems -- 6 Networks: B -- 6.1 Social Networks -- 6.2 Continuous-Time Markov Chains -- 6.2.1 Two-State Markov Chain -- 6.2.2 Three-State Markov Chain -- 6.2.3 General Case -- 6.2.4 Uniformization -- 6.2.5 Time Reversal -- 6.3 Product-Form Networks -- 6.4 Proof of Theorem 5.7 -- 6.5 References -- 7 Digital Link: A -- 7.1 Digital Link -- 7.2 Detection and Bayes' Rule -- 7.2.1 Bayes' Rule -- 7.2.2 Circumstances vs. Causes -- 7.2.3 MAP and MLE -- Example: Ice Cream and Sunburn -- 7.2.4 Binary Symmetric Channel -- 7.3 Huffman Codes -- 7.4 Gaussian Channel -- Simulation -- 7.4.1 BPSK -- 7.5 Multidimensional Gaussian Channel -- 7.5.1 MLE in Multidimensional Case -- 7.6 Hypothesis Testing -- 7.6.1 Formulation -- 7.6.2 Solution -- 7.6.3 Examples -- Gaussian Channel -- Mean of Exponential RVs -- Bias of a Coin -- Discrete Observations -- 7.7 Summary -- 7.7.1 Key Equations and Formulas -- 7.8 References -- 7.9 Problems -- 8 Digital Link: B -- 8.1 Proof of Optimality of the Huffman Code -- 8.2 Proof of Neyman-Pearson Theorem 7.4 -- 8.3 Jointly Gaussian Random Variables -- 8.3.1 Density of Jointly Gaussian Random Variables -- 8.4 Elementary Statistics -- 8.4.1 Zero-Mean? -- 8.4.2 Unknown Variance -- 8.4.3 Difference of Means -- 8.4.4 Mean in Hyperplane? -- 8.4.5 ANOVA -- 8.5 LDPC Codes -- 8.6 Summary -- 8.6.1 Key Equations and Formulas -- 8.7 References -- 8.8 Problems -- 9 Tracking: A -- 9.1 Examples -- 9.2 Estimation Problem. 
505 8 |a 9.3 Linear Least Squares Estimates -- 9.3.1 Projection -- 9.4 Linear Regression -- 9.5 A Note on Overfitting -- 9.6 MMSE -- 9.6.1 MMSE for Jointly Gaussian -- 9.7 Vector Case -- 9.8 Kalman Filter -- 9.8.1 The Filter -- 9.8.2 Examples -- Random Walk -- Random Walk with Unknown Drift -- Random Walk with Changing Drift -- Falling Object -- 9.9 Summary -- 9.9.1 Key Equations and Formulas -- 9.10 References -- 9.11 Problems -- 10 Tracking: B -- 10.1 Updating LLSE -- 10.2 Derivation of Kalman Filter -- 10.3 Properties of Kalman Filter -- 10.3.1 Observability -- 10.3.2 Reachability -- 10.4 Extended Kalman Filter -- 10.4.1 Examples -- 10.5 Summary -- 10.5.1 Key Equations and Formulas -- 10.6 References -- 11 Speech Recognition: A -- 11.1 Learning: Concepts and Examples -- 11.2 Hidden Markov Chain -- 11.3 Expectation Maximization and Clustering -- 11.3.1 A Simple Clustering Problem -- 11.3.2 A Second Look -- 11.4 Learning: Hidden Markov Chain -- 11.4.1 HEM -- 11.4.2 Training the Viterbi Algorithm -- 11.5 Summary -- 11.5.1 Key Equations and Formulas -- 11.6 References -- 11.7 Problems -- 12 Speech Recognition: B -- 12.1 Online Linear Regression -- 12.2 Theory of Stochastic Gradient Projection -- 12.2.1 Gradient Projection -- 12.2.2 Stochastic Gradient Projection -- 12.2.3 Martingale Convergence -- 12.3 Big Data -- 12.3.1 Relevant Data -- 12.3.2 Compressed Sensing -- 12.3.3 Recommendation Systems -- 12.4 Deep Neural Networks -- 12.4.1 Calculating Derivatives -- 12.5 Summary -- 12.5.1 Key Equations and Formulas -- 12.6 References -- 12.7 Problems -- 13 Route Planning: A -- 13.1 Model -- 13.2 Formulation 1: Pre-planning -- 13.3 Formulation 2: Adapting -- 13.4 Markov Decision Problem -- 13.4.1 Examples -- 13.5 Infinite Horizon -- 13.6 Summary -- 13.6.1 Key Equations and Formulas -- 13.7 References -- 13.8 Problems -- 14 Route Planning: B -- 14.1 LQG Control. 
505 8 |a 14.1.1 Letting N → ∞ -- 14.2 LQG with Noisy Observations -- 14.2.1 Letting N → ∞ -- 14.3 Partially Observed MDP -- 14.3.1 Example: Searching for Your Keys -- 14.4 Summary -- 14.4.1 Key Equations and Formulas -- 14.5 References -- 14.6 Problems -- 15 Perspective and Complements -- 15.1 Inference -- 15.2 Sufficient Statistic -- 15.2.1 Interpretation -- 15.3 Infinite Markov Chains -- 15.3.1 Lyapunov-Foster Criterion -- 15.4 Poisson Process -- 15.4.1 Definition -- 15.4.2 Independent Increments -- 15.4.3 Number of Jumps -- 15.5 Boosting -- 15.6 Multi-Armed Bandits -- 15.7 Capacity of BSC -- 15.8 Bounds on Probabilities -- 15.8.1 Applying the Bounds to Multiplexing -- 15.9 Martingales -- 15.9.1 Definitions -- 15.9.2 Examples -- 15.9.3 Law of Large Numbers -- 15.9.4 Wald's Equality -- 15.10 Summary -- 15.10.1 Key Equations and Formulas -- 15.11 References -- 15.12 Problems -- Correction to: Probability in Electrical Engineering and Computer Science -- Correction to: Probability in Electrical Engineering and Computer Science (Funding Information) -- A Elementary Probability -- A.1 Symmetry -- A.2 Conditioning -- A.3 Common Confusion -- A.4 Independence -- A.5 Expectation -- A.6 Variance -- A.7 Inequalities -- A.8 Law of Large Numbers -- A.9 Covariance and Regression -- A.10 Why Do We Need a More Sophisticated Formalism? -- A.11 References -- A.12 Solved Problems -- B Basic Probability -- B.1 General Framework -- B.1.1 Probability Space -- B.1.2 Borel-Cantelli Theorem -- B.1.3 Independence -- B.1.4 Converse of Borel-Cantelli Theorem -- B.1.5 Conditional Probability -- B.1.6 Random Variable -- B.2 Discrete Random Variable -- B.2.1 Definition -- B.2.2 Expectation -- B.2.3 Function of a RV -- B.2.4 Nonnegative RV -- B.2.5 Linearity of Expectation -- B.2.6 Monotonicity of Expectation -- B.2.7 Variance, Standard Deviation. 
505 8 |a B.2.8 Important Discrete Random Variables -- B.3 Multiple Discrete Random Variables -- B.3.1 Joint Distribution -- B.3.2 Independence -- B.3.3 Expectation of Function of Multiple RVs -- B.3.4 Covariance -- B.3.5 Conditional Expectation -- B.3.6 Conditional Expectation of a Function -- B.4 General Random Variables -- B.4.1 Definitions -- B.4.2 Examples -- B.4.3 Expectation -- B.4.4 Continuity of Expectation -- B.5 Multiple Random Variables -- B.5.1 Random Vector -- B.5.2 Minimum and Maximum of Independent RVs -- B.5.3 Sum of Independent Random Variables -- B.6 Random Vectors -- B.6.1 Orthogonality and Projection -- B.7 Density of a Function of Random Variables -- B.7.1 Linear Transformations -- B.7.2 Nonlinear Transformations -- B.8 References -- B.9 Problems -- References -- Index. 
588 |a Description based on publisher supplied metadata and other sources. 
590 |a Electronic reproduction. Ann Arbor, Michigan : ProQuest Ebook Central, 2023. Available via World Wide Web. Access may be limited to ProQuest Ebook Central affiliated libraries.  
655 4 |a Electronic books. 
776 0 8 |i Print version:  |a Walrand, Jean  |t Probability in Electrical Engineering and Computer Science  |d Cham : Springer International Publishing AG, c2021  |z 9783030499945 
797 2 |a ProQuest (Firm) 
856 4 0 |u https://ebookcentral.proquest.com/lib/matrademy/detail.action?docID=6647753  |z Click to View