Statistics and Probability with Applications for Engineers and Scientists

Introducing the tools of statistics and probability from the ground up

An understanding of statistical tools is essential for engineers and scientists who routinely analyze data in the course of their work. Statistics and Probability with Applications for Engineers and Scientists walks readers through a wide range of popular statistical techniques, explaining step-by-step how to generate, analyze, and interpret data for diverse applications in engineering and the natural sciences.

Unique among books of this kind, Statistics and Probability with Applications for Engineers and Scientists covers descriptive statistics first, then goes on to discuss the fundamentals of probability theory. Along with case studies, examples, and real-world data sets, the book incorporates clear instructions on how to use the statistical packages Minitab® and Microsoft® Office Excel® to analyze various data sets. The book also features:

Detailed discussions of sampling distributions, statistical estimation of population parameters, hypothesis testing, reliability theory, statistical quality control (including Phase I and Phase II control charts), and process capability indices

A clear presentation of nonparametric methods and simple and multiple linear regression methods, as well as a brief discussion of logistic regression

Comprehensive guidance on the design of experiments, including randomized block designs, one- and two-way layout designs, Latin square designs, random effects and mixed effects models, factorial and fractional factorial designs, and response surface methodology

A companion website containing data sets for Minitab and Microsoft Office Excel, as well as JMP® routines and results

Assuming no background in probability and statistics, Statistics and Probability with Applications for Engineers and Scientists features a unique, yet tried-and-true, approach that is ideal for all undergraduate students as well as statistical practitioners who analyze and illustrate real-world data in engineering and the natural sciences.


BHISHAM C. GUPTA, PhD, is Professor in the Department of Mathematics and Statistics at the University of Southern Maine. Dr. Gupta has written four books and more than thirty articles.

IRWIN GUTTMAN, PhD, is Professor Emeritus of Statistics in the Department of Mathematics at the State University of New York at Buffalo and Department of Statistics at the University of Toronto, Canada. Dr. Guttman has written five books and over 140 articles.


Preface xvii

Chapter 1 | Introduction 1

1.1 Designed Experiment 2

1.1.1 Motivation for the Study 2

1.1.2 Investigation 2

1.1.3 Changing Criteria 2

1.1.4 A Summary of the Various Phases of the Investigation 3

1.2 A Survey 5

1.3 An Observational Study 6

1.4 A Set of Historical Data 6

1.5 A Brief Description of What Is Covered in This Book 6

PART I

Chapter 2 | Describing Data Graphically and Numerically 11

2.1 Getting Started with Statistics 12

2.1.1 What Is Statistics? 12

2.1.2 Population and Sample in a Statistical Study 12

2.2 Classification of Various Types of Data 15

2.2.1 Nominal Data 15

2.2.2 Ordinal Data 16

2.2.3 Interval Data 16

2.2.4 Ratio Data 16

2.3 Frequency Distribution Tables for Qualitative and Quantitative Data 17

2.3.1 Qualitative Data 17

2.3.2 Quantitative Data 20

2.4 Graphical Description of Qualitative and Quantitative Data 25

2.4.1 Dot Plot 25

2.4.2 Pie Chart 25

2.4.3 Bar Chart 27

2.4.4 Histograms 30

2.4.5 Line Graph 35

2.4.6 Stem-and-Leaf Plot 37

2.5 Numerical Measures of Quantitative Data 41

2.5.1 Measures of Centrality 42

2.5.2 Measures of Dispersion 46

2.6 Numerical Measures of Grouped Data 55

2.6.1 Mean of Grouped Data 56

2.6.2 Median of Grouped Data 56

2.6.3 Mode of Grouped Data 57

2.6.4 Variance of Grouped Data 57

2.7 Measures of Relative Position 59

2.7.1 Percentiles 59

2.7.2 Quartiles 60

2.7.3 Interquartile Range 60

2.7.4 Coefficient of Variation 61

2.8 Box-Whisker Plot 62

2.8.1 Construction of a Box Plot 62

2.8.2 How to Use the Box Plot 63

2.9 Measures of Association 68

2.10 Case Studies 71

2.11 Using JMP 73

Review Practice Problems 73

Chapter 3 | Elements of Probability 83

3.1 Introduction 84

3.2 Random Experiments, Sample Spaces, and Events 84

3.2.1 Random Experiments and Sample Spaces 84

3.2.2 Events 85

3.3 Concepts of Probability 88

3.4 Techniques of Counting Sample Points 93

3.4.1 Tree Diagram 93

3.4.2 Permutations 94

3.4.3 Combinations 95

3.4.4 Arrangements of n Objects Involving Several Kinds of Objects 96

3.5 Conditional Probability 98

3.6 Bayes’s Theorem 100

3.7 Introducing Random Variables 104

Review Practice Problems 105

Chapter 4 | Discrete Random Variables and Some Important Discrete Probability Distributions 111

4.1 Graphical Descriptions of Discrete Distributions 112

4.2 Mean and Variance of a Discrete Random Variable 113

4.2.1 Expected Value of Discrete Random Variables and Their Functions 113

4.2.2 The Moment-Generating Function–Expected Value of a Special Function of X 115

4.3 The Discrete Uniform Distribution 117

4.4 The Hypergeometric Distribution 119

4.5 The Bernoulli Distribution 122

4.6 The Binomial Distribution 123

4.7 The Multinomial Distribution 126

4.8 The Poisson Distribution 128

4.8.1 Definition and Properties of the Poisson Distribution 128

4.8.2 Poisson Process 128

4.8.3 Poisson Distribution as a Limiting Form of the Binomial 128

4.9 The Negative Binomial Distribution 132

4.10 Some Derivations and Proofs (Optional) 135

4.11 A Case Study 135

4.12 Using JMP 135

Review Practice Problems 136

Chapter 5 | Continuous Random Variables and Some Important Continuous Probability Distributions 143

5.1 Continuous Random Variables 144

5.2 Mean and Variance of Continuous Random Variables 146

5.2.1 Expected Value of Continuous Random Variables and Their Functions 146

5.2.2 The Moment-Generating Function–Expected Value of a Special Function of X 149

5.3 Chebyshev’s Inequality 151

5.4 The Uniform Distribution 152

5.4.1 Definition and Properties 152

5.4.2 Mean and Standard Deviation of the Uniform Distribution 155

5.5 The Normal Distribution 157

5.5.1 Definition and Properties 157

5.5.2 The Standard Normal Distribution 158

5.5.3 The Moment-Generating Function of the Normal Distribution 164

5.6 Distribution of Linear Combination of Independent Normal Variables 165

5.7 Approximation of the Binomial and Poisson Distribution by the Normal Distribution 169

5.7.1 Approximation of the Binomial Distribution by the Normal Distribution 169

5.7.2 Approximation of the Poisson Distribution by the Normal Distribution 171

5.8 A Test of Normality 171

5.9 Probability Models Commonly Used in Reliability Theory 175

5.9.1 The Lognormal Distribution 176

5.9.2 The Exponential Distribution 180

5.9.3 The Gamma Distribution 184

5.9.4 The Weibull Distribution 187

5.10 A Case Study 191

5.11 Using JMP 192

Review Practice Problems 192

Chapter 6 | Distribution of Functions of Random Variables 199

6.1 Introduction 200

6.2 Distribution Functions of Two Random Variables 200

6.2.1 Case of Two Discrete Random Variables 200

6.2.2 Case of Two Continuous Random Variables 202

6.2.3 The Mean Value and Variance of Functions of Two Random Variables 204

6.2.4 Conditional Distributions 206

6.2.5 Correlation between Two Random Variables 208

6.2.6 Bivariate Normal Distribution 211

6.3 Extension to Several Random Variables 214

6.4 The Moment-Generating Function Revisited 214

Review Practice Problems 218

Chapter 7 | Sampling Distributions 223

7.1 Random Sampling 224

7.1.1 Random Sampling from an Infinite Population 224

7.1.2 Random Sampling from a Finite Population 225

7.2 The Sampling Distribution of the Mean 228

7.2.1 Normal Sampled Population 228

7.2.2 Nonnormal Sampled Population 228

7.2.3 The Central Limit Theorem 228

7.3 Sampling from a Normal Population 234

7.3.1 The Chi-Square Distribution 234

7.3.2 The Student t-Distribution 240

7.3.3 Snedecor’s F-Distribution 244

7.4 Order Statistics 247

7.5 Using JMP 247

Review Practice Problems 247

Chapter 8 | Estimation of Population Parameters 251

8.1 Introduction 252

8.2 Point Estimators for the Population Mean and Variance 252

8.2.1 Properties of Point Estimators 253

8.2.2 Methods of Finding Point Estimators 256

8.3 Interval Estimators for the Mean μ of a Normal Population 262

8.3.1 σ² Known 262

8.3.2 σ² Unknown 264

8.3.3 Sample Size Is Large 266

8.4 Interval Estimators for the Difference of Means of Two Normal Populations 272

8.4.1 Variances Are Known 272

8.4.2 Variances Are Unknown 273

8.5 Interval Estimators for the Variance of a Normal Population 280

8.6 Interval Estimator for the Ratio of Variances of Two Normal Populations 284

8.7 Point and Interval Estimators for the Parameters of Binomial Populations 288

8.7.1 One Binomial Population 288

8.7.2 Two Binomial Populations 290

8.8 Determination of Sample Size 294

8.8.1 One Population Mean 294

8.8.2 Difference of Two Population Means 295

8.8.3 One Population Proportion 296

8.8.4 Difference of Two Population Proportions 296

8.9 Some Supplemental Information 298

8.10 A Case Study 298

8.11 Using JMP 299

Review Practice Problems 299

Chapter 9 | Hypothesis Testing 307

9.1 Introduction 308

9.2 Basic Concepts of Testing a Statistical Hypothesis 308

9.2.1 Hypothesis Formulation 308

9.2.2 Risk Assessment 310

9.3 Tests Concerning the Mean of a Normal Population Having Known Variance 312

9.3.1 Case of a One-Tail (Left-Sided) Test 312

9.3.2 Case of a One-Tail (Right-Sided) Test 316

9.3.3 Case of a Two-Tail Test 317

9.4 Tests Concerning the Mean of a Normal Population Having Unknown Variance 324

9.4.1 Case of a Left-Tail Test 324

9.4.2 Case of a Right-Tail Test 326

9.4.3 The Two-Tail Case 326

9.5 Large Sample Theory 330

9.6 Tests Concerning the Difference of Means of Two Populations Having Distributions with Known Variances 332

9.6.1 The Left-Tail Test 332

9.6.2 The Right-Tail Test 333

9.6.3 The Two-Tail Test 334

9.7 Tests Concerning the Difference of Means of Two Populations Having Normal Distributions with Unknown Variances 339

9.7.1 Two Population Variances Are Equal 339

9.7.2 Two Population Variances Are Unequal 342

9.7.3 The Paired t-Test 344

9.8 Testing Population Proportions 349

9.8.1 Test Concerning One Population Proportion 349

9.8.2 Test Concerning the Difference between Two Population Proportions 351

9.9 Tests Concerning the Variance of a Normal Population 355

9.10 Tests Concerning the Ratio of Variances of Two Normal Populations 358

9.11 Testing of Statistical Hypotheses Using Confidence Intervals 362

9.12 Sequential Tests of Hypotheses 367

9.12.1 A One-Tail Sequential Testing Procedure 367

9.12.2 A Two-Tail Sequential Testing Procedure 371

9.13 Case Studies 374

9.14 Using JMP 375

Review Practice Problems 375

PART II

Chapter 10 | Elements of Reliability Theory 389

10.1 The Reliability Function 390

10.1.1 The Hazard Rate Function 391

10.1.2 Employing the Hazard Function 398

10.2 Estimation: Exponential Distribution 399

10.3 Hypothesis Testing: Exponential Distribution 406

10.4 Estimation: Weibull Distribution 407

10.5 Case Studies 414

10.6 Using JMP 416

Review Practice Problems 416

Chapter 11 | Statistical Quality Control—Phase I Control Charts 419

11.1 Basic Concepts of Quality and Its Benefits 420

11.2 What a Process Is and Some Valuable Tools 420

11.2.1 Check Sheet 422

11.2.2 Pareto Chart 422

11.2.3 Cause-and-Effect (Fishbone or Ishikawa) Diagram 425

11.2.4 Defect Concentration Diagram 427

11.3 Common and Assignable Causes 427

11.3.1 Process Evaluation 427

11.3.2 Action on the Process 428

11.3.3 Action on Output 428

11.3.4 Variation 428

11.4 Control Charts 429

11.4.1 Preparation for Use of Control Charts 430

11.4.2 Benefits of a Control Chart 431

11.4.3 Control Limits Versus Specification Limits 433

11.5 Control Charts for Variables 434

11.5.1 Shewhart X̄ and R Control Charts 434

11.5.2 Shewhart X̄ and R Control Charts When Process Mean μ and Process Standard Deviation σ Are Known 440

11.5.3 Shewhart X̄ and S Control Charts 441

11.6 Control Charts for Attributes 448

11.6.1 The p Chart: Control Chart for the Fraction of Nonconforming Units 449

11.6.2 The p Chart: Control Chart for the Fraction Nonconforming with Variable Sample Sizes 454

11.6.3 The np Control Chart: Control Chart for the Number of Nonconforming Units 456

11.6.4 The c Control Chart 458

11.6.5 The u Control Chart 461

11.7 Process Capability 468

11.8 Case Studies 470

11.9 Using JMP 472

Review Practice Problems 472

Chapter 12 | Statistical Quality Control—Phase II Control Charts 479

12.1 Introduction 480

12.2 Basic Concepts of CUSUM Control Chart 480

12.3 Designing a CUSUM Control Chart 483

12.3.1 Two-Sided CUSUM Control Chart Using a Numerical Procedure 484

12.3.2 The Fast Initial Response (FIR) Feature for CUSUM Control Chart 489

12.3.3 The Combined Shewhart–CUSUM Control Chart 492

12.3.4 The CUSUM Control Chart for Controlling Process Variability 493

12.4 The Moving Average (MA) Control Chart 495

12.5 The Exponentially Weighted Moving Average (EWMA) Control Chart 499

12.6 Case Studies 504

12.7 Using JMP 505

Review Practice Problems 506

Chapter 13 | Analysis of Categorical Data 509

13.1 Introduction 509

13.2 The Chi-Square Goodness-of-Fit Test 510

13.3 Contingency Tables 517

13.3.1 The 2 × 2 Case: Parameters Known 517

13.3.2 The 2 × 2 Case with Unknown Parameters 519

13.3.3 The r × s Contingency Table 521

13.4 Chi-Square Test for Homogeneity 525

13.5 Comments on the Distribution of the Lack-of-Fit Statistics 528

13.6 Case Studies 529

Review Practice Problems 531

Chapter 14 | Nonparametric Tests 537

14.1 Introduction 537

14.2 The Sign Test 538

14.2.1 One-Sample Test 538

14.2.2 The Wilcoxon Signed-Rank Test 541

14.2.3 Two-Sample Test 543

14.3 Mann–Whitney (Wilcoxon) W Test for Two Samples 548

14.4 Runs Test 551

14.4.1 Runs Above and Below the Median 551

14.4.2 The Wald–Wolfowitz Run Test 553

14.5 Spearman Rank Correlation 556

14.6 Using JMP 559

Review Practice Problems 559

Chapter 15 | Simple Linear Regression Analysis 565

15.1 Introduction 566

15.2 Fitting the Simple Linear Regression Model 567

15.2.1 Simple Linear Regression Model 567

15.2.2 Fitting a Straight Line by Least Squares 569

15.2.3 Sampling Distribution of the Estimators of Regression Coefficients 573

15.3 Unbiased Estimator of σ² 578

15.4 Further Inferences Concerning Regression Coefficients (β₀, β₁), E(Y), and Y 580

15.4.1 Confidence Interval for β₁ with Confidence Coefficient (1 − α) 580

15.4.2 Confidence Interval for β₀ with Confidence Coefficient (1 − α) 581

15.4.3 Confidence Interval for E(Y|X) with Confidence Coefficient (1 − α) 582

15.4.4 Prediction Interval for a Future Observation Y with Confidence Coefficient (1 − α) 585

15.5 Tests of Hypotheses for β₀ and β₁ 590

15.5.1 Test of Hypotheses for β₁ 590

15.5.2 Test of Hypotheses for β₀ 590

15.6 Analysis of Variance Approach to Simple Linear Regression Analysis 596

15.7 Residual Analysis 601

15.8 Transformations 609

15.9 Inference About ρ 615

15.10 A Case Study 618

15.11 Using JMP 619

Review Practice Problems 619

Chapter 16 | Multiple Linear Regression Analysis 627

16.1 Introduction 628

16.2 Multiple Linear Regression Models 628

16.3 Estimation of Regression Coefficients 632

16.3.1 Estimation of Regression Coefficients Using Matrix Notation 633

16.3.2 Properties of the Least-Squares Estimators 635

16.3.3 The Analysis of Variance Table 636

16.3.4 More Inferences about Regression Coefficients 639

16.4 Multiple Linear Regression Model Using Quantitative and Qualitative Predictor Variables 646

16.4.1 Single Qualitative Variable with Two Categories 646

16.4.2 Single Qualitative Variable with Three or More Categories 647

16.5 Standardized Regression Coefficients 658

16.5.1 Multicollinearity 660

16.5.2 Consequences of Multicollinearity 661

16.6 Building Regression Type Prediction Models 662

16.6.1 First Variable to Enter into the Model 662

16.7 Residual Analysis and Certain Criteria for Model Selection 665

16.7.1 Residual Analysis 665

16.7.2 Certain Criteria for Model Selection 667

16.8 Logistic Regression 672

16.9 Case Studies 676

16.10 Using JMP 677

Review Practice Problems 678

Chapter 17 | Analysis of Variance 685

17.1 Introduction 686

17.2 The Design Models 686

17.2.1 Estimable Parameters 686

17.2.2 Estimable Functions 688

17.3 One-Way Experimental Layouts 689

17.3.1 The Model and Its Analysis 689

17.3.2 Confidence Intervals for Treatment Means 695

17.3.3 Multiple Comparisons 700

17.3.4 Determination of Sample Size 706

17.3.5 The Kruskal–Wallis Test for One-Way Layouts (Nonparametric Method) 707

17.4 Randomized Complete Block Designs 710

17.4.1 The Friedman Fr-Test for Randomized Complete Block Design (Nonparametric Method) 718

17.4.2 Experiments with One Missing Observation in an RCB-Design Experiment 719

17.4.3 Experiments with Several Missing Observations in an RCB-Design Experiment 719

17.5 Two-Way Experimental Layouts 722

17.5.1 Two-Way Experimental Layouts with One Observation per Cell 724

17.5.2 Two-Way Experimental Layouts with r > 1 Observations per Cell 725

17.5.3 Blocking in Two-Way Experimental Layouts 734

17.5.4 Extending Two-Way Experimental Designs to n-Way Experimental Layouts 734

17.6 Latin Square Designs 736

17.7 Random-Effects and Mixed-Effects Models 742

17.7.1 Random-Effects Model 742

17.7.2 Mixed-Effects Model 744

17.7.3 Nested (Hierarchical) Designs 746

17.8 A Case Study 752

17.9 Using JMP 753

Review Practice Problems 753

Chapter 18 | The 2ᵏ Factorial Designs 765

18.1 Introduction 766

18.2 The Factorial Designs 766

18.3 The 2ᵏ Factorial Design 768

18.4 Unreplicated 2ᵏ Factorial Designs 776

18.5 Blocking in the 2ᵏ Factorial Design 782

18.5.1 Confounding in the 2ᵏ Factorial Design 783

18.5.2 Yates’s Algorithm for the 2ᵏ Factorial Designs 788

18.6 The 2ᵏ Fractional Factorial Designs 790

18.6.1 One-half Replicate of a 2ᵏ Factorial Design 790

18.6.2 One-quarter Replicate of a 2ᵏ Factorial Design 795

18.7 Case Studies 799

18.8 Using JMP 801

Review Practice Problems 801

Chapter 19 | Response Surfaces

This chapter is not included in the text but is available for download via the book’s website: www.wiley.com/go/statsforengineers

Appendices 807

Appendix A | Statistical Tables 809

Appendix B | Answers to Selected Problems 845

Appendix C | Bibliography 863

Index 867


“Considering the size and wealth of information that the book is providing for a one- or two-semester undergraduate course sequence, it is indeed reasonably priced and should be a strong candidate for serious consideration of a softcover edition in the course of time.”  (Journal of Statistical Theory and Practice, 10 February 2014)
