# Introduction
The main objective of this paper is to describe the history of the evolution of Artificial Intelligence over time. The past two decades have shown tremendous progress in the application of artificial intelligence (AI), including in the image-based medical specialties of radiology, dermatology, ophthalmology, and pathology. First, we explore how AI began about 65 years ago and how it has progressed in various disciplines, including healthcare/medicine and particularly pathology. Second, we review books available on AI in general as well as on AI in medicine and in pathology. Next, we define the necessary AI terms and the various AI algorithms that must gain acceptance by physicians so that they can assist patients more efficiently. We then review AI literature pertinent to healthcare and pathology. Finally, the various challenges and barriers AI faces for use in pathological applications are discussed.
# II.
# AI Theory in Textbooks
In 1955, the term artificial intelligence (AI) was coined by McCarthy et al. for the subdivision of computer science in which machine-based methodologies are used to make predictions that imitate what human intellect would do in the identical situation. 1 The origin of Digital Pathology (DP) dates to 1966, when Prewitt et al. photographed images from a microscopic field of a blood smear and then transformed the information into a matrix of optical density numbers for mechanized image analysis. 2 The AI field is built on statistics, and Vapnik provides a more detailed description of statistical learning theory in his two books. 3,4 In 2003, Russell and Norvig introduced the idea of an intelligent agent that mechanically plans and performs a sequence of activities to attain a goal as a novel form of AI. 5 Goodfellow et al.'s comprehensive textbook on AI is written by some of the most innovative and prolific researchers in the field. 6 Kelleher explains how deep learning is useful in understanding big data and covers the methodologies of autoencoders, recurrent neural networks, generative adversarial networks, gradient descent, and backpropagation. 7
# a) AI books in medicine and pathology
There are many excellent textbooks on AI's applications in medicine, including note taking, drug development, remote patient monitoring, surgery, laboratory discovery, and healthcare delivery. 8,9,10,11,12,13 In this section our emphasis is on a review of the latest textbooks on AI in pathology. Sucaet et al. in Digital Pathology (DP) discuss how the technology has seen tremendous growth in its applications over the past decade. They observe that DP offers the hope of providing pathology consulting and educational services to underserved areas of the world that would otherwise not experience a high level of service. 14 In Artificial Intelligence and Deep Learning in Pathology, Cohen observes how recent advances in computational algorithms, and the arrival of whole slide imaging (WSI) as a platform for combining AI, are assisting both diagnosis and prognosis by transforming pattern recognition and image interpretation. The book focuses on various AI applications in pathology and covers the important topics of WSI for 2D and 3D analysis, principles of image analysis, and deep learning. 15 Holzinger et al. in their book describe why AI and Machine Learning (ML) are very promising in the disciplines of DP, radiology, and dermatology.
They observe that in some cases Deep Learning (DL) even exceeds human performance, but stress that a human expert should nonetheless always verify the outcome. The authors cover 'biobanks,' which offer large collections of high-quality, well-labeled samples covering a variety of diseases in different organs, since big data is required for training. 16 Belciug in his book covers theoretical concepts and practical techniques of AI and its applications in cancer management. The author describes the impactful role of AI during diagnosis and how it can help doctors make better decisions, including AI tools that help pathologists identify exact types of cancer and assist surgeons and oncologists. The book discusses over 20 cancer examples in which AI was used and, in particular, the AI algorithms utilized for them. 17
# III.
# AI Basics
In this section we cover learning theory, important AI terminology, and algorithms for machine learning.
# a) Learning theory and machine learning
Vapnik introduces the model of learning from examples using three components: a) a generator of random vectors, b) a supervisor that yields an output vector for each input vector, and c) a learning machine capable of implementing a set of functions. The next step is the risk minimization problem: to find the best available approximation to the supervisor's response, one measures the difference between the supervisor's response to a given input and the response provided by the learning machine. 18 In 2015, Deo's review of ML found that only a few papers out of thousands applying ML algorithms to medical data have contributed meaningfully to clinical care, unlike how ML has been impactful in other industries. 19
# E. Support Vector Machine (SVM)
The SVM algorithm classifies available data by defining a hyperplane that best differentiates two groups. The separation between the two groups is maximized by widening the margin on either side of the hyperplane, and the hyperplane with the greatest possible margin is then chosen for the evaluation. SVM can capture a nonlinear relationship using a kernel function but has a tendency to overfit. 27
# F. Naive Bayes
The naive Bayes approach assumes that the features under evaluation are independent of each other. For simple tasks it can produce good results, but in general its performance is inferior to the other ML algorithms. 28
# G. Decision Tree and Boosted Tree
A decision tree comprises a root, nodes, branches, and leaves. A node is where a characteristic is tested, and a branch carries the outcome of that test. The decision tree provides a set of rules that defines the path from the root all the way to the leaves. A gradient boosting machine uses weak predictors (decision trees) that are boosted, which provides a better-performing model (a boosted tree). This method can work with unbalanced data sets but may produce overfitting. 29
# H. Random Forest (RF)
Breiman shows how RFs are an effective tool for accurate classification and regression, as they avoid overfitting due to the Law of Large Numbers. 30 However, they might be more time consuming and less efficient than nonparametric (SVM and k-NN) and parametric (logistic regression) modeling.
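For readers less familiar with these algorithms, the short sketch below is a minimal illustration only, not code from any of the cited studies; it assumes scikit-learn is available and uses a synthetic dataset as a stand-in for tabular pathology features, fitting an SVM and a random forest and comparing their held-out accuracy.

```python
# Minimal sketch (not from the cited studies): comparing two of the classifiers
# described above on a synthetic binary-classification dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Hypothetical stand-in for tabular pathology features (e.g., nuclear size, texture).
X, y = make_classification(n_samples=1000, n_features=20, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# SVM with an RBF kernel can capture non-linear boundaries but may overfit if C/gamma are mis-set.
svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)

# Random forest averages many decision trees, which tempers overfitting (Breiman).
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("SVM accuracy:", accuracy_score(y_test, svm.predict(X_test)))
print("Random forest accuracy:", accuracy_score(y_test, rf.predict(X_test)))
```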
# IV.
# AI Research in Pathology
In this section we cover research on the topics of the origins of image analysis, the computational pathologist, machine learning in pathology, digital pathology, convolutional neural networks in pathology, and other AI applications in cancer.
# b) Computational pathologist
The computational pathologist pipeline of Beck et al. comprised three phases. First, their processing steps included a) separating the tissue from its background, b) partitioning the image into smaller regions with a consistent appearance known as superpixels, c) finding nuclei inside the superpixels, and d) constructing cytoplasmic and nuclear features within the superpixels. Next, within every superpixel they estimated the size, shape, intensity, and texture of the superpixel and its neighbors. Afterwards, to create more biologically significant features, they categorized superpixels as either epithelium or stroma. They used an ML-based approach of L1-regularized logistic regression, in which they hand-annotated superpixels from 158 images and utilized those images to train the classifier. The resulting classifier, composed of 31 features, achieved a classification accuracy of 89% on held-out data. Using a series of relational features, the authors produced a set of 6,642 features per image. They built the prognostic model to predict survival based on images from patients who were alive 5 years after surgery and from patients who had died within 5 years of surgery. After constructing the model, they applied it to a validation set of breast carcinoma images, which were not part of model creation, to categorize patients as either low or high risk of dying at 5 years. Using a bootstrap analysis of the data set, the authors obtained, for each of the 6,642 features, a 95% confidence interval for the feature's coefficient estimate (a brief illustrative sketch of this type of model appears at the end of this section). 32
# c) Machine learning in pathology
To achieve an optimal supervised machine learning model, Rashidi et al. proposed four questions: i) Does the endeavor address a necessity?, ii) Is enough data of the appropriate type, scrutinized by clinical specialists, accessible?, iii) Which machine learning method should be utilized?, and iv) Are the trained ML models appropriate and general enough when used with a new data set? The authors support a balanced approach using clinical trial data merged with real-world data to optimize ML training. They recommend that pathologists/laboratorians be sufficiently familiar with the available modeling options in order to make meaningful contributions within the team. 33 Moxley-Wyles et al. introduce the basics of AI in pathology and discuss the future and challenges for the discipline, with a focus on surgical pathology rather than cytology. The authors foresee AI's potential to derive novel biological insights by identifying subtle cell changes, not recognized by pathologists on the haematoxylin and eosin (H&E) stain, that can predict specific mutations within the cell. Such AI predictions have been demonstrated for the Speckle-Type POZ Protein (SPOP) mutation in prostate cancer, BRAF in melanoma, and many mutations in lung adenocarcinoma. They observe that with robustly validated AI tools, second opinions from other pathologists could become unnecessary. The authors expect AI's potential assistance in predicting outcomes and responses to treatments after regulatory approvals. However, in their opinion, the use of artificial intelligence in diagnostic practice remains rare due to some of its limits, including regulatory and validation issues as well as high cost. 34
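Referring back to Beck et al.'s use of L1-regularized logistic regression with bootstrap confidence intervals, the following is a minimal illustrative sketch of that type of model, assuming scikit-learn and synthetic data; it does not reproduce their features, images, or procedure.

```python
# Illustrative sketch only: an L1-regularized logistic regression with a bootstrap
# confidence interval for one coefficient, in the spirit of the pipeline described
# above (the actual features, data, and procedure are not reproduced here).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=50, n_informative=10, random_state=1)

# The L1 penalty drives uninformative coefficients to zero (sparse feature selection).
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
print("non-zero coefficients:", np.sum(model.coef_ != 0))

# Bootstrap: refit on resampled data and collect the coefficient of feature 0.
rng = np.random.default_rng(0)
coefs = []
for _ in range(200):
    idx = rng.integers(0, len(y), size=len(y))
    m = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X[idx], y[idx])
    coefs.append(m.coef_[0, 0])
low, high = np.percentile(coefs, [2.5, 97.5])
print(f"95% bootstrap CI for feature 0 coefficient: [{low:.3f}, {high:.3f}]")
```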
Li et al. used the fluorescence hyperspectral imaging technique to acquire spectral images for the early diagnosis of gastric cancer. They combined DL with spectral-spatial classification techniques, utilizing 120 fresh tissue specimens with a diagnosis established by histopathological assessment. The method was used to detect and extract 'spectral + spatial' features to create an early cancer diagnosis model. It achieved an accuracy of 96.5%, specificity of 96%, and sensitivity of 96.3% across the non-precancerous lesion, precancerous lesion, and cancer groups. 35
# d) Digital Pathology (DP)
Hartman et al. enumerate how DP is more advantageous than traditional pathology based on a 'physical slide on a physical microscope.' Tool development has benefited from 24 publications based on public challenges in specific pathological diagnostic tasks. However, there is a true disconnect between the types of organs studied in these public challenges and the large volume of specimens typically seen in clinical practice. Even though dermatology and gastrointestinal specimens make up a majority of samples in pathology laboratories, so far there are no pathology-based dermatology public challenges and only a few in the gastrointestinal field. This mismatch is a key reason for the limited wider adoption of AI in the pathology field. 36 Niazi et al. developed the generation of synthetic digital slides that can be used for educational purposes to train future pathologists. Their conditional generative adversarial network approach contains two main components, the generator and the discriminator. The generator creates fake stained images, while the discriminator tries to catch them. When asked to distinguish between 15 real and 15 synthetic images, three pathologists and two image analysts achieved an accuracy of only 47.3%. The authors do see a role for AI in quality assurance, improving the pathologist's performance with the use of intelligent deep learning and AI tools. 37 The slide digitization process in DP does, in some instances, create artifacts that are 'out-of-focus' (OOF). OOF is typically noticed only after a careful review, which then requires whole-slide rescanning, as manual screening for OOF affecting only parts of a slide is not feasible. Kohlberger et al. developed ConvFocus using a refined semi-synthetic OOF data generation process; it was assessed using seven digitized slides covering three dissimilar tissue types and three dissimilar stain types. For 514 separate regions representing 37.7K 35 μm × 35 μm image patches, and 21 digitized "z-stack" whole slide images containing known out-of-focus patterns, ConvFocus achieved Spearman rank coefficients of 0.81 and 0.94 on two separate scanners, and it replicated the expected out-of-focus patterns from z-stack scanning. More importantly, the authors observed a decrease in accuracy with increasing OOF. 38 Hartman et al. investigated a US healthcare organization with 20+ hospitals, 500 outpatient sites, and international affiliations comprising one hospital in Italy and a laboratory in China. The organization employs 100+ pathologists, performs consultations by telepathology from the Chinese laboratory, and has scanned over 40,000 slides with digital pathology. Their conclusion is that attaining successful DP requires a combination of pre-imaging adjustments, integrated software, and post-imaging evaluations. 39
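As a hedged illustration of the semi-synthetic OOF idea behind ConvFocus (not the authors' actual pipeline), the sketch below blurs an in-focus patch with increasing Gaussian sigma to create graded out-of-focus training examples; the patch and the grading scheme are assumptions for illustration only.

```python
# Simplified illustration (not Kohlberger et al.'s pipeline): generating
# semi-synthetic "out-of-focus" training patches by blurring in-focus patches,
# so a model can be trained to grade focus quality.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
patch = rng.random((128, 128, 3))           # stand-in for an in-focus RGB image patch

samples = [(patch, 0)]                      # grade 0: the original, sharp patch
for grade, sigma in enumerate([1.0, 2.0, 4.0, 8.0], start=1):
    blurred = gaussian_filter(patch, sigma=(sigma, sigma, 0))  # blur spatially, not across channels
    samples.append((blurred, grade))        # (image, focus-quality label) pairs for training

print(len(samples), "synthetic patches with focus grades 0 (sharp) to 4 (most blurred)")
```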
Parwani observed that attaining DP in a laboratory requires an essential alteration in how tissue is handled and how the workflow is harmonized before the laboratory has truly attained a digital workflow; it is more than converting the workflow to digital and acquiring WSI scanners. He enumerates key advantages the digital workflow provides, namely a reduction in errors in DP and easier access to a second opinion. 40 In DP, problems of color variation in tissue appearance arise due to disparities in the preparation of tissues, differences in stain reactivity between batches and manufacturers, user and/or protocol dissimilarities, and the use of scanners from diverse vendors. Khan et al. present a novel preprocessing approach to histopathology image stain normalization that uses a representation derived from color deconvolution and a non-linear mapping of a source image to a target image. Color deconvolution obtains stain intensity values when the stain matrix, which describes how the colour is changed by the stain intensity, is made available. Instead of using the standard stain matrices, which might be unsuitable for a given image, they recommend using a colour-based classifier incorporating a new stain colour descriptor to compute an image-specific stain matrix. 41 Janowczyk et al. developed a tutorial focusing on the critical components needed by DP experts to automate tasks such as grading or investigating clinical hypotheses such as prognosis prediction. The authors examined seven use cases of (i) nuclei segmentation, (ii) epithelium segmentation, (iii) tubule segmentation, (iv) lymphocyte detection, (v) mitosis detection, (vi) IDC detection, and (vii) lymphoma classification, and demonstrated how DL can be applied to the most common image analysis tasks in DP using the open-source framework Caffe. They further subdivided the seven tasks into three categories of detection (mitotic events, lymphocytes), segmentation (nuclei, epithelium, tubules), and tissue classification (IDC, lymphoma subtypes), as the approaches used are similar within each analysis category. Evaluation with over 1,200 DP images produced the following: (i) nuclei segmentation with an F-score of 0.83, (ii) epithelium segmentation with an F-score of 0.84, (iii) tubule segmentation with an F-score of 0.83, (iv) lymphocyte detection with an F-score of 0.90, (v) mitosis detection with an F-score of 0.53, (vi) invasive ductal carcinoma detection with an F-score of 0.77, and (vii) lymphoma classification with a classification accuracy of 0.97. In many of these cases the results compare favorably with those of modern feature-based classification approaches. 42
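To make the color deconvolution step that Khan et al. build on more concrete, the following minimal sketch uses the standard H&E stain matrix shipped with scikit-image rather than their image-specific, classifier-derived matrix; the input image is a random stand-in.

```python
# Minimal sketch of stain separation by color deconvolution (not Khan et al.'s method):
# the standard H&E(D) stain matrix bundled with scikit-image is used here.
import numpy as np
from skimage.color import rgb2hed, hed2rgb

rng = np.random.default_rng(0)
rgb = rng.random((256, 256, 3))             # stand-in for an H&E-stained image tile

hed = rgb2hed(rgb)                          # separate into haematoxylin, eosin, DAB channels
haematoxylin = hed[..., 0]                  # per-pixel haematoxylin stain intensity

# Reconstruct an RGB view of the haematoxylin channel alone.
h_only = np.zeros_like(hed)
h_only[..., 0] = haematoxylin
print(hed2rgb(h_only).shape)
```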
To further guide surgical decisions, intraoperative frozen sections are useful for rapid pathology-based diagnosis. However, the quality of frozen sections is lower than that of formalin-fixed, paraffin-embedded tissue, 43 and they must be diagnosed within 20 min of receipt. In current clinical practice, thyroid nodule surgeries are the most common procedures requiring intraoperative consultation. However, using the traditional approach, the sensitivity for diagnosing thyroid nodules from frozen sections is around 75%. 44 Li et al. investigated for the first time whether a 'patch-based diagnostic system' with DL methodology can diagnose thyroid nodules from intraoperative frozen sections. They approached the problem as a three-category classification problem with benign, uncertain, and malignant classes. In order to reduce the overall time cost, they first applied tissue localization in the whole-slide diagnosis to locate thyroid tissue regions. This rule-based system reflects the conservative diagnostic manner of practical thyroid frozen section diagnosis. Their computerized diagnostic technique demonstrated a precision for malignant and benign thyroid nodules of 96.7% and 95.3%, respectively, and 100% sensitivity for the uncertain category. Moreover, the methodology diagnosed a typical whole slide image in less than one minute. 45 Paeng's presentation covers the limitations of pathology and the relative advantages of DP in reproducibility, accuracy, and workload reduction. Key applications of DP are a) tumor proliferation score prediction for breast resections and b) Gleason score prediction for prostate biopsies. The author's method scored best in the Tumor Proliferation Assessment Challenge. He achieved Gleason score prediction of 83% for core-level performance and discussed how to handle gigapixel images, quality variation between slides, and ambiguous ground truth. 46
# e) Convolutional Neural Network (CNN) in pathology
Hegde et al. introduced 'SMILY' (Similar Medical Images Like Yours), a DL-based reverse image search tool for histopathology images. The tool follows these steps: a) create a database of image patches and a numerical representation, called the embedding, of each patch's image contents; b) calculate the embedding using a CNN; c) SMILY calculates the embedding of the selected query image and matches it efficiently with those in the database; and d) SMILY yields the k most similar patches, where k is customizable. To create the database the authors used images from TCGA, with the evaluations utilizing 127K image patches from 45 slides, while the query set included 22.5K patches from an additional 15 slides. The CNN, instead of being trained on large, pixel-annotated datasets of histopathology images, was trained using a dataset of images of people, animals, and man-made and natural objects. In the assessment of prostate specimens for finding similar histologic features, SMILY scored 62.1% on average, considerably higher than the random search score of 26.8% (p-value < 0.001). SMILY's score for histologic feature match, when queried from multiple organs, was also appreciably higher than random, at 57.8% vs. 18.3% (p-value < 0.001). The authors claim that SMILY can be used as a general-purpose tool in multiple applications of diagnosis, research, and education, even though it will have lower accuracy than an application-specific tool. 47
# f) AI in cancer applications
AI in breast cancer: For prognostic and predictive value, Stålhammar et al. categorized breast cancers using the four gene expression subtypes 'Luminal A,' 'Luminal B,' 'HER2-enriched,' and 'Basal-like.' The authors examined 3 cohorts of primary breast carcinoma specimens totaling 436 (with up to 28 years of survival) and scored them for ER, PR, HER2, and Ki67 rank by Digital Image Analysis (DIA) and manually. The DIA approach beat manual scoring in both sensitivity and specificity for the Luminal B subtype, and achieved slightly superior concordance and Cohen's κ agreement in reference to PAM50 gene expression assays. The manual biomarker and DIA approaches were close to each other in a comparison of Cox regression hazard ratios. In addition, DIA fared better in terms of Spearman's rank-order correlations and the prognostic value of Ki67 scores in terms of likelihood ratio, thus adding appreciably more prognostic information to the manual scores. The authors concluded that overall the DIA approach was clearly a better substitute for manual biomarker scoring. 48 Manual identification of the existence and extent of breast carcinoma by a pathologist is critical for patient management and tumor staging, including assessment of treatment response, but it is subject to inter- and intra-reader variability. As a decision support tool, any computerized technique needs to be robust to data acquisition from different sources, different scanners, and different staining/cutting approaches. Cruz-Roa et al.'s CNN approach trained the classifier using 400 exemplars from various sites and validated it with 200 cases from TCGA data. Their approach attained a Dice coefficient of 75.9%, a PPV of 71.6%, and an NPV of 96.8% in pixel-by-pixel evaluation against manually annotated regions of invasive ductal carcinoma. 49 Autoencoder (AE) use in breast cancer: An AE can be described as an ANN with a symmetric construction in which the middle layers encode the input data and then aim to reconstruct a form of the input at the output layer while avoiding a direct copy of the data through the network. 50 Macías-García et al. developed a framework to process DNA methylation data to obtain meaningful information from genes pertinent to breast cancer recurrence and tested it using The Cancer Genome Atlas (TCGA) data portal. The method is based on AEs that preprocess DNA methylation data and generate AE features to characterize breast cancer recurrence, and it demonstrably improved recurrence prediction. 51
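A minimal autoencoder sketch matching the symmetric encode-then-reconstruct description above is shown below; it assumes PyTorch, and its dimensions and data are illustrative stand-ins rather than the setup used by Macías-García et al.

```python
# Minimal autoencoder sketch (PyTorch assumed): a symmetric network whose middle
# layer encodes the input and whose output layer tries to reconstruct it.
import torch
from torch import nn

class AutoEncoder(nn.Module):
    def __init__(self, n_inputs: int = 200, n_latent: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_inputs, 64), nn.ReLU(), nn.Linear(64, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(), nn.Linear(64, n_inputs))

    def forward(self, x):
        z = self.encoder(x)              # compressed representation (the learned features)
        return self.decoder(z)           # reconstruction of the input

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(32, 200)                  # stand-in for, e.g., DNA methylation feature vectors
for _ in range(5):                       # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(x), x)          # reconstruction loss: output should match input
    loss.backward()
    optimizer.step()
print("final reconstruction loss:", loss.item())
```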
AI in cervical cancer: Of the half million annual cervical cancer cases in the world, about 80% occur in low- and middle-income nations. Hu et al. followed over 9,000 women ages 18 to 94 from Costa Rica over a period of seven years from 1993 to 2000, identifying cancers up to 18 years later. They developed a DL-based visual evaluation algorithm, based on digitized cervical images taken with a fixed-focus camera (cervicography), which automatically identified cervical precancer or cancer. The DL method recognized cumulative precancer and cancer cases with a higher AUC of 0.91, compared with 0.69 for the original cervigram interpretation and 0.71 for conventional cytology. The authors therefore recommend the use of automated visual evaluation of cervical images from contemporary digital cameras. 52 AI in prostate cancer: Ström et al. trained deep neural networks on digitized prostate biopsy slides; the resulting networks were tested on an independent set of 1,631 biopsies from 246 men from STHLM3 for the presence, extent, and Gleason grade of malignant tissue, and on an external dataset of 330 biopsies from 73 men. They also compared grading performance against 23 pathology experts on 87 biopsies. The AI networks attained an AUC of 0.997 for differentiating between benign and malignant biopsy cores on the independent dataset and 0.986 on the external verification data. The correlation between carcinoma length predicted by the AI networks and that given by the pathology experts was 0.96 for the independent data and 0.87 for the external verification dataset. The AI methodology for assigning Gleason grades attained a mean pairwise kappa of 0.62, which was within the 0.60-0.73 range of values for the pathology experts. The authors recommend using the AI approach to reduce the evaluation of benign biopsies and to automate the work of determining cancer length in positive biopsy cores. This AI approach, by standardizing grading, can also be utilized as a second opinion in cancer assessment. 53
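The two figures of merit quoted above, AUC for benign versus malignant discrimination and pairwise kappa for grading agreement, can be computed as in the short hedged sketch below (scikit-learn assumed; the labels and scores are made up for illustration).

```python
# Small sketch of the evaluation metrics used above, on made-up labels and scores.
from sklearn.metrics import roc_auc_score, cohen_kappa_score

y_true  = [0, 0, 1, 1, 1, 0, 1, 0]                     # 0 = benign core, 1 = malignant core
y_score = [0.1, 0.3, 0.9, 0.8, 0.7, 0.2, 0.95, 0.4]    # model probability of malignancy
print("AUC:", roc_auc_score(y_true, y_score))

grades_ai          = [3, 4, 4, 5, 3, 3]                # e.g., Gleason grades assigned by a model
grades_pathologist = [3, 4, 5, 5, 3, 4]                # grades assigned by a pathologist
print("Cohen's kappa:", cohen_kappa_score(grades_ai, grades_pathologist))
```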
AI in stomach and colon cancer: Iizuka et al. trained CNNs and RNNs on biopsy histopathology WSIs of the stomach and colon to classify them into adenocarcinoma, adenoma, and non-neoplastic. They gathered stomach and colon datasets consisting of 4,128 and 4,036 WSIs, respectively, which were then manually annotated by pathologists. Using millions of tiles extracted from the WSIs, the authors trained a CNN based on the Inception-V3 architecture for each organ to categorize a tile into one of the three classification labels. Next they aggregated the predictions from all the tiles in a WSI to obtain a final classification, using two approaches: an RNN and max pooling (a minimal sketch of the max-pooling approach follows later in this section). The models were evaluated on three independent test sets each and achieved Areas Under the Curve (AUCs) of 0.97 and 0.99 for gastric adenocarcinoma and adenoma, respectively, and 0.96 and 0.99 for colonic adenocarcinoma and adenoma, respectively. Further, they evaluated the stomach model against a group of pathology experts and medical students who were not involved in labeling the training set, using a test set of 45 images (15 WSIs of adenoma, 15 of adenocarcinoma, and 15 of non-neoplastic lesions). The classification time for a whole slide image using the trained model ranged from 5 to 30 seconds. The average accuracy of diagnoses achieved by the pathologists was 85.9% and by the medical students 41.2%, while the stomach model achieved an accuracy of 95.6% in a 30-second assessment. 54 AI in lung cancer: Kriegsmann et al.'s evaluation of CNNs included the classification of the most common lung carcinoma subtypes of pulmonary adenocarcinoma (ADC), pulmonary squamous cell carcinoma (SqCC), and small-cell lung cancer (SCLC). To validate the appropriateness of the outcomes, skeletal muscle was also included in the investigation, as histologically the difference between skeletal muscle and the three tumor entities is unambiguous. They assembled a cohort of 80 ADC, 80 SqCC, 80 SCLC, and 30 skeletal muscle specimens. The InceptionV3, VGG16, and InceptionResNetV2 architectures were trained to categorize the four entities of interest. The InceptionV3-based CNN model produced the highest classification accuracy and hence was used for the classification of the test set. The final model achieved an image patch classification accuracy of 88% in the training as well as in the validation set. In the test set they achieved image patch and patient-based CNN classification accuracies of 95% and 100%, respectively. 55 To predict carcinoma in WSIs, Kanavati et al. trained a deep learning CNN based on the EfficientNet-B3 architecture, using transfer learning and weakly-supervised learning, with a training dataset of 3,554 WSIs from a single medical institution. The model was then applied to four independent test sets from distinct hospitals in order to validate its generalization on unseen data. The authors obtained excellent results differentiating lung carcinoma from non-neoplastic tissue, with elevated receiver operating characteristic AUCs of 0.98, 0.97, 0.99, and 0.98 on the four independent test sets, respectively. Of the two methodologies used to train the models, 'fully supervised learning' and 'weakly supervised learning,' the latter consistently performed better, with an improvement of 0.05 in the AUC on the test sets. 56
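Returning to the slide-level aggregation described for Iizuka et al., the sketch below illustrates the simpler of their two strategies, max pooling over tile-level class probabilities; the probabilities here are randomly generated stand-ins for real CNN outputs.

```python
# Hedged sketch of max-pooling aggregation: tile-level probabilities from a CNN are
# reduced to one whole-slide prediction by taking, per class, the maximum over tiles.
import numpy as np

rng = np.random.default_rng(0)
n_tiles = 500
classes = ["non-neoplastic", "adenoma", "adenocarcinoma"]
tile_probs = rng.dirichlet(alpha=[1.0, 1.0, 1.0], size=n_tiles)  # stand-in CNN outputs

slide_scores = tile_probs.max(axis=0)            # max pooling over all tiles, per class
predicted = classes[int(np.argmax(slide_scores))]
print("slide-level scores:", dict(zip(classes, slide_scores.round(3))))
print("slide-level prediction:", predicted)
```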
# V.
# AI -Regulation
The FDA's vision is that, with suitable regulatory oversight, Software as a Medical Device (SaMD) based on AI/ML will deliver safe and effective software functionality that can improve the quality of patient care. Their guidance for software modifications focuses on the risk to patients resulting from the software change. For a traditional application, three classes of software alterations that might necessitate a premarket submission are: a) a change that introduces a new risk, or changes an existing risk, that could produce significant harm; b) an alteration to risk controls intended to avoid substantial harm; and c) a modification that considerably affects the clinical functionality of the device. For SaMD, modifications would require a premarket submission to the FDA when the AI/ML software changes significantly, the alteration is to the device's intended use, or the alteration introduces a key change to its algorithm. The FDA to date has approved several AI/ML-based SaMD algorithms that are locked before marketing, and algorithm modifications beyond the initial approval will likely require an FDA premarket assessment. However, SaMD has the capability to learn continuously, and an alteration or modification to the algorithm made after the SaMD has learned from real-world experience might provide a significantly different output, for a given set of inputs, from the output originally approved. Therefore, AI/ML tools require a new, Total Product Life Cycle (TPLC) regulatory approach. 57
# VI.
# AI -Issues to be Resolved
Over the last 100 years, both the COVID-19 and the Spanish flu pandemics have shown disproportionate impacts on low-income patients and racial minorities. A combination of diagnostic bias and sample bias has been the culprit for global healthcare disparities. Evans argues that present diagnostic tools often fail patients who do not fit the profile of the majority. 58 Even though there is an active effort to involve females in clinical study samples, many treatment and drug recommendations are founded on findings taken from samples of Caucasian males. The author proposes, going forward, to decode the present and reshape existing practices before implementing AI, to avoid entrenching existing biases and further increasing health disparities. 59 Colling et al. propose a UK-wide strategy for AI and DP. If the requirements of proper slide image management software, integrated reporting systems, improved scanning speeds, and high-quality images for DP systems are achieved, then it will provide time- and cost-saving benefits over the traditional microscope-based pathology approach and reduce the problem of inter-observer variation. The successful introduction of AI and DP tools to the healthcare system will need proper regulatory-approved, evidence-based validation and a lowering of the resistance to collaboration between academic and industry developers. 60 Robertson et al.'s work discusses the limitations of deep learning: it works well for supervised learning but not for unsupervised learning. The deep learning approach is therefore not well suited to the discovery of novel biomarkers, which is an unsupervised learning problem. If the model is trained only on images obtained from the imaging equipment of a single vendor, it may fail to perform acceptably on images acquired from the equipment of another vendor.
They also observe the challenges of achieving a fully digital workflow, a must for deep learning, due to the high costs and the dependence on solid IT support systems. 61 Typically, training DL models requires many annotated samples belonging to dissimilar categories. However, in reality it can be hard to collect a balanced dataset for training because certain ailments have a low prevalence, causing a data imbalance problem. Studies have shown that many models that perform well on balanced datasets do not perform well on their imbalanced counterparts. 62 Most medical image datasets have this imbalance problem. One-class classification, which focuses on learning a model using examples from only a single given class, is one approach to overcoming the problem of imbalance. Gao et al. proposed a novel method that allows DL models to leverage the concept of imaging complexity to optimally learn single-class-relevant inherent imaging features. They compared the effects of perturbing operations applied to images to vary imaging complexity and boost feature learning, and their method outperformed four advanced methods. 63 Tizhoosh et al. explore problems that must be solved in order to exploit the promise of AI in computational pathology. The challenges discussed include: i) Lack of labeled or annotated data, which can be overcome by using active learning applied to labeling with public datasets; ii) Pervasive variability: an effectively infinite number of patterns, due to the presence of several tissue types (connective tissue, nervous tissue, epithelium, and muscle), that AI algorithms are required to learn; iii) Non-Boolean nature of diagnostic tasks: a binary 'yes' or 'no' is possible only in easy pathological cases and is a rarity in clinical practice; iv) Dimensionality obstacle: the use of "patching" (dividing an image into small tiles), as WSI sizes are typically larger than 50K x 50K pixels (a minimal patching sketch follows this list); v) Turing test dilemma: whether a machine can be as intelligent as a human, while an explicit Turing test for DP is not known; vi) Uni-task orientation of weak artificial intelligence: deep ANNs are designed to perform only one task, requiring multiple AIs to be trained independently for the tasks of classification, segmentation, and search; vii) Affordability of the required computational expense: adoption of DP is a challenge due to the high costs of acquisition and storage of gigapixel histopathological scans; viii) Adversarial attacks: targeted manipulation of a very small number of pixels inside an image can mislead the network, so even negligible artifacts can produce misdiagnosis; ix) Lack of transparency and interpretability, which is not acceptable to physicians, as there is no explanation of why the AI made a specific decision for a given histopathology scan; and x) Realism of AI: the pathology community has yet to buy in fully due to issues related to ease of use, financial return, and trust.
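As noted in item (iv), gigapixel slides are usually divided into tiles before any network sees them; the sketch below is a minimal, hypothetical patching helper that assumes the slide is already in memory as a NumPy array, whereas a real pipeline would read the WSI lazily (for example with OpenSlide).

```python
# Minimal sketch of "patching": splitting a large image into fixed-size tiles that a
# network can ingest. Real WSIs (>50K x 50K pixels) would be read region-by-region.
import numpy as np

def patchify(image: np.ndarray, tile: int = 256):
    """Yield (row, col, patch) for non-overlapping tile x tile patches."""
    h, w = image.shape[:2]
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            yield r, c, image[r:r + tile, c:c + tile]

wsi = np.zeros((2048, 2048, 3), dtype=np.uint8)   # small stand-in for a gigapixel slide
patches = list(patchify(wsi))
print(len(patches), "patches of size 256 x 256")
```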
The authors describe multiple opportunities: a) Deep features: pretraining with transfer learning is better than training a new network from scratch; b) Handcrafted features (such as gland shape and nuclear size): do not forget classical computer vision, as it can be applied in DP to attain high recognition accuracies; c) Generative frameworks, learning to see and not judge: generative models focus on learning to reproduce data instead of making a decision, with applications such as pulmonary disease categorization and functional MRI analysis; d) Unsupervised learning: self-organizing maps and hierarchical clustering do not need annotations and can be effectively combined into the workflow of usual pathology practice, as annotating images is not part of the everyday work of pathology experts; e) Virtual peer review, placing the pathologist at the center of both algorithm development and execution: algorithms extract reliable information from proven, archived, diagnosed cases whose relevant features are similar to those of the current patient and that were diagnosed and treated by other physicians, for example comparing the diagnosis of a patient's cervix biopsy to a prior Pap test assessment for real-time cytologic-histopathologic correlation; f) Automation: AI can assist with case triage by performing laborious tasks, for example screening for easily identifiable cancer types or counting mitoses, and by simplifying complex tasks (e.g., triaging biopsies that require immediate action and ordering suitable stains upfront when indicated); AI algorithms have attained sensitivity above 92% for breast cancer recognition; g) Rebirth of the hematoxylin and eosin image: the combination of computational pathology and the emerging technologies of multiplexing and three-dimensional imaging allows analysis of individual pixels of pathological images to extract diagnostic and, theoretically, prognostic information; and h) Making data science accessible to pathologists, which will enhance their accuracy through the use of AI tools to generate and analyze big image data. 64 To integrate AI-based algorithms into the workflow of pathologists, Jiang et al. outlined and discussed various challenges facing their implementation in pathology. The challenges include: i) Validation: AI models are typically developed on small-scale, single-center data and images, and therefore need to be sufficiently validated using multi-institutional data before clinical adoption; ii) Interpretability: DL-based AI methods are rightfully perceived as 'black-box' methods due to their lack of interpretability, which is an obstacle to clinical adoption by doctors; iii) Computing systems: histopathology image files are typically 1,000x the size of an X-ray and 100x the size of a CT image, requiring powerful computing, storage, and bandwidth to transmit gigapixel-sized images; iv) Attitude of pathologists: due to the lack of interpretability of AI-based models, pathologists are wary of the change in workflow and worry about how to describe the evidence from AI in the diagnosis report; v) Attitude of clinicians and patients: for both clinicians and patients to have trust, AI-based diagnostic and prognostic/predictive assays ought to have high accuracy; and vi) Regulators: the clinical adoption of AI digital pathology needs approval by regulatory agencies, and the lack of interpretability limits that approval. 65 Samek et al. present two methods that explain the predictions of deep learning models and thereby address DL's black-box nature: the first computes the sensitivity of the prediction with respect to changes in the input, and the second meaningfully decomposes the decision in terms of the input variables. 66
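A generic gradient-sensitivity sketch in the spirit of the first idea just described is shown below (PyTorch assumed, toy model and input); it is a simplified stand-in rather than Samek et al.'s specific algorithms.

```python
# Generic gradient-sensitivity sketch: how strongly the prediction reacts to changes
# in each input pixel. Toy model and random input are illustrative assumptions.
import torch
from torch import nn

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 2))  # toy classifier
image = torch.rand(1, 3, 32, 32, requires_grad=True)            # stand-in input image

score = model(image)[0, 1]           # score of the class we want to explain
score.backward()                     # backpropagate to the input pixels

sensitivity = image.grad.abs().squeeze(0).sum(dim=0)  # per-pixel sensitivity map
print("sensitivity map shape:", tuple(sensitivity.shape))
```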
Some of the problems that need to be overcome for DP and ML to progress into daily use in pathology practice are: a) making interfaces user friendly, which they currently are not; b) adopting a single image format instead of the several proprietary image formats that currently exist; c) overcoming the issue of large image file sizes using technological advances in storage; and d) enhancing interactions between AI experts and pathologists. 67 AI machine learning model development, a multi-step process, faces important technical, regulatory, and clinical barriers. A model must overcome these barriers, which collectively define a "translation gap," in order to be accepted in the real world. The translation gap in digital pathology includes variability caused by the manual nature of the tissue acquisition process and histopathology slide preparation, and differences introduced during tissue sampling, tissue fixation, sectioning, and staining. These variations must be accounted for during model development and validation in order to achieve widespread adoption. Also, since DP is relatively immature, at present only two manufacturers have received FDA approval to market digital pathology systems for primary diagnosis. 68,69 Similarly, Steiner et al. discuss how the low penetration of digital pathology has negatively affected the integration of AI into the pathologist's diagnostic workflow and the validation of algorithms in live clinical settings. 70
# VII.
# Conclusion
Artificial Intelligence (AI) has come a long way over the last 65 years. Over the last two decades, research in AI has gained traction in healthcare, and it is now being applied across many medical subspecialties, including dermatology, radiology, and pathology. A nationwide or global strategy for AI and Digital Pathology (DP) will be necessary for it to be used for automated diagnosis, triaging cases for improved workflow, or deriving novel insights for pathologists. If the DP system requirements of proper slide image management software, integrated reporting systems, improved scanning speeds, and high-quality images are achieved, then it will provide time- and cost-saving benefits over the traditional microscope-based pathology approach, offer a second opinion, and in addition reduce the problem of inter-observer variation. However, AI approaches, including deep learning, do face rightful criticism, as the internals of how they make decisions are by design not known; legal and regulatory issues will therefore need to be worked out to reap the possible benefits. The successful introduction of AI and DP tools to the healthcare system will need proper regulatory-approved, evidence-based validation and a lowering of the resistance to collaboration between academic and industry developers.
# References
1. McCarthy J, Minsky ML, Shannon CE. A proposal for the Dartmouth summer research project on artificial intelligence. August 31, 1955. Reprinted in AI Mag. 2006;27.
2. Prewitt J, Mendelsohn M. The analysis of cell images. Ann N Y Acad Sci. 1966;128(3):1035-53. doi:10.1111/j.1749-6632.1965.tb11715.
3. Vapnik V. The Nature of Statistical Learning Theory. 2nd ed. Springer-Verlag; 1995.
4. Vapnik V. Statistical Learning Theory. Wiley; 1998.
5. Russell S, Norvig P. Artificial Intelligence: A Modern Approach. Prentice Hall, Upper Saddle River; 2003.
6. Goodfellow I, Bengio Y. Deep Learning (Adaptive Computation and Machine Learning series). Illustrated ed. The MIT Press; 2016.
7. Kelleher J. Deep Learning (The MIT Press Essential Knowledge series). Illustrated ed. The MIT Press; 2019.
8. Topol E. Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. Basic Books; 2019.
9. Mahajan P. Artificial Intelligence in Healthcare: AI, Machine Learning, and Deep and Intelligent Medicine Simplified for Everyone. 2nd ed. MedMantra; 2019.
10. Bohr A, Memarzadeh K. Artificial Intelligence in Healthcare. 1st ed. Academic Press; 2020.
11. Chang A. Intelligence-Based Medicine: Artificial Intelligence and Human Cognition in Clinical Medicine and Healthcare. 1st ed. Academic Press; 2020.
12. Lawry T. AI in Health: A Leader's Guide to Winning in the New Age of Intelligent Health Systems. 1st ed. CRC Press; 2020.
13. Xing L, Giger M, Min J. Artificial Intelligence in Medicine: Technical Basis and Clinical Applications. 1st ed. Academic Press; 2020.
14. Sucaet Y, Waelput W. Digital Pathology. Springer; 2014.
15. Cohen S. Artificial Intelligence and Deep Learning in Pathology. 1st ed. Elsevier; 2020.
16. Holzinger A, Goebel R, et al. Artificial Intelligence and Machine Learning for Digital Pathology: State-of-the-Art and Future Challenges. Lecture Notes in Computer Science, vol 12090. 1st ed. Springer; 2020.
17. Belciug S. Artificial Intelligence in Cancer: Diagnostic to Tailored Treatment. 1st ed. Academic Press; 2020.
18. Vapnik V. An overview of statistical learning theory. IEEE Trans Neural Netw. 1999;10.
19. Deo R. Machine learning in medicine. Circulation. 2015;132(20). doi:10.1161/CIRCULATIONAHA.115.001593.
20. Cabitza F, Banfi G. Machine learning in laboratory medicine: waiting for the flood? Clin Chem Lab Med. 2018;56(4). PMID: 29055936.
21. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553).
22. LeCun Y, Bottou L, Bengio Y. Gradient-based learning applied to document recognition. Proc IEEE. 1998;86.
23. Hochreiter S, Schmidhuber J. Long short-term memory. Neural Comput. 1997;9.
24. Aggarwal R, Ranganathan P. Common pitfalls in statistical analysis: Linear regression analysis. Perspect Clin Res. 2017;8(2). doi:10.4103/2229-3485.203040.
25. Ranganathan P, Pramesh C, Aggarwal R. Common pitfalls in statistical analysis: Logistic regression. Perspect Clin Res. 2017;8(3). doi:10.4103/picr.PICR_87_17.
26. Hall P, Park B, Samworth R. Choice of neighbor order in nearest-neighbor classification. Annals of Statistics. 2008;36. doi:10.1214/07-AOS537.
27. Hearst M, Dumais S, Osuna E. Support vector machines. IEEE Intelligent Systems and their Applications. 1998 Jul-Aug;13. doi:10.1109/5254.708428.
28. George J, Langley P. Estimating continuous distributions in Bayesian classifiers. Paper presented at: Eleventh Conference on Uncertainty in Artificial Intelligence; August 18-20, 1995; Montréal, Qué, Canada.
29. Elith J, Leathwick J, Hastie T. A working guide to boosted regression trees. J Anim Ecol. 2008 Jul;77. doi:10.1111/j.1365-2656.2008.01390. PMID: 18397250.
30. Breiman L. Random forests. Mach Learn. 2001;45.
31. Meijer G, Beliën J, van Diest P. Origins of ... image analysis in clinical pathology. J Clin Pathol. 1997;50(5). doi:10.1136/jcp.50.5.365.
32. Beck A, Sangoi A, Leung S. Systematic analysis of breast cancer morphology uncovers stromal features associated with survival. Science Translational Medicine. 2011;3. doi:10.1126/scitranslmed.3002564.
33. Rashidi H, Tran N, Betts E. Artificial intelligence and machine learning in pathology: the present landscape of supervised methods. Acad Pathol. 2019;6. doi:10.1177/2374289519873088.
34. Moxley-Wyles B, Colling R, Verrill C. Artificial intelligence in pathology: an overview. Diagnostic Histopathology. 2020;26(11):513-520. doi:10.1016/j.mpdhp.2020.08.004.
35. Li Y, Deng L, Yang X. Early diagnosis of gastric cancer based on deep learning combined with the spectral-spatial classification method. Biomed Opt Express. 2019;10(10). doi:10.1364/BOE.10.004999.
36. Hartman D, van der Laak J, Gurcan M. Value of public challenges for the development of pathology deep learning algorithms. J Pathol Inform. 2020;11:7. PMID: 32318315.
37. Niazi M, Parwani A, Gurcan M. Digital pathology and artificial intelligence. Lancet Oncol. 2019;20(5). doi:10.1016/S1470-2045(19)30154-8. PMID: 31044723.
38. Kohlberger T, Liu Y, Moran M. Whole-slide image focus quality: automatic assessment and impact on AI cancer detection. J Pathol Inform. 2019;10:39. doi:10.4103/jpi.jpi_11_19.
39. Hartman D, Pantanowitz L, McHugh J. Enterprise implementation of digital pathology: feasibility, challenges, and opportunities. J Digit Imaging. 2017;30(5). doi:10.1007/s10278-017-9946-9.
40. Parwani A. Next generation diagnostic pathology: use of digital pathology and artificial intelligence tools to augment a pathological diagnosis. Diagn Pathol. 2019;14:138. doi:10.1186/s13000-019-0921-2.
41. Khan A, Rajpoot N, Treanor D. A nonlinear mapping approach to stain normalization in digital histopathology images using image-specific color deconvolution. IEEE Trans Biomed Eng. 2014 Jun. doi:10.1109/TBME.2014.2303294. PMID: 24845283.
42. Janowczyk A, Madabhushi A. Deep learning for digital pathology image analysis: a comprehensive tutorial with selected use cases. J Pathol Inform. 2016;7:29. doi:10.4103/2153-3539.186902.
43. Novis D, Gephardt G, Zarbo R. Interinstitutional comparison of frozen section consultation in small hospitals: a College of American Pathologists Q-Probes study of 18532 frozen section consultation diagnoses in 233 small hospitals. Arch Pathol Lab Med. 1996;120(12):1087.
44. Kahmke R, Lee W, Puscas L. Utility of intraoperative frozen sections during thyroid surgery. Int J Otolaryngol. 2013. doi:10.1155/2013/496138.
45. Li Y, Chen P, Li Z. Rule-based automatic diagnosis of thyroid nodules from intraoperative frozen sections using deep learning. Artificial Intelligence in Medicine. 2020;108:101918. doi:10.1016/j.artmed.2020.101918.
46. Paeng K. Artificial intelligence for digital pathology. PowerPoint presentation. GPU Technology Conference; 2017; San Jose, CA, USA.
47. Hegde N, Hipp JD, Liu Y. Similar image search for histopathology: SMILY. Digit Med. 2019;2:56. doi:10.1038/s41746-019-0131-z.
48. Stålhammar G, Fuentes M, Lippert M. Digital image analysis outperforms manual biomarker assessment in breast cancer. Mod Pathol. 2016;29(4). doi:10.1038/modpathol.2016.34. PMID: 26916072.
49. Cruz-Roa A, Gilmore H, Basavanhally A. Accurate and reproducible invasive breast cancer detection in whole-slide images: a deep learning approach for quantifying tumor extent. Sci Rep. 2017;7. doi:10.1038/srep46450.
50. Charte D, Charte F, García S. A practical tutorial on autoencoders for nonlinear feature fusion: taxonomy, models, software and guidelines. Information Fusion. 2018;44. doi:10.1016/j.inffus.2017.12.007.
51. Macías-García L, Martínez-Ballesteros M, Luna-Romera J. Autoencoded DNA methylation data to predict breast cancer recurrence: machine learning models and gene-weight significance. Artificial Intelligence in Medicine. 2020;110:101976. doi:10.1016/j.artmed.2020.101976.
52. Hu L, Bell D, Antani S. An observational study of deep learning and automated evaluation of cervical images for cancer screening. J Natl Cancer Inst. 2019;111(9). doi:10.1093/jnci/djy225.
53. Ström P, Kartasalo K, Olsson H. Artificial intelligence for diagnosis and grading of prostate cancer in biopsies: a population-based, diagnostic study. Lancet Oncol. 2020;21(2). doi:10.1016/S1470-2045(19)30738-7. PMID: 31926806. Erratum in: Lancet Oncol. 2020 Feb;21(2):e70.
54. Iizuka O, Kanavati F, Kato K. Deep learning models for histopathological classification of gastric and colonic epithelial tumours. Sci Rep. 2020;10:1504. doi:10.1038/s41598-020-58467-9.
55. Kriegsmann M, Haag C, Weis C. Deep learning for the classification of small-cell and non-small-cell lung cancer. Cancers. 2020;12(6):1604. doi:10.3390/cancers12061604.
56. Kanavati F, Toyokawa G, Momosaki S. Weakly-supervised learning for lung carcinoma classification using deep learning. Sci Rep. 2020;10:9297. doi:10.1038/s41598-020-66333-x.
57. US Food and Drug Administration. Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD): Discussion Paper and Request for Feedback. 2019.
58. Evans M. Covid's color line: infectious disease, inequity, and racial justice. N Engl J Med. 2020 Jul;383. doi:10.1056/NEJMp2019445.
59. Straw I. The automation of bias in medical artificial intelligence (AI): decoding the past to create a better future. Artificial Intelligence in Medicine. 2020;110:101965. doi:10.1016/j.artmed.2020.101965.
60. Colling R, Pitman H, Oien K. Artificial intelligence in digital pathology: a roadmap to routine use in clinical practice. J Pathol. 2019;249. doi:10.1002/path.5310. PMID: 31144302.
61. Robertson S, Azizpour H, Smith K. Digital image analysis in breast pathology: from image processing techniques to artificial intelligence. Transl Res. 2018;194. doi:10.1016/j.trsl.2017.10.010. PMID: 29175265.
62. Leevy J, Khoshgoftaar T, Bauder R. A survey on addressing high-class imbalance in big data. J Big Data. 2018;5:42. doi:10.1186/s40537-018-0151-6.
63. Gao L, Zhang L, Liu C. Handling imbalanced medical image data: a deep-learning-based one-class classification approach. Artificial Intelligence in Medicine. 2020;108:101935. doi:10.1016/j.artmed.2020.101935.
64. Tizhoosh H, Pantanowitz L. Artificial intelligence and digital pathology: challenges and opportunities. J Pathol Inform. 2018;9:38. doi:10.4103/jpi.jpi_53_18.
65. Jiang Y, Yang M, Wang S. Emerging role of deep learning-based artificial intelligence in tumor pathology. Cancer Commun (Lond). 2020;40. doi:10.1002/cac2.12012.
66. Samek W, Wiegand T, Müller K-R. Explainable artificial intelligence: understanding, visualizing and interpreting deep learning models. arXiv preprint arXiv:1708.08296. 2017.
67. Regitnig P, Müller H, Holzinger A. Expectations of artificial intelligence for pathology. In: Holzinger A, Goebel R, Mengel M, Müller H, eds. Artificial Intelligence and Machine Learning for Digital Pathology. Lecture Notes in Computer Science, vol 12090. Springer; 2020. doi:10.1007/978-3-030-50402-1_1.
68. Mukhopadhyay S, Feldman M, Abels E. Whole slide imaging versus microscopy for primary diagnosis in surgical pathology: a multicenter blinded randomized noninferiority study of 1992 cases. Am J Surg Pathol. 2018;42. doi:10.1097/PAS.0000000000000948.
69. US Food and Drug Administration. 510(k) Substantial Equivalence Determination Decision Memorandum K190332.
70. Steiner D, Chen P, Mermel C. Closing the translation gap: AI applications in digital pathology. Biochimica et Biophysica Acta (BBA) - Reviews on Cancer. 2021;1875(1):188452.