A beginner’s guide to automating your daily tasks using PowerShell scripting. Develop end-to-end, dynamic automation solutions with the help of PowerShell scripting that can easily be extended to any software installation. About This Video: Learn and implement advanced concepts such as a file backup, archival, and purge solution. Comes bundled with all the resource files, PPTs, and assignment questions to leverage your learning. A well-balanced, application-based, and complete course on using PowerShell for automating tasks. In Detail: PowerShell is a task automation and configuration management program from Microsoft, consisting of a command-line shell and the associated scripting language. Professionals who want to start with PowerShell and have some basic idea of the command line will find it extremely easy to understand the underlying concepts of PowerShell and will be able to integrate PowerShell with non-Microsoft products as well. Here, you will look at the PowerShell logging module, installing software with PowerShell, an automation solution for daily validation reports, database interaction using PowerShell, automation for web/app service status, Windows Task Scheduler and scheduling PowerShell scripts to run, pulling reports from Windows Event Viewer using PowerShell, PowerShell advanced functions and modules, building validation, PowerShell with Windows Event Viewer, PowerShell for programming, and using PowerShell as an automation tool. You will be working on a project where you will develop a robust automation solution for ‘Application and System Validation’, which generates a consolidated HTML report at the end, displaying all the different test case results. By the end of this course, you will have gained advanced-level knowledge of PowerShell scripting and will easily automate your daily repetitive work using PowerShell scripting. All the resource files are added to the GitHub repository at: https://github.com/PacktPublishing/PowerShell-for-Automating-Administration
Hello, you awesome people. Welcome to this lecture. One of the most common tasks for perhaps everyone related to IT is software installation. In fact, the browser or application you are watching this lecture in right now is also a piece of software that had to be installed by someone, if not you. Correct? In this lecture, we will explore how our good friend PowerShell can be of tremendous help while dealing with software installations. And most importantly, we are going to develop an end-to-end, dynamic automation solution with the help of PowerShell scripting that can easily be extended to any software installation. Does that sound exciting? Well, let's get started then. If I give you a machine and ask you to ensure a particular piece of software is installed on it, what is the process you will follow? Because if we clearly identify the steps involved in this process, our job is very easy: we just have to identify the correct PowerShell statements for those steps and stitch them together to form a script. Right. So if you are asked to ensure a particular piece of software is installed on a system, first of all you will check whether it is already installed or not. Correct? Because if the software is already installed, you simply want to skip the next steps. Right? Why would you bother if it is already installed? Also, a few installations need a system reboot at the end, and there is no good reason to trigger that if the aim is just to ensure the software is installed. So this is step number one. Next, you have to grab the installer file.
Correct?
The installer file could be in a shared network location, it could be on the Internet and you have to download it from there, or it could be somewhere else entirely, but you have to locate the installer file that you are going to use. Right? The next step will be, of course, to install the software. And another important thing: while doing all of this, we must log the information, so that if something goes wrong we can always come back to the logs and find out what failed and how we can fix it.
Right.
Okay. Now, since we have clearly identified the steps involved in this process, it is going to be absolutely fun to automate it with the help of PowerShell. So let's get started. First of all, I'm opening PowerShell ISE, and because we have to deal with software installation, I'm running PowerShell ISE as administrator.
Okay, now my working directory is this automation folder on this drive. I'm switching to it, and let me create our script file. It's going to be "software installation with PowerShell", saved as a .ps1 file.
Okay?
Okay.
So first of all, we want to know whether our required software is already installed or not.
The different applications that are installed can be found from the registry. So it is a good idea to go to the registry, list all the different applications that are installed, and check whether our software of interest is in that list or not. Right? At this location, all the installed 32-bit applications are listed. Okay? With PowerShell, we can always prepare a list of these applications and then check against it, right? So this is for 32-bit applications, and similarly we have another location where all the 64-bit applications are listed.
Depending upon whether our application is 32-bit or 64-bit, we can prepare a list and check in it, right?
So that is the explanation. Now let's talk about implementation. In order to access the registry, we can use the PowerShell cmdlet called Get-ItemProperty.
We just have to specify the location and it will give us all the different items available there. Right? If we want to store this result, we can do it like this. Now we are storing this information inside this 32-bit software variable, and we are selecting only particular columns.
Okay? Now, the list looks like this. Similarly, we can prepare a list for our 64-bit software as well, right?
So this is the 64-bit software list.
Now, if we have to prepare a list of all the 32-bit and 64-bit software, well, it's just an addition of these two lists, right?
So we can do it like this, and this will give us all the different software, whether 32-bit or 64-bit.
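As a rough sketch, preparing and combining the two lists might look like this in a script. The registry paths below are the standard Uninstall keys on 64-bit Windows, and the variable names are illustrative rather than the exact ones typed in the lecture:

```powershell
# 64-bit applications are listed under the native Uninstall key,
# 32-bit applications (on 64-bit Windows) under the WOW6432Node key.
$path64 = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*'
$path32 = 'HKLM:\SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*'

# Read each key and keep only the columns we care about
$software64 = Get-ItemProperty -Path $path64 | Select-Object DisplayName, DisplayVersion, Publisher
$software32 = Get-ItemProperty -Path $path32 | Select-Object DisplayName, DisplayVersion, Publisher

# Combining the two lists is just an addition of the two arrays
$allSoftware = $software64 + $software32
$allSoftware | Where-Object { $_.DisplayName } | Sort-Object DisplayName
```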
The moment we have this complete list of all the software, be it 32-bit or 64-bit, installed on our system, it must be a cakewalk to compare the display name. For instance, we are trying to install WinRAR, so I want to check whether it is already installed or not.
Right?
So I'll just write a statement like this, and it suggests, yes, it is installed, this is its version, and it's a 64-bit application; this is the display name of my tool. Okay. While this confirms the software is already installed, we can also validate it in this list, and indeed this software is installed, right? If it is already installed, why do we care about installing it again and restarting the system? Maybe all of that is not needed. That is the logic we are trying to build. Now, it is perfectly fine to keep this code in your script and wrap your logic around it, but I want to keep things simple. Why? Because I'm very sure the software I'm trying to install is 64-bit, and I'm only going to deal with 64-bit software, okay? I do not want to carry the baggage of 32-bit software at all, for this reason.
I just want to remove this, okay, and keep the logic simple. I'll just keep a variable over here. Okay? So there is some software name that I want to check for.
And I have kept this registry location, the place where 64-bit applications are listed. Okay?
Around this we have an if block.
If the software is there, it will go inside.
It will say the software is already installed, and it will say exiting. Right?
Ideally we should keep an Exit statement here, which would take us out of our PowerShell session. But for the demonstration, I want to keep that statement commented out. Okay. In the else block, it will simply log that the software is not installed and that we are proceeding with the installation. Fair enough?
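A minimal sketch of that check, with the Exit kept commented out exactly as discussed (the software name pattern is a placeholder):

```powershell
# Software we want to ensure is installed (illustrative value)
$softwareName = 'WinRAR*'

# Registry location where 64-bit applications are listed
$registryPath = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*'

$installed = Get-ItemProperty -Path $registryPath |
    Where-Object { $_.DisplayName -like $softwareName }

if ($installed) {
    Write-Output 'Software is already installed. Exiting.'
    # Exit    # commented out for demonstration, as in the lecture
}
else {
    Write-Output 'Software is not installed. Proceeding with installation.'
}
```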
Right.
All right, my dear friends, with this we have completed the first step. We will continue working on the rest of the steps in the next lecture.
See you there. Take care.
Moving on, it is time to grab the installer file and proceed with the installation, if the software is not installed already. As per your organization's standards, you have to identify where you want to take the installer file from. For some organizations, it is completely okay to download the software from specific websites directly over the Internet and use it, whereas other organizations strictly prohibit Internet downloads and enforce that only the certified installers which they themselves place in a network location should be used for carrying out any installation. So, based on your policy, you can continue. My approach is this: if the package that needs to be installed is available at a network location, you can always specify it over here as a UNC path and then use the PowerShell cmdlet called Copy-Item to copy that package to the destination directory. So if you do not want to download it from the Internet, you are good. And if for your organization it is okay to download the package from the Internet, we just need the HTTP path of that installer file and we can proceed from there. And how do we grab this HTTP URL? It is very simple. Just search for WinRAR, or whichever software you want to install. Once you click the link that starts the file download, we are good; at that point, just go to your downloads and cancel the download, as we are not interested in it.
We just want to grab this URL and we are good, right?
We can just come here and paste it. You can use the same approach for any other software as well; for example, if you want to download and install 7-Zip, just follow the same steps. Here you have the download link, click on it. Did it trigger a download? Yes, it did. Just go ahead, cancel it, grab the location, and we are good, because whenever we hit this HTTP URL it will download the file, which is what we want in our case, right?
This is the approach: we grab the URL. Now, let's add the piece.
Now let's add the logic to download the file, right? This is my source file location. The installation file will go inside my destination directory, and it will look like this.
You can give whatever name you want to your installation file; here it is the WinRAR installer.
Okay, then, should we always download the installation file from the Internet? Well, if it is already downloaded, why should your PowerShell script download the file every time? It's not needed, right? In WinRAR's case it is just a three or four MB file, so you don't care, but an installer file could be much bigger than that, right? So if it is not essential for you to download your installer file every time, you can keep this check.
So we're checking: okay, let me first load these variables, and then Test-Path. I'm testing whether this file is available over here, and it is False.
And if I execute this much, it gives True, right? What this means is the installer file is not there, so let's go ahead and download it, right?
And if it goes inside, first of all it will create the installer directory, Installers, and it will force it.
If this folder is already there and files are available inside it, it will not touch anything.
It is completely safe to execute this statement, right? If it is not there, it will create the directory; that's it. And after that, we are invoking the URL that we grabbed earlier and specifying the installation file. That's it, right? Let me execute the script we have developed so far. Clearing my screen, saving everything, selecting it all. Run it.
There we go.
Our download seems to be complete and the file is available over here, which is pretty good, right?
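For reference, the download logic just executed looks roughly like this; the folder path and the URL are placeholders (use the link you copied from the browser):

```powershell
# Destination folder and installer URL (illustrative values)
$destinationDir = 'C:\Automation\Installers'
$sourceUrl      = 'https://www.example.com/winrar-x64.exe'   # placeholder URL
$installerFile  = Join-Path $destinationDir 'winrar-x64.exe'

# Download only when the installer is not already present on disk
if (-not (Test-Path -Path $installerFile)) {
    # -Force creates the folder if missing and leaves existing files untouched
    New-Item -Path $destinationDir -ItemType Directory -Force | Out-Null

    # Fetch the file from the URL we grabbed from the browser
    Invoke-WebRequest -Uri $sourceUrl -OutFile $installerFile
}
else {
    Write-Output 'Installer already present, skipping the download.'
}
```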
I just want to reiterate this: ideally, since the software is already installed, our script should have ended at this point itself. I have purposefully commented that out so that I can continue in this script. Don't be confused when the script says it is exiting here.
Why did it continue? It's purely intentional. Right? Okay, great.
We have successfully checked whether the software is already installed or not, and after that we are able to grab the software directly from the Internet into our local directory. Now what are we left with? Installing the software on our system. What are we waiting for? Let's do it. In order to install the package, what we have to do is use the Start-Process PowerShell cmdlet.
Specify the installer file that you want to install and pass the /S switch. This is for silently installing the installer file, because you do not want to interact with it, right? And then we have a few other optional switches for our Start-Process cmdlet, and we just have to execute this. Once the installation is completed,
it is a good idea to validate again whether, in the end, the software was installed successfully or not.
Right? So for this reason we are keeping this check. Okay, I'm saving this and executing the entire script. It has downloaded the file and then it is saying the software is installed successfully.
Well, you have every reason not to trust me, because our software was already installed, right?
So I'm uninstalling it first; you can see it is gone. Now I'm also deleting this file. Okay. Then we will install it fresh, executing the entire script again. This time, because the file is missing from this location, it is downloading the file again. As you can see, it is clearly indicating the software is installed successfully. It's time to go and validate here, and we are able to see that WinRAR is installed successfully.
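A sketch of the install-and-validate step, reusing the variables from the earlier snippets. The /S switch works for NSIS-style installers such as WinRAR, but, as noted later in the lecture, not every installer accepts the same silent switch:

```powershell
# Install silently; -Wait blocks the script until the installer process finishes
Start-Process -FilePath $installerFile -ArgumentList '/S' -Wait

# Validate again by re-reading the Uninstall registry key
$installed = Get-ItemProperty -Path $registryPath |
    Where-Object { $_.DisplayName -like $softwareName }

if ($installed) {
    Write-Output 'Software is installed successfully.'
}
else {
    Write-Output 'Installation appears to have failed.'
}
```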
Congratulations. In the beginning itself, we discussed that this entire script should write into a log file, because we will not be attending the installation.
Logging becomes important for this reason. We can keep the logging very simple: I'm starting a transcript at the beginning, we are writing the log messages into this particular location, and at the end we are stopping the transcript with Stop-Transcript. With this, we can expect the log messages to be written into this file. Clearing my screen and running the script again. Good. Now let's go to this location, one level up. Yes, inside the log folder this file is available, which we can open and see for ourselves. Yes, our log messages are here: software is already installed, exiting; though it didn't actually exit, because of the commented-out Exit statement. Then this is our installation process, and then software is installed successfully, and the transcript has ended. Right. Let me close this. What we will do now is go here and uncomment this Exit. Okay, saving it. With this, let me run it again. I'm launching Windows PowerShell, run as administrator, yes. Coming here, I want to switch to this directory and then execute our software installation with PowerShell script.
Run. This file already exists and it is not able to overwrite it. What I want to do is remove this NoClobber switch and pass -Force instead. Save it, clear my screen, and run the script again. Okay, this time, since the software is already installed, it exited at this very point, and the information is successfully recorded in our transcript.
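The transcript-based logging wrapper might look like this; the log path is illustrative, and -Force is used instead of -NoClobber so reruns do not fail on an existing file, as in the lecture:

```powershell
# Write everything the script prints into a log file (path is illustrative)
$logFile = 'C:\Automation\Logs\software-installation.log'
New-Item -Path (Split-Path $logFile) -ItemType Directory -Force | Out-Null

Start-Transcript -Path $logFile -Force

# ... the check / download / install logic goes here ...

Stop-Transcript
```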
This is good. But if we cannot extend this script to another software package without making major changes, then there is no fun in it.
Right? What I want to do is make a clone of this script and install something else with PowerShell this time.
We want to install another application called Notepad++.
I want to name my file install-notepadplusplus, okay, and open it in PowerShell.
It is the same script; we just have to make minor tweaks here and there. This time the log file name should indicate Notepad++, since that is the software being installed, right? And we have to grab the Notepad++ installation file. I'm searching for "npp download", clicking here, and then the download button. Clicking on this button triggers the download of the file, so I can just go to the downloads, grab this link with copy link address, and come back to PowerShell. Just replace this, okay. I don't care exactly what the URL is; for me, what is important is that whenever I hit this URL in my browser, it downloads the installer file for me. This is there. Then I want to name this file as the Notepad++ download file, right? And this also I want to change to the Notepad++ software name; we have to keep it aligned with how it will look inside our registry settings, okay? So we have to tweak it keeping that in mind. Okay, after making these couple of small changes, I'm saving my script, going here, and I want to execute the install Notepad++ script. Okay, let me run this. The transcript started writing, okay. So it is downloading the file. Good, the file is downloaded, and then: software is installed successfully.
Within a couple of seconds, it has installed the software. Software is installed successfully.
And this can be validated here as well: now I can see Notepad++ is available on my system. How cool is that, right?
For your knowledge, I want to add this.
It depends a lot on the developer how they have packaged a particular installer file. For example, for this VLC installer, if I just go here and pass this switch,
you can see it is opening a help section for us, where it itself suggests which parameters it will accept. If I just give it the quiet switch, then it will quietly install the package for me, right? Whereas for WinRAR, if I do the same thing, instead of giving that help section it directly opens the GUI. Not every installation file is the same; you have to be careful about it. While most of the time this way of installing the software will work, sometimes it may not, because it is not in our hands. Right. Another thing that I want to add to your knowledge is that if you're working on Windows, you have Chocolatey as a package manager, right?
So you can always go to the package listing on Chocolatey and search for whatever package you want to install. For example, WinRAR.
If I search for it, I get this statement over here. If I have Chocolatey installed on my system, I can directly execute this statement, which will itself take care of downloading the package and installing it on my system, right? So the point is, whenever our package is available on Chocolatey and there is no organization policy blocking us from using it, we can make use of it and avoid reinventing the wheel.
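For example, a single Chocolatey statement along these lines would do the whole job (assuming Chocolatey is installed and the package name is correct for your case):

```powershell
# -y answers the confirmation prompts automatically
choco install winrar -y
```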
All right, you awesome people. In this lecture, we successfully developed this super awesome PowerShell script which takes care of end-to-end software installation for us. This is awesome, right? Now, what I want you to do: I have not done the error handling. Something can go wrong while downloading the file due to permissions, Internet issues, and whatnot, as well as while installing the software; there are things that can go wrong. You should be mindful of these things and do the error handling wherever required. There are also good chances that you do not deal with just one piece of software, right? You might be interested in installing four or five packages or more on the hundreds of systems that you administer. So it is a good idea to specify multiple software packages along with their download URLs at the beginning of the script, and then use a foreach loop to iterate this logic over each item in that list, installing multiple packages on a system in a single run of a PowerShell script; see the sketch below. If you have a requirement to install multiple software packages, you can always do this.
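Here is the kind of sketch that suggestion points at; the package list, URLs, and the assumption that every installer accepts /S are all illustrative:

```powershell
# Illustrative list: each entry carries a download URL and the display-name pattern to check
$packages = @(
    @{ Name = 'WinRAR*';    Url = 'https://www.example.com/winrar-x64.exe' }
    @{ Name = 'Notepad++*'; Url = 'https://www.example.com/npp-installer.exe' }
)

$installerDir = 'C:\Automation\Installers'
New-Item -Path $installerDir -ItemType Directory -Force | Out-Null

foreach ($package in $packages) {
    # Skip anything that is already installed
    $installed = Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*' |
        Where-Object { $_.DisplayName -like $package.Name }
    if ($installed) { continue }

    # Download and silently install (assumes the installer supports /S)
    $installerFile = Join-Path $installerDir (Split-Path $package.Url -Leaf)
    Invoke-WebRequest -Uri $package.Url -OutFile $installerFile
    Start-Process -FilePath $installerFile -ArgumentList '/S' -Wait
}
```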
Next, we have kept all the different steps that are necessary for a software installation inside our PowerShell script. If you want to use this script to install software on hundreds of computers, you can very well use it along with an orchestration tool like Ansible, Chef, Puppet, et cetera. If you do not have any orchestration tool, you can use the Invoke-Command cmdlet of PowerShell itself from a centralized location to run this script on multiple remote computers and install your software on those computers. Yes. All right, my dear friends, that's it for this lecture. Take good care of yourself. Thank you.
Hello, my dear friends.
Welcome to this lecture. PowerShell is an extensive scripting language capable of automating just about everything.
And it is certainly not just for system administrators. I'm recording this lecture just to help you visualize how you can make use of PowerShell even if you are not a system administrator. I'm recording this lecture on this date, and I honestly do not want to reveal the date in my lecture, because I do not want my content to be judged purely based on its date of recording. As you rightly understand, it's not practical to keep a lecture up to date all the time, right? So just about every content creator wants to hide this date. Now, until Windows 10, there was a toggle setting to show or hide the date and time.
And it was very easy for every content creator. Right?
Starting with Windows 11, I don't know what made Microsoft change their mind, but now they have made it a little difficult.
Okay. Actually, there is no setting to hide it directly, so we have to do it manually this way. We have to go to the date and time settings, go to additional clocks, date and time, change date and time, then change calendar settings. Here too, additional settings. And then we have to tweak these settings: for example, instead of this format, I just put something else, like this. I just have to play with it, okay? Now when I click on the OK button, it is changed. Right, now obviously I want to save the settings, so I'll click OK. Okay.
And come out of this and close it. Now, because this is also my personal system, once this recording is done I want to turn the date and time back on, and I would have to follow this procedure again. It doesn't sound good, right? As you might be realizing, this is not just a problem for system administrators; it is a common problem faced by any content creator. Right, now, how do we deal with this and make the process faster?
Let me show you. This is the Registry Editor, and at this location I have found that the date format my system is using is actually defined over here. The short date, time format, and month settings are defined over here.
We just need a script to tweak these registry settings and our job will be done, right? For this, let me open my PowerShell. For modifying a registry setting,
we can use Set-ItemProperty, and then we have to specify the path. My registry setting is available at this path, right? I'm specifying it like this over here. Then, what is the setting we want to modify? We have to specify its name. The name is the long date setting, and its value I want to keep as a single white space. Okay, like this. We have to modify a few other registry settings as well; let me put them here, and that's it, my job is done. So let me execute this and go to the registry settings, refresh, and you can see these values are modified. But the date is still not gone from the display, right? Why? Because Explorer has to be restarted for this refresh to happen. I have to go to Task Manager, look for the process called Explorer, close the process, and start it again. As you can see, the date and time display is now modified. This is the way to apply the registry setting in this particular case. Now, this process of stopping Explorer and starting it again is also something I do not want to do manually. Let me add this statement and this statement: the first will simply stop the process called explorer, whereas the second one will start it again. Simple, right? This time it becomes fully automated.
As you can see, this script is good for hiding the date and time. But once the recording is complete, I again want to see the date and time, right?
I want to use this same script for both hiding and showing; that calls for a switch, right? Well, you're right. For this, we can use this parameter.
What I want to do with this switch is: whenever it is passed, I want to execute a different set of statements, which is this one, okay?
And I can put an else here, and inside the else block I can... okay, let me hide this.
Okay, with this simple change, there is more functionality added to our script. With the Reset flag, it will execute these statements and show the date and time the way I want.
Right? You have to add these, okay?
Whenever I'm not passing this Reset switch, it will execute these statements and hide my date. The same script can be used both for showing and hiding the date and time. Right, after that:
these statements, which are responsible for restarting the Explorer process, are needed for applying our registry settings in both cases, so I'm not keeping them inside the if-else block. Right, now let me save this. Where do I want to save it? I want to give it this name and save it. Saved, okay. Now what I want to do is launch a PowerShell console over here and execute the script: I type the script name, press Tab, and run it. Okay, the Explorer process is restarted. This has done the job of hiding the time; it was already hidden, so I have nothing to validate. Now this time, let me pass the Reset switch to the script, and upon successful execution of the script I expect to see my date and time back. Let me execute this. There we go; as you can see, the date and time is back again. If I want to execute this for hiding, I can run this, and the date and time is gone.
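Put together, the whole toggle script is roughly the following. The exact registry value names and format strings changed in the lecture are only partly visible on screen, so the ones below (standard values under Control Panel\International) are an assumption; adjust them to whatever you tweaked:

```powershell
# Hide or show the taskbar date/time by rewriting the user's format strings.
param(
    [switch]$Reset
)

$intlKey = 'HKCU:\Control Panel\International'

if ($Reset) {
    # Restore readable formats so the clock shows the date and time again
    Set-ItemProperty -Path $intlKey -Name 'sShortDate' -Value 'dd-MM-yyyy'
    Set-ItemProperty -Path $intlKey -Name 'sShortTime' -Value 'HH:mm'
}
else {
    # Hide the text by using a single blank space as the format
    Set-ItemProperty -Path $intlKey -Name 'sShortDate' -Value ' '
    Set-ItemProperty -Path $intlKey -Name 'sShortTime' -Value ' '
}

# Explorer has to be restarted for the change to show up on the taskbar
Stop-Process -Name explorer -Force
Start-Process explorer
```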
As you can see, it is easier for me to execute this statement than to go to that calendar setting of date and time and modify it from there every single time.
Correct.
So this is our script, doing its job perfectly fine. I also want to add: if you are not a system administrator and you are doing this task just for yourself, you need not bother about logging messages into a log file, exception handling, or the performance of the scripts. All those things are not for you. All you should concentrate on is identifying the tasks that are tedious and repetitive. Just write a simple workflow of how you are doing those tasks, and then for each item of that workflow, try to find the right PowerShell statements. Taking the help of the Internet in doing this is no sin. PowerShell can do so much more than what we have learned in this lecture. My intention behind this lecture is to give you food for thought. I'm looking forward to seeing which tedious task you automate with the help of your PowerShell knowledge. Well, that's it for this lecture. Take good care of yourself. Thank you.
PowerShell is a powerful scripting language, and it is often used to automate tasks that would otherwise be repetitive, tedious, and error prone. Automation really starts to provide value when it is applied to more complex tasks, but these complex scripts also have more chances of throwing an error and eventually breaking while the script is executing.
There are so many things that are not in your control: a security account is denied access, some server is not reachable, an application service may not respond, a machine configuration may have changed, et cetera.
Without logging, it is hard to know whether the script ran fine or something went wrong. For example, here, how would we know that an error was thrown at this point without these log messages? Correct. If something goes wrong, logging can help us find out where and why the error occurred. This ability makes it an important part of our scripts. Yes, while it is perfectly fine to use PowerShell's built-in cmdlets for writing log messages, it is always good if we can find a module for this to get additional features. In search of this,
we come here to the PowerShell Gallery, search for logging, and press Enter. The first module itself is what we are looking for. It is an external module; we need to install it on our system in order to use it. We can just copy this and launch PowerShell in admin mode: Install-Module -Name Logging, Enter. I'm expecting an error because this module is already there on my system, but this is a very trivial step and you can follow it very easily. The version is already there; this is perfectly fine. Let me close this. Now let's try to understand this module in detail. We can go to the project site; from here, the documentation is available. This is also fine, but I would like to take you through this site where the same module's documents are published; it is better in look and feel. Yes. All we need to know about this module is well described in this example. Let's try to understand it.
First of all, we can set the logging default level. Why is this option given? Because we do not want to see all the messages all the time. For example, if an error is reported for your script and you are investigating it, you want to set the default level to debug, which will ensure your logs display all the different messages written by the script. But if your script has been running smoothly for quite some time, why would you want to see so many messages? Instead, you would want to concentrate on warnings and errors. To respond to such needs, we have this provision of setting the default level, where we can set the level according to our requirements. Correct. Then we can add the logging targets. If you want to see the log messages on the console, you can add console as a target. If your script is doing something significant and you want to go back in time and see the log messages, you can also add a file as a logging target, specifying where you want your log messages written.
Then this module will help you write the messages into the file as well. Next, we need to understand the Write-Log cmdlet, which comes with this module. Here we are setting the level of our messages; out of debug, info, warning, and error, you can pick an appropriate level and set it here.
This Wait-Logging cmdlet ensures the script will wait for up to 30 more seconds to finish writing the log messages before it ends.
Right?
It is important for unattended scripts. You can go through these documentation snippets to understand these cmdlets better. For now, this much understanding is good.
Now let me take you through this example which I've prepared for you. It is almost the same example, except for one thing: I chose to write my own function instead of relying directly on the Write-Log cmdlet. The reason is this: say anything needs to change, for example I want to add more wait time, or say tomorrow I find a better module than this Logging module and I want to switch to it.
In those situations, all I'll have to do is change this one statement and I'm done. Writing this small function saves me the effort of replacing the cmdlet in so many places.
This is the simple reason why I chose to write this function, right? We have set the default level to info, and our logging targets are the console and the file as well. This is the foreach loop; okay, this part is not needed. I am writing all the messages with error level, right? So let's execute this and see.
We can see all ten messages are printed over here, and if I show you this file, yes, all ten messages are available over here, correct? Now I'm changing the default level to error, and this time I'm writing debug messages. Do you see the difference?
This is the default logging level, whereas this is the level of the current message which I'm writing.
Let's see what it prints this time.
And... no messages. Why? Well, it's simple, right? Because we have set the default level to error, any message below this level is not considered.
If you want to see all the different messages, including debug messages, information, warnings, and errors, we can change the default level to debug, and now we can expect all the different messages. Let me execute this script again. All the messages are appearing here, and they are appearing in this file as well. All right, my dear friends, I hope now you are clear on this logging module.
It is a very simple concept, but often very helpful.
I would urge you to use logging for all of your complex scripts.
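For reference, a minimal sketch of the pattern used in this lecture, assuming the Logging module from the PowerShell Gallery is installed (the log path and the wrapper function name are illustrative):

```powershell
Import-Module Logging

# Only messages at or above this level are emitted
Set-LoggingDefaultLevel -Level 'INFO'

# Send messages both to the console and to a file
Add-LoggingTarget -Name Console
Add-LoggingTarget -Name File -Configuration @{ Path = 'C:\Automation\Logs\demo_%{+%Y%m%d}.log' }

# Small wrapper so the logging cmdlet is referenced in only one place
function Write-MyLog {
    param(
        [string]$Message,
        [string]$Level = 'INFO'
    )
    Write-Log -Level $Level -Message $Message
}

foreach ($i in 1..10) {
    Write-MyLog -Message "Processing item $i"
}

# Let the module finish flushing pending messages before the script ends
Wait-Logging
```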
Well, that's it for this lecture. Take good care of yourself.
Thank you.
Hello, you awesome people. Welcome to this lecture.
In this lecture, we will learn CSV file handling with PowerShell. CSV stands for comma-separated values.
The CSV file format is a very popular format for managing your data. In my view, the main reason behind its popularity is that it is very human friendly: you can read your CSV file and understand the data very easily. And because it is structured data, it is also very easy for programs to parse the information and make use of it.
I'm creating a simple file, naming it sample, and accepting this prompt. And that's it: we have created our first CSV file. Let me open this file in Notepad and let's add some data to it. Say,
there we go.
We have created this data in the file; each new record has to be on a new line.
If you want to add more records,
say I have added some more data to the file.
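The exact rows typed in the lecture are not shown, so here is a hypothetical sample that matches the columns used later in the demo; save it as sample.csv (or whatever name you prefer):

```
EmployeeName,Role,Department,SalaryGrade
Mark,Developer,Cloud Services,A1
David,Administrator,Platform,A2
Steve,Analyst,Sales,B1
Maria,Engineer,Cloud Services,B2
```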
One benefit of a CSV file is that we can open it in a text editor and edit the data, and we can also open it inside Microsoft Excel. As you can see, the data is structured, and on this data you can apply your Excel knowledge and do whatever formatting or data manipulation you want to do. You can just treat it as an Excel file. The only catch is that, in the end, you would have to save it as an Excel file and not as a CSV, because with CSV you will lose everything else except the data itself. All right, so we have seen the use cases in which CSV proves to be a very human-friendly format for storing data. You might be wondering why this data needs to be handled with the help of PowerShell when Excel is doing its job nicely, right? It is very neat and clean and we can do pretty much anything with the help of Excel alone, so why should I bother writing PowerShell statements for this? Let's understand this. Say there are two systems that need to interact with each other: one piece of software you are using for handling daily sales at the counter, and this software needs to send the daily sales data to a centralized system. It may be slightly difficult to grasp, but in real scenarios these could be millions of rows; I have myself seen a 30 GB CSV file, which was pretty shocking for me. Okay, this data needs to be sent to the centralized system. Because there are two different pieces of software involved, it is quite possible that the output of software one is not perfectly aligned with software two; software two needs the data in a certain format. Now, what will you do? Every time software one sends a business data file, will you open it in Excel, make the change, and place it for consumption by software two? Not very practical, right? You want to design this system in such a way that things happen in real time. Well, this is where you need PowerShell to handle the CSV data: it can help you make those changes in the CSV file, and then you can make use of it. This is just one example; there are many more use cases in which PowerShell is very helpful in managing your CSV files. It is the right time to open our PowerShell ISE and start exploring how we can handle CSV files with the help of PowerShell.
I'm opening this in my ISE itself, and here I'll write the code. We want to read this file; I'm at the same location where the CSV file is.
Okay: Get-Content on the sample file, and execute. With this simple statement, we have successfully read the content written in the sample file into our PowerShell session. While you can work this way too, performing all the operations you want on the content you have read and then exporting it to another file, it is like me giving you bread, you grinding it into flour, and then making another bread out of it. It doesn't make any sense, right? This is a CSV file written in a particular format, and PowerShell has provided us with cmdlets to handle CSV data. It is a very bad idea to treat this data as raw text. Instead of doing that, let's make smart use of this structured data.
I'll just remove this and I'll say Import-Csv. What is the name of the CSV? sample; I press Tab, and this is the name. I'll execute this.
I need not tell you what the difference is; you can clearly see it with your own eyes. This time we have imported the data correctly.
Let me store this in a variable called data and execute it again. Now I want to see what the type of this data is. GetType, Enter: it is an array. Okay?
What does this array contain? I'll take one element out and call GetType on it: it is a PSCustomObject. Okay? Our data is an array of PSCustomObjects. What this means is that the entire thing is stored as an array in which each record is one PSCustomObject. Before we continue in this lecture and start making changes to this data object, let me just go back to the previous command that we used:
Get-Content on the sample file.
If you're reading your CSV file like this, the world has not ended for you; you can still convert it to CSV objects like this, okay?
Now, if we store this in another variable, say data2 equals this thing, and then check data2's type, it is again an array.
What this means is that the data and data2 objects hold the same data; there's no difference.
Awesome. I'll clear my screen and get rid of this statement. Because we know data is an array, I can always do stuff like this, right?
If I just want the first row of data, or say the first four records, I can index into the array like this; and if I just execute this, I'll get the last three records, right? All the different array operations are possible on the data object.
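A sketch of what we just ran, assuming the file is saved as sample.csv in the current folder:

```powershell
# Read the CSV as structured objects rather than raw text
$data = Import-Csv -Path .\sample.csv

$data.GetType()        # Object[]  -> an array
$data[0].GetType()     # PSCustomObject

$data[0]               # first record
$data[0..3]            # first four records
$data[-3..-1]          # last three records
```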
Let's say we want to see only those records where the salary grade starts with A.
I'll just put the requirement here:
we can write it like this. $data is my object; I want those records where... then I have to use curly braces, and I'll put it this way.
I want to see the current item in this; I want its salary grade, and we have to specify the condition here. What is that? It must be A followed by a wildcard: any record whose grade starts with A, all three such employees should fall in this category, right? Let me execute this, and there we go: we have successfully filtered out the required data from this larger set of data, right? Of course, if you have more complicated requirements, let's say you want those employees whose salary grade starts with A and who also belong to the Cloud Services department,
you can add that condition on the current item: I want their department to be Cloud Services. And execute this.
In this case, only one record satisfies both requirements, and only it appears over here, which means our filter condition is working perfectly fine, right? Use your knowledge of writing conditional statements and I'm sure you will be able to fetch exactly those records you want with a PowerShell statement like this. Okay.
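The two filters discussed above would look roughly like this (the column and department values follow the hypothetical sample data):

```powershell
# Records whose salary grade starts with A
$data | Where-Object { $_.SalaryGrade -like 'A*' }

# Records whose salary grade starts with A AND who belong to a given department
$data | Where-Object { ($_.SalaryGrade -like 'A*') -and ($_.Department -eq 'Cloud Services') }
```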
Moving on, this time my requirement is: say the company has changed the salary slabs. Wherever you see a salary grade starting with A, we have to say High, and wherever it starts with B, we have to say it is a lower salary grade. Let's categorize it like this. What I will do is take data, and this time I'll use ForEach-Object, because I want to process the data one record at a time.
What is the condition?
I want to first get hold of the salary grade column of each record and then compare it like this: if it starts with A, the salary grade becomes High; if it starts with B, then I'll say Medium, let's say. Right. Okay, let's execute this statement. Now if we look at our data, you can see it is categorized differently: all those whose grade starts with A are in the High category, whereas the others are in the Medium grade.
Right. This way we have successfully modified our CSV data. Cool.
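A sketch of that re-categorization; the objects returned by Import-Csv are references, so changing a property inside ForEach-Object modifies $data in place:

```powershell
# A* grades become High, B* grades become Medium
$data | ForEach-Object {
    if ($_.SalaryGrade -like 'A*') {
        $_.SalaryGrade = 'High'
    }
    elseif ($_.SalaryGrade -like 'B*') {
        $_.SalaryGrade = 'Medium'
    }
}
```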
Let's say we want to add another row to this data. How can we do that? It is very straightforward. Each record in the array, as we can see from its type, is a PSCustomObject, right? So the new record that we want to add to this array should also be a PSCustomObject. I'm assigning it to a variable called newRow; the employee name is a new name, the role is PowerShell, the department is Platform, and the salary grade is High. Okay, fine. So this is the PSCustomObject that we have created. If I just execute it and check its type with GetType: PSCustomObject. Fine, right? We are good to add our PSCustomObject to the original dataset; it is a simple operation like this. And if I execute it and now print the data, you can see the new record is added over here without breaking the structure. Yes.
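The add-a-row step sketched out, with illustrative values for the new record:

```powershell
# The new record must have the same shape as the existing ones: a PSCustomObject
$newRow = [PSCustomObject]@{
    EmployeeName = 'New Name'
    Role         = 'PowerShell'
    Department   = 'Platform'
    SalaryGrade  = 'High'
}

# += returns a new array with the record appended
$data += $newRow
```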
Now, because we have learned how to add a new row, it is our responsibility to also know how to add a new column to our CSV data. Right, I'll take this one. I just want to demonstrate how to add a new column, so I do not want to keep a very intelligent requirement as such: I just want to add another column called Group, and we'll say Sales is not a technical team but all other teams are technical teams, let's say. Okay, simple, just for reference; it may not be very logical as such, but I want to add a new column here. Let's see how to do this. We want to process each record, correct? For this, we need a foreach loop. Inside it, what do we want? An if condition: we check the Department property of the current item, $_, and whether it is Sales. If so, then what do we want? We want a new column to be added, using a PSCustomObject just like our previous one. In this case we do not want any hard-coded values; we want to take them from the same record, right? The employee name property we read from the current item, the role becomes the role, the department becomes the department, and the salary grade becomes the salary grade. There's no change in these, except that we want to add this one extra thing: Group. If the department is Sales, I want to say the group is Non-Technical. And if it is not Sales, I have to repeat the same thing, and this time I'll say it is Technical. Simple, isn't it? This is my classification. And then we want to add the record to something; I'll call it employees data, and this is an array, as we must know by now. So inside this foreach loop, once the if-else part is fixed, we append each record to it with plus-equals. Now understand it like this: some data is there, and on top of it we are writing a foreach loop, so we can expect every single record to be processed, right? Inside it there is an if block which says: if the person belongs to the Sales department, add a new column saying this person belongs to the Non-Technical group, whereas if the person is not from the Sales department, the else block gets executed, where we have kept the group as Technical. In both cases, this new custom object is created, and at the end of each iteration of the foreach loop we add it back to our array. Right, let me execute this. Now if I just copy this and paste it here, you can see a new column is added which categorizes salespeople as Non-Technical and all other people as Technical. Don't go by this logic; I know salespeople could also be technical, but I'm just keeping it simple for reference over here. Okay, you want to see this in tabular format? I'll say Format-Table, and yeah, there we go. This way we have successfully manipulated our data. Awesome.
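The add-a-column walkthrough above, condensed into a sketch (variable names are illustrative):

```powershell
# Rebuild each record with an extra Group column; Sales goes to Non-Technical
$employeesData = @()

foreach ($record in $data) {
    if ($record.Department -eq 'Sales') {
        $group = 'Non-Technical'
    }
    else {
        $group = 'Technical'
    }

    $employeesData += [PSCustomObject]@{
        EmployeeName = $record.EmployeeName
        Role         = $record.Role
        Department   = $record.Department
        SalaryGrade  = $record.SalaryGrade
        Group        = $group
    }
}

$employeesData | Format-Table
```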
We understand now how we can add a new column; why not also understand how to remove a column, right? Removing a column is an even simpler job. The data is there, and you can always write: I want to see just the employee name and maybe just the group, and that's it. So if I execute this on my employees data (I have to use that, because the original data I never modified, right?), then with the help of this Select, or to be more precise the Select-Object cmdlet, we can always specify which columns we want to see, right? Storing this result in another variable is no difficult job at all, and then that variable has your required data. For now, I do not have any such intention; I'll just work with this exported data. Great. In this lecture, first of all, we understood how to filter the data, which essentially means how we can remove certain rows that we do not want, so you can call it a delete-rows operation as well. In a way, we also understood how to replace a certain value that you want to change, then how to add a new row to our data set, how to add a new column, and how to remove a column. We have modified the data; now let's say the result is this, okay? We want to export it to another CSV file. How can we do that? Well, very easy: put a pipeline over here and use the Export-Csv cmdlet; it is right here, just use it. Specify the path or name you want to give; results is the name I want to give. If you want to create this file at a particular location, you have to specify that location over here; I'm fine with the current location. Okay, let me execute this and go to my location: results is created. I want to open this file. Our data is available over here, but what is this stupid row doing? I never asked for it, right? Well, very easy, you can get rid of it by saying NoTypeInformation: I do not want the type information, right? Execute it again, go here, refresh, and that row is gone. If I want to see it in Excel, I'll just open the results file in Excel. There we go. Simple and straightforward, right? All right, my dear friends, in this lecture we learned how to manipulate CSV data with the help of PowerShell. I'm going to provide you this script file for your practice. Well, that's it for this lecture. Take good care of yourself. Thank you.
A Windows service is a computer program that runs in the background. It doesn't have any user interface and is similar in concept to a Unix daemon. Windows services can be started automatically or manually, and if not needed they can be kept in the disabled state as well. Okay, fine. But why are we talking about Windows services in this lecture, which is supposed to be about learning PowerShell? Well, there are many operations involved in managing Windows services where PowerShell can help. Confused? Let me explain. Windows services are designed to run all the time. Many times they start making the overall system slow due to memory issues. To keep the system healthy, we should keep restarting the services on some fixed schedule; it could be once a day, on alternate days, or even a weekly restart is acceptable, depending upon what kind of task your service is performing and how much load is on it. PowerShell can help you automate this very frequently needed task. Apart from this, PowerShell can be used to stop a service, start a service, change the service user account or the startup type of a service, et cetera. One big advantage of using PowerShell for managing your services is that you can not only manage the services on your local system very well, but also on remote systems. This means that, using a single-line PowerShell statement, you can change the state of a service on hundreds of virtual machines. PowerShell saves a lot of your time. I hope you are excited; now let's get started with learning how to use PowerShell to manage your Windows services. All right. First of all, to launch the Services application, either we can go to the Start menu, type Services, and click here, or we can go to Run, type services, and hit Enter. This will launch this particular application. If you want to manage the services on this machine itself, that's fine; because it's already showing local, you can very well proceed with your operation. But if you want to manage the services on a remote machine, you can connect to that computer from here and then perform your tasks. In this application, you can see there are hundreds of services which have been assigned some task, which they are performing either all the time or whenever required. We see there are many services which are in the running state and many others which are currently in the stopped state. Right. If we double-click on any service, or right-click and go to its properties, we can see these advanced options. Here you can see some basic details about your service: the path to the executable and its startup type, which could be automatic, manual, or, if you are not planning to use the service in the near future, you can even keep it in disabled mode. Here is your service status, which is currently running; if we want to, we can stop it by clicking this Stop button. If we go to Log On, we can see the account with which this service is running.
Right now, it is running under the Local System account, but if we want to change it, we can do it from here, right? If you have a domain user account or a local administrator account, you can just specify its username and password, and then this service will start running under that account. These are some basic operations which you may want to perform on your Windows services, and we have seen how to do them in the UI. It's time to learn how to perform these operations using our friend PowerShell. For performing any action on any of the services running on my system, first of all what I need is the object of that service. Let's start with the PowerShell cmdlet for getting the service objects. As we already know, the naming of PowerShell cmdlets is very user friendly: it is verb-noun. Because we are trying to get something, the verb is Get, and what we are trying to get is a service, so Get-Service is the cmdlet. You do not need much effort to remember such cmdlet names, right? Let me execute this cmdlet.
[01:08:50]You can see it has listed all the different services on our system, and you can even compare it with this particular output, right? Most of the time, we are not interested in this output of all the different services. Rather, we want to filter it down to get the objects for only the services we need. Now, let's see how to filter this output to get only those service objects which we need. If you want to get a service by its name, you can use the -Name parameter and just specify the name of the service you are interested in. For example, I want the object for this particular service. Just specify the name and execute. We have successfully fetched the Windows Management Instrumentation service. In this straightforward way, we are able to see the status, name, and display name of the service. But do we have any other details about the service or not? For this, what we can do is just put a pipe, type Select *, and run the statement. There we go. You can see that apart from these three properties, we have more properties like CanShutdown is True, CanPauseAndContinue is True, and so on, which can be utilized, right? If you want to get the service objects not by exact name but by wildcard, let's say you want all the services whose names start with WMI, we have specified a wildcard here. It will get us all the services starting with WMI, right? Instead of these three, if you want to see more properties, you can specify them like this. Instead of seeing this output in list format, if you want to see it in tabular format, you can always pipe to Format-Table, execute the statement, and you will see the output in tabular format. If you pipe this output to Out-GridView, you will get an even better view of it. Makes sense.
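For quick reference, the statements walked through above amount to roughly the following (a sketch; Winmgmt, the Windows Management Instrumentation service, is just the example name used here):

# Recap of the statements shown in this lecture (service names may differ on your machine)
Get-Service                                          # list all services
Get-Service -Name Winmgmt                            # fetch one service by name
Get-Service -Name Winmgmt | Select-Object *          # show every property of the object
Get-Service -Name WMI*                               # wildcard lookup
Get-Service | Format-Table Name, Status, StartType   # tabular view with an extra property
Get-Service | Out-GridView                           # interactive grid view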
Moving on, let's slightly increase the level of the problem this time. We want to get all the services which are in the Running state and whose startup type is Manual, right? This is the requirement. Let's see how to frame the statement for it. We want all services which are in the Running state. Let's break the problem into two. The first half of the problem is this, right? For filtering the services like this, we can use the Where-Object cmdlet here, just after the pipe. And then here we have to put dollar-underscore, which signifies the current object coming from the output of the Get-Service cmdlet via this pipe. Right? Which property are we interested in? Status. And what status do we need? It is the Running status, so I'll say equal to Running, and now we should verify whether this statement is correct or not. Right, so let me execute this. Yes, all the services which are in Running status are being returned in this output. Correct. So let's move on to the second part of the requirement, which is that the service's startup type should be Manual. Since there is an "and" in the requirement, we can put brackets over here and add another condition. Here we will specify the second part of the requirement, with an -and operator in between, right? And the second part of the requirement is that its StartType should be Manual, right? Yes, we have got the output, but since the startup type is not in the default output, we need to put another pipe and then select it. I want the name and status of the service and its StartType. Right, let me execute this. Yeah, the statement which we prepared is working perfectly fine, and we are able to see the services which are in the Running state and whose startup type is Manual. Right. On a side note, we can keep adding pipes and use more powerful cmdlets in order to continue working in a single statement, right? But if, due to this, the statement becomes less readable for you, you can always break the statement like this: wherever you have a pipe, just press Enter. Now if you execute, you can see it continues to work, right? Now, the last topic of this lecture. Because I'm running my PowerShell statements on the same system where these services are running, I don't need to specify the computer name every time, right? But if I were to fetch these services from a remote machine, I could always specify the computer name like this. In my case, localhost basically means my own system. And then when you execute a statement like this, PowerShell will fetch this particular service from that particular computer. If we had provided some other server name within the domain, then PowerShell would have fetched the service from that remote machine. This is how we can use PowerShell for remotely managing Windows services. All right, in this lecture we learned the cmdlet called Get-Service. I am sure you are clear on how to fetch the services that you need using Get-Service. Well, that's it for this lecture.
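The filtering statement built up in this lecture looks roughly like this (a sketch; break at the pipes if you prefer multiple lines):

# Running services whose startup type is Manual, with the extra property selected
Get-Service |
    Where-Object { $_.Status -eq 'Running' -and $_.StartType -eq 'Manual' } |
    Select-Object Name, Status, StartType

# On Windows PowerShell 5.1 you can also point the same cmdlet at a remote machine
Get-Service -Name Winmgmt -ComputerName localhost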
[01:16:00]Let's continue working on service management using PowerShell in the next lecture. See you there. Take care.
[01:16:10]Hi there, welcome back. In the last lecture, we learned how to use the Get-Service cmdlet of PowerShell to fetch the required services into PowerShell. We can fetch the services directly by name or using a wildcard, as well as by using other properties like the service status or the start type, et cetera. Now it is the right time to explore how to take different actions like stopping a running service, starting a service, or changing the service user account, et cetera. So let's get started. This is the Print Spooler service, which is currently in the Running state, and I want to stop it. What we can do is simply put a pipe and place another PowerShell cmdlet called Stop-Service. Right? Just execute this statement, now go here and refresh. You see the service is stopped, which was running earlier. The same can be verified in PowerShell as well by running this statement, and you can see the service is stopped. Please notice my PowerShell is running as administrator. If yours is not, please relaunch your PowerShell in administrator mode. Now we want to start the service again using PowerShell. For this, what we can do is use another cmdlet called Start-Service. Execute. Go here and refresh. You can see the service is now in the Running state. Using these cmdlets, Stop-Service and Start-Service, we can change the state of a service. If your Print Spooler service occasionally gets stuck, you can always try this option of stopping the service and, after a couple of seconds, starting the service again. Please note there's another cmdlet called Restart-Service, which you can use like this.
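For reference, a minimal sketch of the statements demonstrated here, using the Print Spooler service as the stand-in example:

# The cmdlets demonstrated so far, using Print Spooler as the example (run as administrator)
Get-Service -Name Spooler | Stop-Service     # stop the service
Get-Service -Name Spooler                    # verify: Status should now be Stopped
Start-Service -Name Spooler                  # start it again
Restart-Service -Name Spooler                # stop and start in one step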
[01:18:50]If you execute this statement, it will automatically first stop the service and then bring it back into the Running state. But from my experience, this is not always reliable. You should rather go with the explicit approach: stop the service, add a sleep time of some 10 seconds so that the service stops gracefully, and then start it again, right? Sometimes Restart-Service doesn't behave correctly; this is from my experience. Moving on. Right now, the startup type of the service is Automatic. This means whenever the system reboots, the service automatically comes back into the Running state. But if you want to change this behavior and manually start the service yourself whenever the system reboots, you can change it using the Set-Service cmdlet. If I just execute this statement,
[01:19:55]you can see now the startup type is Manual, right? You can use Set-Service to perform multiple actions in one go. This time we are going to change the startup type back to Automatic, and we are also going to change the description from this to this new description. If I just execute this command and go here and refresh, you can see that the startup type of the service is now Automatic and its description is changed. All right, currently this service is running through the Local System account. Sometimes organizations have policies to follow and want to run a particular service through a domain service account. Let's explore how to change the service user account. For changing the service user account, we can again use the Set-Service cmdlet only. But this option is not available in PowerShell version five; currently, you can only use it in PowerShell version seven. For performing this action, launch PowerShell 7. Of course, since we are changing the state of a service, I should launch it as administrator. In my case, I have already created this 'test' user as a local user account on the system. In your case, if you want to run your service through a domain user account, you can type your account name over here, right? I'll copy this line and go to my PowerShell 7, paste it here, and it's asking for the password. Let me pass it. Okay, now I'll execute the second line, paste. Now I'll go to my service and refresh. You can see the service account is now set to the test user. The user account for this service has been successfully changed to the test user. Well done. Now, as I told you, this particular option of passing the credentials to Set-Service doesn't work in PowerShell 5. But if for some reason you do not want to use the PowerShell 7 version at all, you can perform the same task in these two ways in your current version of PowerShell itself. In the first one we are using one approach to change the user name for our service, and in the second method we are making use of WMI. All right, my dear friend, with this, I hope you are clear on the concept of managing Windows services using PowerShell. I urge you to complete this assignment and test your knowledge. Well, that's it for this lecture. Take good care of yourself. Thank you.
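As a quick recap of this lecture (a sketch; the Spooler service and the local 'test' account are placeholders for your own service and account):

# Graceful restart instead of Restart-Service: stop, wait, then start
Stop-Service -Name Spooler
Start-Sleep -Seconds 10
Start-Service -Name Spooler

# Change startup type and description in one go
Set-Service -Name Spooler -StartupType Automatic -Description 'Print Spooler - managed by PowerShell'

# Change the service logon account (Set-Service -Credential requires PowerShell 7)
$cred = Get-Credential -UserName '.\test' -Message 'Service account'
Set-Service -Name Spooler -Credential $cred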
[01:23:15]Hello my dear friends, and welcome to this lecture. As we understand, if we are dealing with hundreds of servers, and there are certain Windows services that we want to restart periodically, then PowerShell is a great tool to explore, right? In this lecture, I'm going to demonstrate two simple PowerShell scripts which are immensely helpful in restarting your Windows services on a scheduled basis. The best part is that you can use the same scripts to restart services on various servers on an ad hoc basis, right? I'm sure you are excited to see how we are going to run a PowerShell script on a centralized terminal server, and from there it is going to restart the Windows services on various servers remotely, without having to log in to those servers. Without wasting any time, let's get started. First of all, let me show you the different VMs that I have deployed. In my Azure subscription, I have deployed this terminal server. Then there are two app servers, AppServer01 and AppServer02, which I'm going to use for the demonstration. We are going to run our script on the terminal server. We will not log in to these two servers; sitting here, we want to restart the services on those two servers. This is the requirement. Right now, let's see how we can deal with this using PowerShell. To deal with this requirement, I have prepared two versions of the script with very small changes between them, right? I'm going to explain everything line by line, don't worry at all. Everything is going to be crystal clear by the end of this lecture. Yes. Okay, so firstly, let's talk about this version, which is meant for requirements in which we have a fixed set of Windows services that need to be restarted, but the list of server names is not fixed, right? We have created this servers INI file in which you can update your server names. If you want to include a new server, just write its name on a new line; it will automatically be used by the script, and the script will restart those Windows services on this server as well. This is a simple concept. Now let's go through the script. This is our base directory where the script is kept. Then we are creating a log file variable; because we have appended the date in the file name itself, you get one log file for each day, right? Then we have this Start-Transcript command. This is a very interesting command: once we have specified this file, it is going to write all the different log messages into it. So whatever statements we have written using the Write-Output cmdlet will be visible inside this file because of the Start-Transcript command. Right here we are reading the list of servers. The servers INI file is here; we are reading this INI config file, and this array of servers will then contain all these servers. Right after this we have the services list defined over here: all the different services that we want to restart, specified in a comma-separated manner, right? A very small change which you can make if you have a requirement of this sort:
If you want to keep this list of services outside of the script — say, the way we have specified the different servers in an INI file — you can keep another services INI file as well and read the services from that file. The benefit will be that you will not have to come and edit this script for changing these services, right? In this demonstration, we are going to restart these services on this set of servers, right? Of course, you need to replace these services with your own application services. After this, what we are doing is using the PowerShell cmdlet Invoke-Command with the -ComputerName parameter, and we are passing all the different servers in one shot. Even if 100 servers are there, we are still invoking our logic for restarting the services on those 100 servers in one go. The logic for restarting the services is written inside this script block, right? Let's take a look at this logic. We have defined the sleep time as 30 seconds. Then, because we have defined the services variable on our local host but we are invoking the script block on remote servers, this variable will not be automatically passed to the remote computer, right? For this reason we have to specify it like this: this way PowerShell knows we are referencing the local variable but we want to use its value on the remote machine. So we are first fetching the initial status of the services, correct? We are interested in Name, Status, and PSComputerName. Then we are invoking the Stop-Service cmdlet and just passing the list of services again; we expect this will stop the services on all the different computers. After that, we are giving this 30 seconds of sleep time so that all the services can stop. Again we're checking the status so that we can see whether the services were actually stopped or not. Then, okay, here I don't think the sleep time is needed. After this, we are invoking the Start-Service cmdlet to start the services again. Right before leaving the system, we are checking the final status, whether the services came back to the Running state or not. For this reason we are collecting the final status. This is the script block which will be executed on the remote machines. In the end, we are just stopping the transcript, and the script ends here, right? It is time to do the practical. Now, let me close this and we'll copy this code to our terminal server. Our code will lie here. We will restart the services on these two remote machines from here, right? All right, our scripts are copied. Let me go here and launch PowerShell.
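Based on the walkthrough above, a rough sketch of what such a version-one restart script might look like (the file names, paths, and service names here are placeholders, not the author's exact code):

# Restart-Services.ps1 - a sketch only; adjust the paths, server list, and services for your environment
$baseDir  = $PSScriptRoot
$logFile  = Join-Path $baseDir ("ServiceRestart_{0}.log" -f (Get-Date -Format 'yyyy-MM-dd'))
Start-Transcript -Path $logFile -Append

# Server names, one per line, read from a plain-text INI-style file
$servers  = Get-Content -Path (Join-Path $baseDir 'servers.ini') | Where-Object { $_.Trim() }

# Services to restart on every server (replace with your application services)
$services = 'Spooler', 'W32Time'

Invoke-Command -ComputerName $servers -ScriptBlock {
    $svc = $using:services                      # pass the local variable into the remote session

    Write-Output 'Initial status:'
    Get-Service -Name $svc | Select-Object Name, Status

    Stop-Service -Name $svc
    Start-Sleep -Seconds 30                     # give the services time to stop gracefully
    Write-Output 'Status after stop:'
    Get-Service -Name $svc | Select-Object Name, Status

    Start-Service -Name $svc
    Write-Output 'Final status:'
    Get-Service -Name $svc | Select-Object Name, Status
}

Stop-Transcript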
[01:31:30]Let's take a look at the servers list. It has got two servers on which we want to restart our services. Which services do we want to restart? These services, right? Let's execute our script now.
Enter. This is the initial status of the services on the different computers: this is AppServer01 and this is AppServer02. Now it is stopping the services, and remember we have included some 30 seconds of sleep time. Because of this, you will see the delay. This is the time we have allocated for the services to go down gracefully. Once the services are stopped, we check the status again on both the servers. You can see the services are stopped, right? Now it is starting the services, and then again, because starting the services can take some time, we are waiting for 30 seconds. And now this is the final status: on server one, all the services are running; on server two, all of the services are running, which means the services were restarted gracefully without any issues on both the servers. Right, let me close this here. Let's say somebody reports an issue that on this server a certain service did not stop, something like that. Let's say somebody reported an issue for AppServer02. You can open this log file. So let's say you have been reported an issue for this server related to the service restart. You can go to this log file and see: okay, the services were running on the server, then they were stopped, and then they came back into the Running state again. So there are no issues with the service restart script, correct? This way we can make use of the transcript, right? All right, so this was the first version of our script. Very easy: just update your servers in the servers file and the list of services in this variable, and you are done, right? Let's meet in the next lecture and explore the second version of our solution to this requirement. See you there, take care.
[01:34:10]All right, let's take a look at the second version of our script. This time, instead of an INI-based configuration file, we have used an XML configuration file. Let me show you; our configuration file looks like this. Everything is inside these service-restart tags. Inside this, we have different server and service combinations. Visualize this: you have different combinations here, and you can just add your servers like this. This tag accepts comma-separated values, so you can add all your servers by separating them with a comma. The same goes for the services: all the different services that you want to restart should be added like this. What is the benefit of this approach? Let's say on AppServer01 you want to restart only the Spooler service, whereas on the second server you want to restart Spooler as well as this other service. Going by our previous logic, how would you deal with this requirement? But here you can deal with it very easily; in fact, all it takes is updating the values here and you are done, right? Now here again, I have used AppServer01 and 02 only because I do not have too many servers, but whatever your servers are, you can update them like this. This way we can update our server-service combinations, and then we have to execute our script. Let me copy this code. Okay, and perhaps you are already aware, we should try opening the XML file like this. If it opens like this, we are sure that there is no syntax mistake inside the XML file. Right, now let's talk about the script. What is there inside it?
Pretty much the same thing. It's just that here, instead of reading the INI file, we are reading the XML file, and we are reading all the different server-service combinations. Let me execute this for you. The XML content looks like this. Then if we go inside this and grab it, we get all the different server-service combinations. Inside this, we have our servers and the corresponding services that we want to restart. Right, one by one we are reading these combinations, splitting the values by comma, and storing them inside these variables. From here, we are repeating exactly the same logic which we already discussed for version one. Right, let me clear everything on the screen. Let's execute our script: launch PowerShell, run the planned service restart script, and hit Enter. You can see this log file is created, which is storing our messages. Firstly it picks those servers; the Spooler service is getting restarted on AppServer01,
and on AppServer02 it is coming up as well. Okay.
[01:38:00]That's it. All the different services are restarted as per our requirement, and we can take a look at this log file to understand what exactly happened. What this means to us is that the script is doing the expected job, right? This automation is definitely a success, correct? Now, you do not want to come to this PowerShell window and execute this script every day at 04:00 in the morning, right? So what you can do is schedule this script in the Windows Task Scheduler. The scheduler will take care of executing your PowerShell script as per your defined schedule, as per your requirement. All right, my dear friends, I hope both versions of this planned service restart script are clear to you. We had this simple problem of a planned service restart, and we figured out two unique solutions which take care of this requirement very well, right? More than anything else, if you are learning this approach — what kind of configuration file is best suited for a requirement, and how you should then design your PowerShell — that is the most important thing; everything else is secondary. I'm sure you are already planning to use this script at your work for restarting a couple of your application services. Well, on this note, let's conclude this lecture. Take good care of yourself. Thank you.
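Since the lecture suggests handing the script over to the Windows Task Scheduler, one way to register such a daily 04:00 job from PowerShell looks roughly like this (a sketch; the script path and task name are placeholders):

# Register a daily 04:00 scheduled task that runs the restart script (run once, as administrator)
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
           -Argument '-NoProfile -ExecutionPolicy Bypass -File "C:\Automation\Restart-Services.ps1"'
$trigger = New-ScheduledTaskTrigger -Daily -At '04:00'
Register-ScheduledTask -TaskName 'PlannedServiceRestart' -Action $action -Trigger $trigger `
    -User 'NT AUTHORITY\SYSTEM' -RunLevel Highest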
[01:39:55]Hello you awesome people, and welcome to this lecture. We are going to develop a tool using PowerShell to run validation test cases on several servers at a time and generate an awesome-looking formal validation report like this for our environment. Take a look at this beautiful report, loaded with content that helps us understand the present state of the environment. At the top, we have a summary. Here we can see how many test cases were successful, how many failed, how many hit exceptions, et cetera. If you want to know more about the validation results of any server corresponding to a tier, you can either click on the buttons in this navigation or directly click on the server name in the summary table. For this web server, we can see that, firstly, we have system-level information, which contains the server name, OS, and domain information. Then we have the validation results of the other test cases indicating the state of the system and application. In this example report, we have considered the application to be a simple three-tier application, with a web server farm hosting the application and serving responses to end-user traffic. Then we have the application farm that contains servers for batch processing or background processing jobs. Lastly, we have the database server, which is being used for storing the business data of this application; hence, it needs to be validated as well. While we have taken this application architecture as an example, I want to make it clear that our validation tool is not limited to validating only such applications. In fact, you can very well use it for validating any of your web applications, desktop software, console-based applications, and so on. Why am I saying this? Because our scripts are configurable in nature, which makes them very flexible to use and able to accommodate new requirements. All right, I want to make it a very easy decision for you whether you should watch the next lectures and understand this tool, or whether it is not for you. Let us calculate the ROI, or Return on Investment, for our automation. I have worked in everything from very small companies to large MNCs. From my experience, we deal with at least 50 customers, with on average ten servers per customer environment. Yes, typically we have three environments: Dev, QA, and Prod. This way we get the total number of servers we are dealing with. Okay, essentially, we need to perform application validation at the time of critical events, like post monthly patching or post application upgrade, to ensure things are well and good. Also, we need to perform application validation several times while resolving critical issues leading to a business-down situation; during these events, time is an important factor. I remember, a few years back, when the major WannaCry ransomware attack happened, I was among those who were asked to validate hundreds of servers overnight. I wish I had this script back then so that I could have had a peaceful sleep that night.
And this event was also an inspiration to develop this solution, which I'm going to present to you in the coming lectures. Can we say we might need to validate a server three times in a month: one validation after monthly patching, one after an application upgrade, and one validation on an ad hoc basis? Yes. How much time you need to validate one server may vary depending upon the nature of your work, but if you plan to automate validation fully, you can automate all the validation steps that are performed on each server by the various teams, like the Windows team, your database team, the monitoring team, and the application team. I believe you agree with me on 5 minutes per server; quite conservative, correct? Friends, look, I'm not an expert at calculating these financial values. I'm going to provide this Excel file to you; you can make your own adjustments and do the calculation again. Yes. All right. Now, let's see how much we are going to save per year from this automation. Based on these inputs, let us look at both the effort saved and the revenue. Wow, these figures are impressive. What do you say? With this, I believe you have clarity on whether you want to go through the next lectures and understand every single aspect of this tool. If your answer is yes, I want to tell you: don't just go through the lectures. Practice with me and try to grasp the concepts. This is an excellent opportunity for you to stand out at work and increase your chances of getting an award or promotion, or any other form of appreciation from your management. I seriously doubt any management will be able to ignore an employee who saved this many hours of manual effort in their organization. If you successfully implement this automation in your organization and write in your resume, say, 'I saved 100K+ USD for my employer using my PowerShell automations,' do you think any employer out there in the market will be able to ignore your potential? I don't think so. Yeah, I have dedicated a month of my time to this project, and it would be lovely to get feedback from you about what you feel about this automation. But trust me, nothing will make me happier than the screenshots of your report once you implement this in your organization. On this exciting note, let's conclude this lecture here and meet in the next lecture to get started with automating the system and application validation process for your organization. Take good care of yourself. Thank you.
[01:48:15]Hello and welcome to this lecture.
[01:48:20]As we are starting to learn how we can use our automation [01:48:25]to run validation test cases on multiple servers and [01:48:30]generate a report like this, it is a good idea to start [01:48:35]with the directory structure of this project. As well as understand [01:48:40]what to expect inside each of these files. A good understanding [01:48:45]of this will give you an upper hand while customizing this automation [01:48:50]as per your requirements. This is just a beginning. I [01:48:55]won't burden you with too many details. Yes, [01:49:00]please note everything related to running this project is [01:49:05]inside this folder. You can copy it anywhere in your system and [01:49:10]run the scripts. Feel free to rename the folder name [01:49:15]or these scripts because we have not hard coded these names [01:49:20]inside our code. Okay, this log folder is the [01:49:25]place for all different logs created by our validation scripts. [01:49:30]These log files you are seeing here are from the previous runs of [01:49:35]this script, as the name itself must be suggesting to you, [01:49:40]a new log file is created for each day this script runs. [01:49:45]Each of these log files essentially contains all different [01:49:50]information, error, and debug messages [01:49:55]that can be used to know the root cause in case your script [01:50:00]runs into an issue. At the time of writing scripts you [01:50:05]might find it utter waste of time to write good and helpful [01:50:10]log messages, but trust me, they are your best buddies [01:50:15]in the times of issues reported for your script. When you get [01:50:20]a ticket in the morning saying your script didn't work properly last night. [01:50:25]And you will be like, hey, how the heck I would know what happened last night [01:50:30]because I was sleeping at the time of issue. [01:50:35]Well, this be your situation without the log files. [01:50:40]But with log files, you can just chill and fix the issue with [01:50:45]the help of log messages recorded at the time of issue. [01:50:50]Maybe you can thank me later. We have used [01:50:55]logging module from PS gallery to write our log messages. [01:51:00]You don't know this module. I will take care of it by dedicating a lecture [01:51:05]to understand this module in detail. [01:51:10]The reports folder contains these stem reports generated [01:51:15]by our validation scripts, intermediate temporary reports [01:51:20]inside these folders, as well as the final consolidated reports. [01:51:25]I wanted to keep the things very simple for you to understand. So [01:51:30]reports and log file directory is right here, but [01:51:35]you can also store them in a shared directory so that other stakeholders [01:51:40]can access them over a network path. It is completely optional [01:51:45]and you need not to worry about it at this point of time. Yes, [01:51:50]the files in the library folder contain [01:51:55]all different functions that we can use once or [01:52:00]multiple times in our main scripts. Of course, this need not [01:52:05]to be set because that is what libraries are meant for, right? [01:52:10]This file includes common functions for [01:52:15]logging the messages or triggering the validation job. [01:52:20]These functions supports a functionality, but they are not [01:52:25]functionality in itself, right? For example, you definitely need [01:52:30]logging, but we do not write an automation script purely for getting [01:52:35]log messages, right. 
Those common functions [01:52:40]you can keep inside this file.
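As a side note on the Logging module mentioned above (published on the PowerShell Gallery), a minimal usage sketch might look like this; the log path is a placeholder:

# The Logging module: install once, add a file target, then write messages
Install-Module -Name Logging -Scope CurrentUser
Import-Module Logging
Add-LoggingTarget -Name File -Configuration @{ Path = 'C:\Automation\Logs\validation.log' }
Write-Log -Level INFO  -Message 'Validation run started'
Write-Log -Level ERROR -Message 'Something went wrong'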
[01:52:45]Then the intelligence, the brain of our automation [01:52:50]is this file which contains all different test cases [01:52:55]that we can run against any system. And it returns [01:53:00]the output that we can publish in our validation reports.
[01:53:05]This file is purely meant for fun. All it contains is some CSS and JavaScript to make our report look more beautiful and presentable. If I just delete this file, the validation will still happen, but the report that gets generated will be so ugly that nobody will be able to tolerate it. Sounds interesting, doesn't it? You might be thinking, why two styles? Well, while writing this code, I was experimenting with the report's look and feel. I was confused about which one looks prettier, and then eventually decided that I should not be the one deciding this; it should be you people. Over here, you can see there are two styles of report. Whichever variant you like the most, you can just use it. Thinking about how much effort it will be to jump from one to the other? Well, it is a pretty difficult task, if you call changing this variable name from this to this a difficult task. You got it? Yeah. Anyway, we will have a dedicated lecture on this beautification part; at that time, I will also seek your feedback on how to enhance this report even further. I am sure there is scope to improve it and make it look even better. All right, my dear friends, to conclude this lecture: in the root directory of this project, we have those scripts which we can execute directly. These scripts make use of the code from the library, create the beautiful-looking HTML reports, and place them inside the Reports folder. While doing this, they continuously write log messages to inform us about what is going on. I hope you are now clear on the file and directory structure of our automation project. The configuration file is so important, and the journey of how I reached it is so interesting, that I need to share it with you in detail, and I want a separate lecture for this configuration file alone — which is, well, the very next lecture. See you there. Take care.
Thank you.
[01:56:05]In this lecture, we will understand the configuration file of our automation, which does a pretty good job for us. Config files are used to configure the parameters and initial settings for our computer programs. When we are dealing with any automation which is doing some significant work for us, it is quite essential that we use config files to pass information or preferences to be used by the script. If needed, we can just make changes in the config file to change the behavior of the script without making any code change. For example: user credentials to connect with the database, your application URL and service names, et cetera. Some five years back, I had written this important automation script, and its configuration file looked like this. This file contains some essential information needed by our script to function. Please notice that the keys and values are separated by this delimiter. That is fair enough; the script did a wonderful job, and nothing is wrong with it to date. But the problem is, to pass this information into PowerShell, how much code did I write? Just look at this: 100-plus lines of script just to parse this information and load it into PowerShell variables so that we can make use of it. This is insane — that many lines just to load the information. Crazy, right? Well, back then, nobody was there to tell me this. At times you need someone who can scold you and correct your mistakes. My problem was that I was the only one who knew PowerShell in my team. I self-learned everything, and the drawback of that situation is that you do not have anyone around you to correct your mistakes. Fun fact: now I'm accepting it quite happily that this code is not at all good, but I would have shouted and defended this masterpiece when I wrote it. This is normal human behavior, right? Okay. Do you know how much of this code could be reduced if we were using an XML-based configuration file? Let me show you.
This is a simple XML file for our demonstration. I can read the content and typecast it to the XML type like this. That's it. Now we can directly read the information from this configuration file and store it inside PowerShell variables for later use. If you have multiple tags with the same name, you can access them like this.
[01:59:35]Or perhaps writing a foreach loop like this is a better idea. Well, for now, this is all we need to know about XML parsing using PowerShell. Simple, right?
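A small sketch of the XML parsing technique just described (config.xml, the element names, and the values are illustrative assumptions, not the course files):

# Hypothetical config.xml for illustration:
#   <settings>
#     <database><server>sqlserver01</server></database>
#     <services><service>Spooler</service><service>W32Time</service></services>
#   </settings>
[xml]$config = Get-Content -Path 'C:\Automation\config.xml' -Raw

# Read a single value directly via the element path
$dbServer = $config.settings.database.server

# Multiple tags with the same name come back as a collection, so loop over them
foreach ($svc in $config.settings.services.service) {
    Write-Output "Service from config: $svc"
}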
[01:59:55]With this, we are all set to understand the configuration file of our project. Let's open it and look at it. Everything is inside these validation tags. Then we have global variables; here we can declare those variables which we will need inside our script. I have added this prefix, XML underscore, so that we can differentiate the variables coming from the XML file from the other variables which we are going to create inside our script. After going through all my lectures, if you feel you need more variables over here, you can just create more variables like this and start making use of them inside your script. Simple enough, right? How we read this information and create the variables inside PowerShell is something we will understand later. Right, then we have the tiers information. Depending upon our application architecture, we can add as many tiers as we want; the script should take care of it automatically. For example, in case you have another tier in your application along with web, app, and DB — let's say an API tier — you can simply add it like this and your job is done. With this simple change, you can now run the validation script for this tier, and this tier will automatically become part of the report. Okay, let me remove it for now. Inside a tier we have only basic information, like the tier name and the validation tasks that we want to perform on that particular tier. Depending upon the nature of the tier, you can have different tasks, right? For example, whether your database is healthy or not you want to validate on the database tier only, while disk health you might want to validate on all the tiers. At the end of the day, we are doing validation for servers, and what is a server without a disk? This design makes some sense now. All right, I hope you are able to clearly understand the benefits of this XML configuration file. I'm just reiterating: we could have stored this information directly in the PowerShell script. The benefit of having such a file is that now anyone can edit these configuration values, and the person need not be well versed in PowerShell scripting. Because we have a simple configuration file, you can use the tool for various applications. Let's say there's another application which is totally different in architecture from this one; with small changes in the configuration file, you can still use the tool for validating that particular application. Also, if you remove a few tasks from here which are no longer valid, they will not be seen in the validation report anymore. This level of simplicity you will not be able to achieve without having a dedicated configuration file for your automation, right? Yes, I have one piece of advice for you, though. Configuration files are supposed to be very critical, and any small mistake in the configuration file can stop your code from working. But don't be afraid of experimenting with this file.
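To visualize the kind of tiered structure being described, a purely hypothetical configuration (not the author's actual file) could be parsed the same way the lecture describes:

# Hypothetical tiered validation config, embedded here only for illustration
[xml]$xmlConfig = @"
<validation>
  <globalvariables>
    <XML_CustomerName>Contoso</XML_CustomerName>
    <XML_Environment>QA</XML_Environment>
  </globalvariables>
  <tiers>
    <tier name="Web">
      <task>ValidateURL</task>
      <task>ValidateDiskHealth</task>
    </tier>
    <tier name="DB">
      <task>ValidateSQLDatabase</task>
      <task>ValidateDiskHealth</task>
    </tier>
  </tiers>
</validation>
"@

foreach ($tier in $xmlConfig.validation.tiers.tier) {
    Write-Output ("Tier {0}: tasks = {1}" -f $tier.name, ($tier.task -join ', '))
}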
After watching all my lectures, once you're comfortable with this automation, play with this XML structure: add or delete information from this file as you need it. Remember, I could enhance a simple configuration file from this to this; maybe you can do an even better job here. On this positive note, let's conclude this lecture. Take good care of yourself.
Thank you.
[02:04:40]Hello you awesome people, and welcome to this lecture. I want to give you sufficient knowledge about this system and application validation tool so that you can own it fully. My goal is that after completing this module, you should be able to implement it in your organization. For this, theory alone will not help. In order to grasp the concept well, we definitely need a live example with which most people can connect, so that they can think about the use cases where they can make use of this tool, right? In this lecture, we will try to understand a simple application architecture that is very generic in nature. I'm sure either you are already working on such an application, or you will at least be able to understand it very easily. Once the application architecture is clear, we shall try to use our automation tool and validate the application — the systems and the application, of course. Right. Well, in this section we are going to do all of this, and it is going to be fun. So let's get started.
Let us discuss a simple application architecture. First of all, we have end users; of course, what would your application do without end users, right?
[02:06:25]And they are sending traffic to your application over Internet, [02:06:30]which then goes through your load balancing device and hit [02:06:35]the web servers, right? These are the servers which are responding to [02:06:40]the end user traffic from here. Now, your applications business data [02:06:45]and end users data needs to be stored somewhere.
For [02:06:50]this, the web server continuously talks with the database. [02:06:55]You need a database server also.
Then, in general, applications have dedicated servers for their background processing tasks. For this, we need a dedicated application server — or batch server, as you may call it. Right. Because we want to ensure connectivity between the web server, database server, and application server, we tend to keep them inside a single network. Although I agree your application may look a little different from this, trust me, this is the most common application architecture we see around us. Yes. Now, in this system, where do you see the possibility of using our automation? I see so many areas where we can use it. Firstly, our application is exposed over the Internet.
There is a certain URL which we want to validate.
That is one area. We can also validate the load balancer.
Then on the web server, we have to validate whether the IIS services are running fine, whether the application is accessible from the local URL or not, and whether your web server is able to talk to the database server or not, because if this connectivity is broken, your application will go down. And there could be so many other things which you want to validate on the web server; we will talk about those later. Right, then on the database server, one thing which we definitely want to validate is whether we are able to connect to the database and execute our queries or not.
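A couple of the checks just mentioned could be expressed as small PowerShell probes (the service name, host name, and port here are assumptions for illustration):

# Is IIS (the W3SVC service) running on this web server?
(Get-Service -Name W3SVC).Status -eq 'Running'

# Can this server reach the database server on the default SQL port?
Test-NetConnection -ComputerName 'sqlserver01' -Port 1433 -InformationLevel Quiet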
Then there are certain database-related services which we want to ensure are in the Running state. Coming to the application or batch server: here, too, you must be having certain scheduled tasks or Windows services whose health matters to you, right? Like this, on each of the tiers you will find certain aspects which you want to validate, and they become your validation tasks, right? In the next few lectures, we will try to create this environment by deploying the resources in the Microsoft Azure cloud.
We will create a resource group and a virtual network, and deploy the web server, database server, and application servers.
[02:09:30]And then deploy our validation script and run it over there.
[02:09:35]This is going to be our agenda for the next three lectures.
I'll be really happy if you want to come along and deploy the resources in Microsoft Azure with me; then we will deploy the script, and we'll see how to plan the script execution. But I don't expect all of you to know Microsoft Azure, and I don't know whether it is in your plan to learn Microsoft Azure or not, right? I do not want to simply assume that, just because I am excited to share this knowledge, you are also excited to learn it right now. Usually I do not say this, but here I'm saying it: it is completely optional to follow this lab setup. If you do not want to do it, that's completely okay, and you are not losing anything. But we definitely need to practice, right? For this, what we will do is, on the same laptop where you are watching this lecture right now, we'll try to deploy our scripts, and we'll use our own machine as the web, app, and database server. It is so easy and interesting. Without wasting any time, let's run this script locally on our system.
Let me show you the configuration [02:11:00]file. Configuration file looks like this.
[02:11:05]You do not worry about these things. We will discuss them in detail [02:11:10]right now.
Just understand that on the web server these tasks will run, and on the app server these tasks; what these tasks do, you don't need to worry about yet. Okay, first of all, launch PowerShell in administrator mode,
[02:11:25]run this command: Install-Module Logging. If you do not have this module already installed on your system, you have to execute it, because we have used this module inside our code, right? I already have this module, so I need not follow this step, right?
[02:11:55]Look at this: we have written these three statements. We are executing our server validation script three times, and to the server parameter we are passing different values. But basically the script will run on the same server, which is our local machine; in the report it will just be shown as server A, B, and C. This name is only used for display in the report and has no other importance. Which test cases get executed depends upon the tier name that we pass. Okay. Instead of executing these statements one by one, I'm doing it in one shot. At this point in time, I'm not expecting you to understand them. Just keep watching and follow along — do it in your system, don't just keep watching. Okay, let me delete this. Okay, I'm pasting it over here.
[02:12:55][02:13:00]Okay, The folder is created and [02:13:05]reports have started coming into this.
Here we can see that this 'validate SQL database' test failed. It failed because Invoke-Sqlcmd is not present on my system. Believe me, if it is failing, that is good, because this is my own system and I do not have a database installed. If this test case were still successful, that would actually be a problem, right? The same goes for the WebAdministration module: it is not available on my system, and it's used for validating IIS in our code. All right, the script executed and our reports are created. Now, if you see, we have another script called report consolidation, which actually consolidates these reports into a single report. Let's execute this script as well. There we go.
There was one exception and one error on [02:14:10]the web server.
We have got this in red color. Same for database, [02:14:15]there was an exception thrown. It is in red color. App server seems to [02:14:20]be all good. No issues on this, right? This way our report [02:14:25]is designed and this is the output.
All right, so if you are [02:14:30]able to follow the lecture and you are able to generate this report, [02:14:35]you have deployed the script successfully. Congratulations, [02:14:40]now you are good to continue with me till end of this course, okay?
[02:14:45]All right, in the next three lectures, we will deploy [02:14:50]these resources in Microsoft Azure and then deploy our script over there.
As I told you already, those lectures are optional for you if you have already learned how to deploy the script on your local machine. Well, that's it for this lecture. Take good care of yourself.
Thank you.
[02:15:10][02:15:15]All right, [02:15:20]let's proceed with our lab set up in Microsoft Azure. [02:15:25]First of all, we are going to create a resource group.
Let me create it; this name for the setup is fine. Let me hit Review. All right, and hit this button to create the resource group. Okay, the resource group is created successfully. Let's go inside it and deploy our virtual machines. First of all, we are going to deploy one web server here. Okay, this 2019 Datacenter image is fine; let's select this. This is my web server. No infrastructure redundancy is required. This is the image, all good; this is the size. Okay, username and password we have to set.
Cool. We are going to create an RDP session, so we need to select this port. Click the Next button. We are okay with the default disks we are going to get; no need to add an additional disk. Click Next. Okay, so this is going to create this particular virtual network; we are fine. Let me click Next. Next. Okay, hit this Create button and initialize the deployment, okay? While this virtual machine deploys, let us deploy another virtual machine and name it as the app server. All right, so going here, repeating pretty much the same steps again, this time for the app server, okay?
[02:17:50][02:17:55][02:18:00][02:18:05]Yeah, [02:18:10]go to next. No need to add any disk Now, this [02:18:15]is important.
We want our app and database servers [02:18:20]to communicate with each other. Right. Easiest way of making [02:18:25]this work is we should put them all inside same virtual network. [02:18:30]Make sure for app and database servers, [02:18:35]same virtual network is selected over here, right?
[02:18:40][02:18:45]Nothing in these screens. So I can hit this button again and [02:18:50]create this run.
[02:18:55]Okay? [02:19:00]Now, since our app server and web server are up and running [02:19:05]fine, it is time to deploy our database server. For this, [02:19:10]go to all services, select databases. [02:19:15]What we want to deploy?
We want to deploy SQL database on [02:19:20]a virtual machine where we can run our scripts.
This seems to be the ideal option, right? Click the Create button. It is giving us these options, out of which we are interested in virtual machines with a free SQL Server license on Windows Server 2019. Yes, this seems to be a good fit for our requirements. Click Create.
We want to deploy our [02:19:50]virtual machine in this resource group. And what's the name SQL?
Let's go with that name. No infrastructure redundancy required. Okay. Let me give the username and the password — pretty easy. Go to Next. I do not have any disk requirements; go to Next. Yes, this is important: we must ensure we are deploying in the same virtual network as the other virtual machines. Go to Next. The port I'll let remain at 1433, the default port. If you wish to create a database as soon as the virtual machine is deployed, you can select this and give your username and password. We are okay for now; no need to create this. We are good to go to the next step and create the virtual machine.
[02:21:00]Hit this Create button.
[02:21:05]All right, my dear friends, in this lecture we deployed a web server to serve the end-user traffic of our application, a SQL server to store the data of our application, and also this app server, which acts as a batch processing server to process the background jobs. Now, in order to run our validation script on these servers, we need to make a few changes, right? So let's meet in the next lecture and make those changes so that we can execute our scripts and prepare a nice validation report. See you there, take care.
[02:22:00][02:22:05][02:22:10]First of all, let's connect to the web server,
[02:22:15][02:22:20][02:22:25][02:22:30]pass the username and password, [02:22:35]and hit the okay button.
[02:22:40]All right, we deployed this virtual machine as a web server, right? So ideally we should be installing IIS or Apache Tomcat, hosting some static page on it, and mimicking the exact scenario of a web server. But that would eat up a lot of time, right? And also, I suppose many of you who are into IIS may not want to see Apache Tomcat running, right? And then there are so many web servers in the market; which one to choose and install over here is a difficult choice, right? So how do I want to take it? I do not want to install any web server. Instead, on my own website, I have created a new page which looks like this: application version and status. Of course, these are just dummy values. What we are going to do in our code is simply check whether this particular page is up and running or not, right? And how are we doing that? We will just check the HTTP status code. Makes sense.
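The status-code check described here could be sketched roughly like this (the URL is a placeholder, not the page used in the course):

# Consider the page healthy only if the request succeeds with an HTTP 200 status
try {
    $response = Invoke-WebRequest -Uri 'https://example.com/app-status.html' -UseBasicParsing
    Write-Output "Validation passed: HTTP $($response.StatusCode)"
} catch {
    Write-Output "Validation failed: $($_.Exception.Message)"
}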
Let me copy [02:24:05]our code to the server.
[02:24:10]Okay. In order to run [02:24:15]our script on the server, we need the Logging module. [02:24:20]Let me install it. Launch PowerShell [02:24:25]and then Install-Module [02:24:30]Logging. Hit Enter. [02:24:35]Yes. Yes [02:24:40]again.
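(For reference, the install command typed here looks roughly like this; the two Yes prompts the instructor answers are the NuGet provider and untrusted-repository confirmations, and the -Scope parameter is an optional addition, not something shown in the lecture.)

# Install the Logging module from the PowerShell Gallery for the current user
Install-Module -Name Logging -Scope CurrentUser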
[02:24:45]All right, so the module is installed successfully. [02:24:50]Let me clear my screen. Okay, [02:24:55]before we run our script, let's take a look at the configuration file. [02:25:00]This is the customer name. This is the environment. [02:25:05]Web services, app services: all looks good. The name [02:25:10]of my database server that I gave during deployment is SQL. [02:25:15]Only this much is needed. Saving it. These are [02:25:20]the validation tasks which will run on our web server, right? [02:25:25]Just making sure. Let me close this. [02:25:30]Let me try to open it in a browser, because if it opens fine, [02:25:35]then this is also a validation of the XML file, [02:25:40]right? Now, since the configuration file looks good, the [02:25:45]Logs folder is empty, so [02:25:50]is the Reports folder. We are all set to run our [02:25:55]script. Let me launch PowerShell at the current working directory and then [02:26:00]run the server validation script. Hit Enter; [02:26:05]it's asking for the server name.
Now please understand this concept [02:26:10]very clearly. The script is already copied to the server and it [02:26:15]is running locally. It is very easy to directly fetch [02:26:20]the server name instead of asking the user, right? But [02:26:25]still, why we have provided this option is because it gives [02:26:30]us additional control to give a server name the way we want to [02:26:35]see it in the report, right?
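(If you did want to fall back to the machine's own name instead of prompting, a one-line sketch, which is not what the course script does, would be:)

# Default to the local computer name when no display name is supplied
if (-not $ServerName) { $ServerName = $env:COMPUTERNAME }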
Let me give [02:26:40]this name, or any other name, and accordingly it will show up in the report. [02:26:45]Yeah, hit Enter. And this is the web server, so [02:26:50]let me give web, and we hit Enter. [02:26:55]This script has started writing the log messages, [02:27:00]as well as in the reports directory. So far nothing is there. Okay, [02:27:05]this folder is created automatically, and inside this, this report [02:27:10]is created. All right.
Following these simple steps, [02:27:15]we are able to run our validation test cases on the web server [02:27:20]and generate this report successfully. Right, now we [02:27:25]need to do the same for the SQL and app servers, right?
[02:27:30]Let me quickly do this.
You can closely follow the screen [02:27:35]and understand this.
Exactly same steps we need to follow.
[02:27:40][02:27:45]I'm connecting to SQL server.
[02:27:50]Let me copy the code onto the desktop [02:27:55]itself. Since this is the database [02:28:00]server, we can check if SQL is correctly installed on it or [02:28:05]not. Let me go to the Management Studio. [02:28:10]Okay, Management Studio is launched, so let's connect to [02:28:15]SQL.
[02:28:20]Yes, we are able to connect to our database. These are the default [02:28:25]databases available. And we can create our application databases if needed; [02:28:30]we do not need such a thing. If you want to execute a [02:28:35]query on this, we can execute it like this: [02:28:40]new query, paste your query and execute, and [02:28:45]we are getting the results. We can use this inside our validation, [02:28:50]saying yes, we are able to execute our query on the database. [02:28:55]It is definitely healthy; only then is it able to return the results. [02:29:00]Right, this is our validation. Let me minimize. [02:29:05]Like we did previously, [02:29:10]we need to install the Logging module. Install-Module
[02:29:15][02:29:20]Logging. Hit Enter.
[02:29:25][02:29:30]Yes. With that, it is installed. Let me [02:29:35]clear my screen and we'll take a look at the configuration file.
[02:29:40]These are the test cases which [02:29:45]will run on the server. Right, let me run the server validation script. [02:29:50]The name of the server is SQL, the tier [02:29:55]name is database. [02:30:00]Go to the reports; our test cases are [02:30:05]running.
[02:30:10]In the end, [02:30:15]it has placed the report over here. Yes, pretty much everything is successful [02:30:20]in our report. So these are some basic details about the VM. [02:30:25]Our application URL is accessible, disks are healthy, the [02:30:30]SQL query we were able to execute and fetch [02:30:35]the results. And lastly, these are the processes which [02:30:40]are consuming the highest memory, right? So our report is generated [02:30:45]successfully. Let me close this. Lastly, we need to [02:30:50]deploy our script on the app server. Let's do this.
[02:30:55]This is our app server. [02:31:00]Again, we need to follow the same steps. Go to the app [02:31:05]server, launch PowerShell, the module [02:31:10]is installed. Let me run the server validation script. [02:31:15]This is the application server, so [02:31:20]the server name is app server and the tier name is app.
[02:31:25][02:31:30]Okay, go inside.
[02:31:35]Here is our validation report for the application [02:31:40]server.
Right, with this, [02:31:45]I'm sure you are very clear on how we are deploying our PowerShell script, [02:31:50]installing the module, and ensuring the configuration file is all good.
And [02:31:55]then it's simply a matter of running the script and generating this validation report. [02:32:00]A very straightforward process, I'm sure, right? [02:32:05]But look, there is a problem. Now we have [02:32:10]three reports on three different servers. The web server has the web server [02:32:15]report, the SQL report is on the SQL server, and the validation [02:32:20]report of the app server is lying on the app server.
In [02:32:25]order to arrange these tiny reports into one detailed consolidated [02:32:30]report, we need them in one place, right? [02:32:35]For bringing these reports to one single place, we can have multiple [02:32:40]approaches, out of which the one we are following is that, [02:32:45]instead of writing the reports into a normal directory, we will create [02:32:50]a shared folder. And we'll make sure our server validation script [02:32:55]generates these reports inside the shared folder, [02:33:00]so that the consolidation script can read [02:33:05]the reports from there and generate a consolidated report.
[02:33:10]Let's continue with this in the [02:33:15]next lecture. See you there.
Take care.
[02:33:20][02:33:25]Now what we will [02:33:30]do is create a folder anywhere in your system.
Let me call [02:33:35]it Shared Reports. Yeah, [02:33:40]right click on this. Okay, let me get rid of these folders; [02:33:45]don't need them. Shared Reports, Properties, [02:33:50]Sharing.
[02:33:55][02:34:00]Everyone can read and write. Okay, [02:34:05]let me create this share.
[02:34:10]Okay, done close with this.
[02:34:15]We have got this path now. The interesting [02:34:20]thing is that not just can we access this path on [02:34:25]this server, we can also go to any other server [02:34:30]and access this path, right, [02:34:35]for example on the app server.
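(If you prefer to create the same share from PowerShell instead of the Properties dialog, a sketch along these lines works on Windows Server; the folder path and share name are assumptions matching the demo, and read/write for Everyone is a lab-only setting.)

# Create the folder and share it with read/write access for Everyone (lab setup only)
New-Item -Path 'C:\SharedReports' -ItemType Directory -Force | Out-Null
New-SmbShare -Name 'SharedReports' -Path 'C:\SharedReports' -ChangeAccess 'Everyone'
# The share is then reachable from other servers as \\<servername>\SharedReports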
[02:34:40]I'm sure the approach is clear. Instead of [02:34:45]generating the reports inside this directory, we will make sure [02:34:50]they go and sit in the directory which we have just created, right.
[02:34:55]For this, a small code change is needed. Let me make this code [02:35:00]change here. Let's open the scripts. A [02:35:05]small change only: the reports directory. Instead [02:35:10]of this, we want to make sure this shared folder is the reports directory.
[02:35:15]Comment [02:35:20]it out. That's it, done. Awesome. [02:35:25]Right, this is the change for the report consolidation script.
[02:35:30]Also, we need to make this change in the other script [02:35:35]and we are good. We just need to replace these scripts, and from [02:35:40]here onwards this location will be used for writing and reading [02:35:45]the reports. I'm sure this is a very easy step to follow. Right.
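(Conceptually, the change is just swapping the value of the reports directory variable; the variable name and UNC path below are placeholders, not the exact ones used in the course scripts.)

# Old: reports were written to a folder next to the script
# $reportsDirectory = Join-Path $PSScriptRoot 'Reports'
# New: both scripts now read from and write to the shared folder
$reportsDirectory = '\\WEBSERVER\SharedReports'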
[02:35:50]Let me replace these two scripts on all servers
[02:35:55][02:36:00]Don't need this directory [02:36:05]now; delete it. Same [02:36:10]on the other two servers.
[02:36:15]Delete [02:36:20]this directory not needed and replace the scripts.
[02:36:25]Make this change on this [02:36:30]server. Also delete the reports directory and replace the [02:36:35]scripts.
[02:36:40][02:36:45]This is done on all three servers. Now I will run the [02:36:50]script. Let me launch PowerShell, [02:36:55]same as earlier,
[02:37:00][02:37:05]Awesome, script execution is over. Let's [02:37:10]go to the shared reports. And we can see this folder is created over [02:37:15]here, inside which the web server report is available. [02:37:20]How cool is that? This time the report is generated inside the [02:37:25]shared directory. Similarly, we can execute this [02:37:30]script on the other servers, like the SQL server.
[02:37:35][02:37:40][02:37:45]All [02:37:50]right, script execution is over. Go to the shared reports directory, [02:37:55]go to this folder, and you can see the SQL report is also [02:38:00]sitting along with the web server report. Let me clear and do the [02:38:05]same step on the app server now,
[02:38:10][02:38:15]okay, script execution is over, [02:38:20]go to the shared reports. There we go, the app server report [02:38:25]is also sitting at the same location.
Now since all the [02:38:30]reports are sitting in this shared directory, it [02:38:35]is very easy for the report consolidation script to go and read [02:38:40]them all and generate a consolidated report. [02:38:45]Now, if you have understood the concept, I'm sure you might be wondering: [02:38:50]this report consolidation job, anyway, we will run it only once, so [02:38:55]why is it present on all three servers? Well, let me congratulate [02:39:00]you. If you have this question in your mind, you have really understood the concept [02:39:05]very clearly, and it is not needed on all three servers. [02:39:10]We have just tried to keep the code in sync on all three servers. So it [02:39:15]is there, but in reality you just need it at one place, [02:39:20]be it your web, app, or database server, or it could be some [02:39:25]jump server, or you call it a terminal server, from where you want [02:39:30]to validate all of your clients, right? We are just trying to mimic [02:39:35]the business scenario. Anyway, this is just a lab setup, not the [02:39:40]real environment.
And we cannot keep on creating servers, right?
[02:39:45]This is why we are running the script from one of our environment [02:39:50]servers only. But you are free to plan it as per your infrastructure.
[02:39:55]I hope that makes sense. Now let me run the report consolidation [02:40:00]job. All right, let me launch PowerShell [02:40:05]and then run the report consolidation script. Enter. [02:40:10]The very first run. Okay. All right, [02:40:15]so we can see the consolidated validation report is generated [02:40:20]successfully.
Automatically, the report is launched in [02:40:25]Internet Explorer, but for the best experience, open it in Chrome.
[02:40:30]I'm copying the report to my local folder [02:40:35]because I have Google Chrome installed over here. [02:40:40]There we go.
Our report is [02:40:45]working perfectly fine. We have color coding to understand it pretty easily. [02:40:50]Only one red I can see, which is for the web server. [02:40:55]And trust me, I purposefully kept this service stopped [02:41:00]so that we see at least something has failed also.
Right. [02:41:05]This ensures our color coding is working perfectly fine. We have information [02:41:10]messages, then success messages in green and [02:41:15]failed test cases in the red color. Right?
Overall, [02:41:20]I hope you are liking this report. With this, our lab setup [02:41:25]is complete and we are successfully able to run our validation test [02:41:30]cases against the web, app, and database servers [02:41:35]and generate this beautiful looking consolidated validation [02:41:40]report.
Right, well that's it for this lecture. Take good care of yourself. [02:41:45]Thank you.
[02:41:50]Hello friends. In [02:41:55]the previous lectures, we have covered the basics. And we are all set [02:42:00]to understand this server validation script. This script [02:42:05]executes on a server locally and runs the different test cases [02:42:10]on it. This is one of the most important lectures. You might [02:42:15]have to take a few pauses while watching the lecture to digest what is [02:42:20]going on. Let's start precisely.
This [02:42:25]script accepts user parameters and reads the configuration file [02:42:30]to understand what it is asked to do.
Then [02:42:35]it uses the library methods and runs the validation test cases [02:42:40]locally on the server.
While doing this, it writes [02:42:45]the information, error, and debug messages in the log [02:42:50]file, and in the end generates this HTML file over [02:42:55]here, which contains the validation results of the server.
[02:43:00]Let's go through the script now.
[02:43:05]Here it accepts these two parameters: the name of the server [02:43:10]on which you are executing the validation and its tier type. [02:43:15]You might be thinking, this script runs locally on a server, [02:43:20]then why do we need to pass the server name at all? Why not just get the server [02:43:25]name using another PowerShell command itself? A logical [02:43:30]question, correct? Well, because here the server [02:43:35]name is only being used for displaying the server name in the report and [02:43:40]for naming the file. So this gives you additional control, [02:43:45]that you can pass the name which you actually want to display in the report, [02:43:50]right. And the tier name you definitely want to pass, [02:43:55]because according to this only, the script will know what you want [02:44:00]to validate on this server and which tasks you want to run.
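(As a sketch, a param block of this shape sits at the top of the script; the parameter names are illustrative, not necessarily the exact ones used in the course script.)

param (
    # Name to show in the report and to use in the output file name
    [Parameter(Mandatory = $true)] [string] $ServerName,
    # Tier type (web, app, database) that decides which validation tasks run
    [Parameter(Mandatory = $true)] [string] $TierName
)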
[02:44:05]We can run [02:44:10]this param block and pass these values, or maybe we can create [02:44:15]these variables over here. I'm just keeping these variables over here [02:44:20]for your convenience in understanding the script.
These are not part of the script. [02:44:25]I will remove these variables from here immediately after this lecture.
[02:44:30]Yes, makes sense. Then, as per the base directory, [02:44:35]we are finding the library scripts and running them here. [02:44:40]You must be aware that in PowerShell, we can call other scripts [02:44:45]like this. After this step, we can use any [02:44:50]of the functions defined inside these scripts.
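(The mechanism being used here is dot-sourcing; in general it looks like this, with the path being a placeholder for the course's library script.)

# Dot-source the library script so its functions become available in this scope
. (Join-Path $PSScriptRoot 'Library\ValidationFunctions.ps1')
# After this line, any function defined in that file can be called directly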
[02:44:55][02:45:00]Now the logging part. We definitely want [02:45:05]to append messages into the text log file, but we also want [02:45:10]to see them in the console. I'm adding two logging targets. [02:45:15]This way you can see the live response from the script [02:45:20]in the console, and you can also visit this log file in [02:45:25]future if needed.
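(With the Logging module, adding two targets typically looks like this; the log file path is a placeholder.)

# Target 1: live output in the console. Target 2: append everything to a log file.
Add-LoggingTarget -Name Console
Add-LoggingTarget -Name File -Configuration @{ Path = 'C:\Scripts\Logs\ServerValidation.log' }
# From here on, a single call writes to both targets
Write-Log -Level INFO -Message 'Validation started'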
[02:45:30]From here onwards, you will see such statements where we [02:45:35]are writing our log messages. But we will not discuss them; [02:45:40]just read them yourself. All right, then here we are reading [02:45:45]the configuration file and storing it into this variable [02:45:50]with type casting; we are already aware of XML parsing. [02:45:55]With this step, the entire configuration file is loaded [02:46:00]into PowerShell and we can make use of the information [02:46:05]from this file inside our script.
[02:46:10]Here we are creating [02:46:15]the variables.
We are reading this particular node and initializing [02:46:20]the name and value. Then this PowerShell [02:46:25]cmdlet, Set-Variable, takes care of creating or updating [02:46:30]the variables with whichever scope we specify here. If [02:46:35]you don't believe the variables are really created, [02:46:40]run this Get-Variable cmdlet and see it for yourself.
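(A rough sketch of the pattern being described; the XML node names and the CustomerName variable are illustrative, not the course's exact schema.)

# Load the configuration file as an XML document via type casting
[xml]$config = Get-Content -Path '.\Config.xml'

# Create a PowerShell variable for every <Variable Name="..." Value="..."/> node
foreach ($node in $config.Configuration.Variables.Variable) {
    Set-Variable -Name $node.Name -Value $node.Value -Scope Script
}

# Prove to yourself that the variables really exist now
Get-Variable -Name CustomerName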
[02:46:45][02:46:50]Now we are all [02:46:55]set to process these tiers by validating each tier [02:47:00]one by one. We are reading the information, storing [02:47:05]the name of the tier and the various tasks, and then giving a call [02:47:10]to this method, trigger validation job.
[02:47:15][02:47:20][02:47:25]Trigger validation job is a very interesting method, and [02:47:30]very important as well, because it has a central role [02:47:35]to perform. This is the right time to understand this method in detail. [02:47:40]Please concentrate. We are calling [02:47:45]this method with one parameter, tier tasks, which is nothing [02:47:50]but this string. As you can see, these are comma separated [02:47:55]values. Here we are splitting this string using [02:48:00]comma as a delimiter. What this does is create [02:48:05]an array with all these values.
[02:48:10][02:48:15]Then we process [02:48:20]these tasks one by one using this ForEach-Object cmdlet. [02:48:25]We are adding Validate- as the prefix [02:48:30]for each task and simply calling this as a function [02:48:35]inside the try block. If something goes wrong in the [02:48:40]try block, we have this catch block here to capture the exception. [02:48:45]For your information, this way we can call a function [02:48:50]or execute another script in PowerShell. Okay, [02:48:55]you must be thinking, we are calling these functions, but where [02:49:00]are they defined? Well, these are in this script which we loaded [02:49:05]earlier. If you notice, we have smartly named [02:49:10]these functions the same as the task name, with Validate- [02:49:15]as a prefix. If you ask me what is the validation [02:49:20]function name of this SQL database task, I will say [02:49:25]it is Validate-SQLDatabase. Easy to remember [02:49:30]and logical, correct? Another [02:49:35]confusion could be: what is this ArrayList to which we are [02:49:40]adding a new value in both try and catch blocks? [02:49:45]Needless to say, the catch block only gets executed [02:49:50]if there was any exception thrown in the try block. Let's [02:49:55]understand the concept here. Each of the validation functions [02:50:00]being called here is a piece of code validating some system [02:50:05]component. They all return an output in the form of [02:50:10]a PSCustomObject, or PowerShell custom object, [02:50:15]with four columns: title, status, output, [02:50:20]and comment. Of course, I will not give more details about these [02:50:25]functions right now, because we will have dedicated lectures [02:50:30]for understanding this. If something goes wrong while [02:50:35]validating any of the server components, because of this catch [02:50:40]block over here, we can at least get a message saying [02:50:45]this test was carried out, but it failed with an exception. If we don't [02:50:50]add this information in the catch block, then in case of an exception [02:50:55]the entire test case itself will not be visible in the report.
Which is not good, [02:51:00]right?
Because we are adding [02:51:05]some information about the test case both inside the try and catch [02:51:10]blocks, so be it a successful validation or an exception [02:51:15]thrown, this PSCustomObject should be added to the [02:51:20]list so that it can be added to the output report. [02:51:25]Makes a little more sense now?
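(Putting those pieces together, the loop looks roughly like this; the task list, variable names, and function names are illustrative.)

$validationSummary = New-Object System.Collections.ArrayList
$tierTasks = 'Info,URL,DiskHealth,SQLConnection'      # comma separated string from the XML

$tierTasks -split ',' | ForEach-Object {
    $functionName = "Validate-$($_.Trim())"           # e.g. Validate-DiskHealth
    try {
        # Call the matching library function and keep its PSCustomObject result
        $result = & $functionName
        [void]$validationSummary.Add($result)
    }
    catch {
        # Still record the test case so it shows up in the report as an exception
        [void]$validationSummary.Add([pscustomobject]@{
            Title   = $functionName
            Status  = 'Exception Occurred'
            Output  = $_.Exception.Message
            Comment = 'The test was carried out but failed with an exception'
        })
    }
}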
Yes, [02:51:30]now let us run this statement and see what we [02:51:35]get in output.
Okay, I [02:51:40]hope now you understand the story up to getting this validation [02:51:45]summary. Please notice, till this point we have our validation [02:51:50]result in a perfectly workable PowerShell object. [02:51:55]If you want to export it to a CSV file, a [02:52:00]JSON file, or push the result to some API, [02:52:05]it is all doable. This is the benefit of following this approach, [02:52:10]that we have everything in one place in the end. Now [02:52:15]it's your choice what you want to do with this data. Think about your [02:52:20]use cases, what you want to do with this data. I'm sure you will have some [02:52:25]good ideas. What we are doing with this is converting [02:52:30]the result into HTML format using the ConvertTo-Html [02:52:35]cmdlet, and making these replacements to avoid these [02:52:40]special characters in the final report, which are there because [02:52:45]of some HTML found in here that ConvertTo-Html [02:52:50]escaped.
Then we are just pushing this [02:52:55]HTML string into a file with the name as server [02:53:00]name, underscore, tier name, dot html.
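(In sketch form, that conversion and file write look like this; the replacements undo the entity escaping that ConvertTo-Html applies, and the variable names are placeholders.)

$reportHtml = $validationSummary |
    ConvertTo-Html -Property Title, Status, Output, Comment -Fragment

# ConvertTo-Html escapes any HTML kept inside the Output column; decode it again
$reportHtml = ($reportHtml -join "`n") -replace '&lt;', '<' -replace '&gt;', '>' -replace '&quot;', '"'

# <ServerName>_<TierName>.html inside the reports directory
$reportFile = Join-Path $reportsDirectory "$($ServerName)_$($TierName).html"
$reportHtml | Out-File -FilePath $reportFile -Encoding UTF8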
[02:53:05][02:53:10][02:53:15]It's time [02:53:20]to execute this script. Pass these values,
[02:53:25][02:53:30][02:53:35]you can see the report is getting generated [02:53:40]without any problem. While doing this, it writes the [02:53:45]log messages on the console as well as in this log file, [02:53:50]as you can see. This folder got created automatically, and [02:53:55]this is the output report. I'm sure, just like [02:54:00]me, you also find this report very dull. Certainly not [02:54:05]very attractive. Yeah, it's time to do some magic.
[02:54:10]Copy [02:54:15]this, [02:54:20]paste it here in this HTML file, [02:54:25]refresh the browser. [02:54:30]Wow, this is cool, right?
[02:54:35]Well, I just showed you this to create some curiosity for the next lectures. [02:54:40]If you didn't get what happened here, don't you worry at all; we will cover [02:54:45]this in the coming lectures. Yes. [02:54:50]All right, if you [02:54:55]have any doubts with respect to this script, it is completely normal. [02:55:00]You can just watch the lecture one more time and I'm sure you will be fine. [02:55:05]We covered an important script in this lecture. You [02:55:10]have got every right to celebrate this with a nice cup of coffee. [02:55:15]I am going to do the same. That's it for this lecture. Take good care [02:55:20]of yourself.
Thank you.
[02:55:25][02:55:30]The functions [02:55:35]written inside this script are like the workers who do the actual [02:55:40]job inside a factory. They are no-nonsense people. [02:55:45]They have been assigned certain tasks which they perform and give the [02:55:50]result. This server validation script is just a wrapper around [02:55:55]these functions. Though it is doing such a good job, it can't [02:56:00]do anything without these guys. Please do not underestimate [02:56:05]them, I would say. Let us discuss these validation functions [02:56:10]which we have written as standalone tasks. Rest [02:56:15]assured, I promise you are going to learn a lot [02:56:20]from this. Let's get started.
We have written this function [02:56:25]just for demonstration purposes.
It doesn't validate anything, [02:56:30]but it can surely help us understand the structure of our validation functions. [02:56:35]Yes, we have followed a strict structure in each [02:56:40]of the validation functions. Each function should return a [02:56:45]PSCustomObject with these fixed four fields: [02:56:50]title, status, output, and comment. Title is nothing [02:56:55]but what you want to see as the name of the validation in this [02:57:00]report. Status can have any value out of information, [02:57:05]success, fail, and exception occurred. [02:57:10]Let us understand the significance of each of these. Information [02:57:15]is the state in which you are not really looking for success or failure; [02:57:20]it is just information. For example, here we have [02:57:25]some basic details about our server, like its name, last [02:57:30]boot up time, et cetera. These are the details which [02:57:35]we want to know, but it is not suitable to call them out as [02:57:40]success or failure, because it is just a name. Right? [02:57:45]Next we try to validate something, and if the result [02:57:50]is the expected one, we call it a success. For example, here [02:57:55]we wanted to see the HTTP status code as 200 for [02:58:00]the URL. If the HTTP status code is found as [02:58:05]200, we call the validation successful. Else we call it [02:58:10]failed. Sometimes within a function, some unknown issue [02:58:15]comes up and we are not in a situation to call it a success [02:58:20]or a failure. In those cases, we have this try and [02:58:25]catch block to capture the exception; in the status, we can [02:58:30]report it as exception occurred, right? I'm sure this [02:58:35]is a very simple and straightforward concept for you.
[02:58:40]Let's move on to output. [02:58:45]Output is something you want to display in support of the [02:58:50]validation status: okay, this test failed [02:58:55]or passed because the output is this; this is the evidence found, [02:59:00]so I'm calling the validation test successful or failed.
[02:59:05]Very simple, right?
And lastly, comment [02:59:10]is an optional message for the end user to understand your test [02:59:15]case. You can put whatever additional information or remark [02:59:20]you want for someone who sees this validation report. [02:59:25]This is also an excellent place to make the report more readable [02:59:30]for your manager. In many cases they are not technically sound enough [02:59:35]to understand what you are saying. I know most of you agree with my observation, [02:59:40]at least partially, if not fully. Yeah. [02:59:45]Moving on, here we are putting all of the data [02:59:50]into this PSCustomObject and printing it. Here [02:59:55]in PowerShell, you need not explicitly return anything in the function, [03:00:00]because whatever you print in the function body [03:00:05]is returned, except something you print using the Write-Host [03:00:10]cmdlet.
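(As a skeleton, every validation function follows this shape; the body here is a dummy, like the demonstration function on screen.)

function Validate-Demo {
    try {
        # ...the real checks would go here...
        [pscustomobject]@{
            Title   = 'Demo validation'
            Status  = 'Success'          # Information | Success | Fail | Exception Occurred
            Output  = 'Evidence supporting the status'
            Comment = 'Optional remark for whoever reads the report'
        }
    }
    catch {
        [pscustomobject]@{
            Title   = 'Demo validation'
            Status  = 'Exception Occurred'
            Output  = $_.Exception.Message
            Comment = 'The check itself threw an error'
        }
    }
}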
[03:00:15]The output is returned like this. You see, we [03:00:20]can collect this output and convert it into HTML very [03:00:25]easily using ConvertTo-Html. This is precisely what we are [03:00:30]doing in our server validation script as well.
[03:00:35]Please note, in case here [03:00:40]we have another HTML entity inside this, and then you try [03:00:45]to use ConvertTo-Html, it will preserve the HTML in escaped form.
[03:00:50]We can just replace [03:00:55]these values like this to obtain pure HTML format.
[03:01:00]Makes sense.
[03:01:05]All right, my dear friends, I hope the structure of validation function [03:01:10]is clear to you. We have followed the same style for all of these [03:01:15]validation functions.
You will not have trouble in understanding any [03:01:20]of these functions.
Well, that's it for now, take good care of yourself. [03:01:25]Thank you.
[03:01:30]Hello friends.
Welcome [03:01:35]to this lecture as we have already understood the structure of these [03:01:40]validation functions. Now this is good time to talk about each [03:01:45]of these functions and see what they are doing.
Please note, [03:01:50]our discussion is going to be only around the functionality for which [03:01:55]the function is written, not around this structure, [03:02:00]because we have already discussed this in the previous lecture.
Throughout the lecture, [03:02:05]we will try to keep our discussion very concise and pointed.
[03:02:10]Yes, let's start with this function, validate [03:02:15]info. This function is written around this [03:02:20]cmdlet. Let me execute this. [03:02:25]You can see there is a whole bunch of details returned by this cmdlet. [03:02:30]Depending upon your use case,
you might be interested [03:02:35]in a few other details, but we are restricting ourselves to these details.
[03:02:40]We are fetching these details like this.
Now,
[03:02:45][03:02:50][03:02:55][03:03:00]if we directly convert this data to HTML, [03:03:05]it will look like this, which I do not like personally. [03:03:10]Instead, I want to see my data in this key value fashion, right?
[03:03:15]For this reason, we are storing our data in this [03:03:20]ordered dictionary, getting an enumerator on this, and then converting [03:03:25]it into HTML, so that our data looks like this, [03:03:30]right?
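(A minimal sketch of that key/value approach, assuming the cmdlet on screen is Get-ComputerInfo; the chosen properties and the subtable id are illustrative.)

$info = Get-ComputerInfo

# An ordered dictionary keeps the rows in the order we define them
$details = [ordered]@{
    'Computer Name'  = $info.CsName
    'OS'             = $info.OsName
    'Last Boot Time' = $info.OsLastBootUpTime
}

# GetEnumerator() yields Name/Value pairs that ConvertTo-Html can render as a two-column table
$html = $details.GetEnumerator() | ConvertTo-Html -Property Name, Value -Fragment
$html = $html -replace '<table>', '<table id="subtable">'   # tag the inner table for its own CSS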
Instead of this, if you are okay [03:03:35]to see your data in that fashion, you can avoid this part, okay? [03:03:40]But I do not recommend that. This tiny replacement, where [03:03:45]we are adding this id equals subtable, is to differentiate [03:03:50]this smaller table from the bigger one, so that we can [03:03:55]apply a different CSS style on the inner table and a different style [03:04:00]for this outer table.
This differentiation is being created by this [03:04:05]id equals subtable. Right? I hope this is clear.
Now, [03:04:10]if you are confused, directly try these cmdlets in your system [03:04:15]and you will know what I'm talking about.
Right? All you need is PowerShell [03:04:20]to run this cmdlet and give it a try. No other lab setup is needed [03:04:25]for this. Okay, we're done with this function. Let's move on to validate [03:04:30]URL now.
This value is coming from the XML file.
[03:04:35]I'm just setting it over here. And what this URL is, nothing but [03:04:40]a page in my website, right? This is my website. I have kept a page [03:04:45]for this particular health check.
Okay.
[03:04:50]What we are doing over here is just trying to invoke this [03:04:55]URL while passing these additional parameters. Let me execute just the [03:05:00]selected part of the statement.
This [03:05:05]Invoke-WebRequest has returned various details about our web page, [03:05:10]but what we are specifically interested in is this status code. This is the [03:05:15]HTTP status code, and if its value is 200, it indicates [03:05:20]the page is healthy. You might think I'm simply kidding [03:05:25]and this status code 200 doesn't mean anything.
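(The check itself boils down to a few lines; the URL below is a placeholder, and -UseBasicParsing is one example of the extra parameters that may be passed on older PowerShell versions.)

$response = Invoke-WebRequest -Uri 'https://example.com/healthcheck.html' -UseBasicParsing

if ($response.StatusCode -eq 200) {
    $status = 'Success'     # the page answered with HTTP 200, so we call it healthy
}
else {
    $status = 'Fail'
}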
Let me prove it. [03:05:30]I'll just add something random here that we need to add; [03:05:35]no, this is not a valid URL, right? Let me hit Enter, [03:05:40]and now we'll try to execute this statement. And you can see [03:05:45]we did not get status 200 anymore, because now it is not a valid [03:05:50]URL. Clearly, whenever the status [03:05:55]code is 200, this URL is healthy, and this is all we [03:06:00]care about in this test case. If it is 200, the status is successful; [03:06:05]if it is not, we are marking it as failed. And this is all for this [03:06:10]test case. Now please listen to me carefully. As an instructor, [03:06:15]I have my own limitations.
I cannot make things so complicated [03:06:20]that only one out of 100 students can understand it. But that [03:06:25]doesn't mean you should not try. You should always strive [03:06:30]for perfection, right?
Why don't you scrape this web page and [03:06:35]try to pull this application version, this application status (which is healthy), [03:06:40]and the database status?
All these things, try to pull them in using PowerShell; [03:06:45]do the web scraping for these things. According to these [03:06:50]values, you should call your validation test successful or failed.
[03:06:55]Correct. Agree with me.
Good. Moving [03:07:00]on to the next validation test. And it is top memory consuming [03:07:05]processes.
First of all, why should the processes which are consuming the [03:07:10]highest memory bother you at the end of the day? They are processes [03:07:15]of your system only, not mine. So why should you be worried about [03:07:20]which processes are consuming the highest memory? Well, the thing [03:07:25]is, ideally these processes are not a problem. But after [03:07:30]certain monthly Windows patching or your application upgrade, [03:07:35]if all of a sudden a random process starts consuming so much memory [03:07:40]that your application itself goes down because of it, then it's a problem, [03:07:45]right? For example, if it was a security patch, your [03:07:50]antivirus can become hyperactive and start scanning rigorously, [03:07:55]causing this high memory usage, which you want to be aware of. This is [03:08:00]the reason why we are fetching the top memory consuming processes, right?
[03:08:05]To fetch this data, we are using the [03:08:10]Get-Process cmdlet. By default it returns all the different processes [03:08:15]running on the system. We are using this pipe and sorting [03:08:20]this output based on working set in descending order. [03:08:25]This will sort the list; then we are only interested in the first five. [03:08:30]Just let me do this. You can see we have got the top [03:08:35]five processes which are consuming the highest memory by working set, [03:08:40]right? Then, as we might not be interested in all of these details, the specific [03:08:45]columns in which we are interested we are specifying over here; [03:08:50]then it's just a matter of converting it into HTML. In the end we [03:08:55]are using the pipe and converting the output to string format. Right, [03:09:00]let's not spend more time on this, we are good.
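(The pipeline being described looks roughly like this; the calculated WorkingSetMB column is an illustrative addition.)

$topProcesses = Get-Process |
    Sort-Object -Property WorkingSet -Descending |   # biggest memory consumers first
    Select-Object -First 5 -Property Name, Id,
        @{ Name = 'WorkingSetMB'; Expression = { [math]::Round($_.WorkingSet / 1MB, 2) } }

# Convert to an HTML table fragment and then to a single string for the report
$topProcessesHtml = ($topProcesses | ConvertTo-Html -Fragment) | Out-String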
[03:09:05]Moving on to the next validation test, it is [03:09:10]validate disk health, again a very common health test. [03:09:15]We are firstly getting the different partitions on the system; [03:09:20]these are the partitions. Then we are fetching the volumes [03:09:25]created on the partitions; these are the volumes, the C [03:09:30]drive and D drive, right? We are interested [03:09:35]in these things: drive letter, file system, health status, and [03:09:40]size remaining. I hope you are aware of what we are doing here. [03:09:45]This is the name, this is the expression. I want a column with size remaining; [03:09:50]for this, I'm doing some computation here. We are getting the size remaining. [03:09:55]It will by default be returned in bytes. We are converting it into [03:10:00]GB and then rounding it up to two digits after the decimal. [03:10:05]The same we are doing for the size. We are [03:10:10]fetching the size. We do not want to see the size as 3 [03:10:15]million bytes, right? Instead we want it in GB, which is more [03:10:20]comfortable for us, right? This is why we are converting it into GB [03:10:25]and then rounding it.
This way [03:10:30]we get the disk size; [03:10:35]it is this.
As we have asked for specific properties, [03:10:40]only those are appearing, right? And then we're just converting this [03:10:45]data into HTML format so that it can be displayed like this, [03:10:50]right? If all the drives are healthy, we are calling it [03:10:55]a successful validation; otherwise we are calling it failed. Right? [03:11:00]In my case, none of the drives is unhealthy, hence the overall test [03:11:05]case is passed.
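(A sketch of the volume query with the calculated size columns; the property names come from Get-Volume, and the fail-on-any-unhealthy-volume rule mirrors what is described above.)

$volumes = Get-Volume | Where-Object DriveLetter |
    Select-Object DriveLetter, FileSystem, HealthStatus,
        @{ Name = 'Size (GB)';           Expression = { [math]::Round($_.Size / 1GB, 2) } },
        @{ Name = 'Size Remaining (GB)'; Expression = { [math]::Round($_.SizeRemaining / 1GB, 2) } }

# Overall status: fail if any volume reports an unhealthy state
$status = if ($volumes | Where-Object { $_.HealthStatus -ne 'Healthy' }) { 'Fail' } else { 'Success' }
$volumesHtml = $volumes | ConvertTo-Html -Fragment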
But if you want to drill down further [03:11:10]and say that if the size remaining is less than 10% then [03:11:15]you want to fail the test case, you can write your additional logic over here. [03:11:20]Moving on to validate SQL connection. [03:11:25]This is actually a very simple test case, but before understanding this, [03:11:30]let me tell you why we are doing it at all.
Ideally, [03:11:35]our application servers or web servers, et cetera, continuously [03:11:40]communicate with the database.
Either to fetch some information from the database, [03:11:45]or to write or update certain information in the database. [03:11:50]Correct. Now, if this connectivity between the application servers [03:11:55]and the database server is broken for any reason, it is a big problem. [03:12:00]There are so many factors which can cause this issue, but [03:12:05]if this issue is present, your application cannot be healthy.
Yes, [03:12:10]in this, what we are doing is, from our application server [03:12:15]or from our web server, we are connecting to the database [03:12:20]over port 1433. This port is typically used by Microsoft [03:12:25]SQL Server. Now, presently I do not have the database server deployed; [03:12:30]I'm just using google.com. Say this is my server [03:12:35]for now. Let me remove 1433 and instead put [03:12:40]port 80. Hit Enter, and you can see it is returning [03:12:45]TcpTestSucceeded as True, right? We are taking this as [03:12:50]the result, and then if it is successful we are calling the test case [03:12:55]passed, right? But it's not always that this will [03:13:00]be successful, because I knew google.com is a website, so port 80 [03:13:05]should be allowed.
But what if I check for 1433?
[03:13:10]Let's see. For port 1433, the connection [03:13:15]is not allowed; TcpTestSucceeded is False. This is [03:13:20]all we are doing in this function.
Validate SQL connection. [03:13:25]And please understand, you need not perform this test on the database [03:13:30]server itself, right? Because here you are ensuring the [03:13:35]database server is accessible from other servers.
Correct. [03:13:40]Okay.
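(The whole connectivity check is essentially a single cmdlet call; the server name below is a placeholder.)

# Run this from the web or app server, pointing at the database server
$result = Test-NetConnection -ComputerName 'SQLSERVER01' -Port 1433

if ($result.TcpTestSucceeded) {
    $status = 'Success'     # the SQL port is reachable from this server
}
else {
    $status = 'Fail'
}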
This is all for this validation, SQL [03:13:45]connection. All right, so we discussed these [03:13:50]functions in this lecture. Now I would request you to practice these [03:13:55]functions and become very comfortable with them. And then let's meet [03:14:00]in the next lecture to discuss the rest of the functions. Well, that's it for [03:14:05]this lecture. Take care.
Thank you. [03:14:10]Hello friends.
Welcome to this [03:14:15]lecture. In the previous lecture, we discussed these functions. I [03:14:20]hope you are very clear on this. Now, let's continue with the next [03:14:25]function, Validate web service.
Your application [03:14:30]may have its own dedicated Windows services, or it may depend on [03:14:35]certain Windows services to be up and running.
Right. [03:14:40]In this, first of all, we are fetching the service names as specified [03:14:45]in our XML file. Then we are splitting it, because these are comma [03:14:50]separated values, and storing the information in this array. [03:14:55]After this, we are using this for loop.
And for [03:15:00]each of the services specified over here, first of all we are checking [03:15:05]whether the service exists or not. If the service is not [03:15:10]present, the test fails right there. But if the service is present, [03:15:15]then we are using WMI to fetch more details about it, [03:15:20]like what is the start mode of the service and through which account it is running.
[03:15:25]What is the current status of the service? Pretty simple validation [03:15:30]test, right?
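(In outline, the per-service check looks like this; the service names are illustrative stand-ins for the values read from the XML file.)

$services = 'W3SVC,Spooler' -split ','

foreach ($serviceName in $services) {
    $service = Get-Service -Name $serviceName -ErrorAction SilentlyContinue
    if (-not $service) {
        $status = 'Fail'    # the service does not even exist on this server
        continue
    }
    # WMI gives extra details such as the start mode and the account the service runs under
    $details = Get-WmiObject -Class Win32_Service -Filter "Name='$serviceName'" |
        Select-Object Name, StartMode, StartName, State
}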
Well, exactly the same we are repeating for application [03:15:35]services.
The only difference here is that instead of using this [03:15:40]variable, we are using this variable, which contains the application [03:15:45]services running on your app servers. And please do [03:15:50]not think too much about these services; I've just randomly picked them. [03:15:55]Okay? No logic behind them at all. Moving [03:16:00]on, for understanding this validate IIS function, [03:16:05]I have deployed a VM in Azure and now we will install [03:16:10]IIS on it, yes.
Let me copy [03:16:15]this.
Launch PowerShell. [03:16:20]Yeah.
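(The exact command being copied is not readable on screen; on Windows Server, the usual way to install IIS from PowerShell is:)

# Install the IIS role together with the management console (run from an elevated prompt)
Install-WindowsFeature -Name Web-Server -IncludeManagementTools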
[03:16:25][03:16:30]All right. So the installation [03:16:35]is completed now. We [03:16:40]should be able to use the IIS Manager. Right, let me [03:16:45]open it and go to the Sites. I [03:16:50]just want to add another site; [03:16:55]anyway, [03:17:00]I'm not interested in running the site.
[03:17:05]Okay.
So this way, [03:17:10]along with this default website, now we have the tech school website also created [03:17:15]over here, right. And it also has its own [03:17:20]dedicated application pool. Correct.
Now, when you're validating IIS, [03:17:25]two things are very important. It's very straightforward to come here [03:17:30]and check whether your pool is started or not, whether your website is running or [03:17:35]stopped. These things are very easy to check from this IIS Manager, [03:17:40]but when you have to do this for hundreds of machines in a limited [03:17:45]time, then you have a problem, right?
For this, we have this function, [03:17:50]validate IIS. Firstly, [03:17:55]we are importing the module, then we are fetching all the [03:18:00]different websites. These are the details about the websites, out of which [03:18:05]we are interested in their name and state. Right? We are [03:18:10]fetching these details, converting them into HTML and adding them to the output. [03:18:15]After this, we are going for the app pools and [03:18:20]running this statement, which is getting us these pool details, [03:18:25]right? We are fetching these results, putting them into [03:18:30]this output variable. And then if we execute this and see the [03:18:35]output, it looks like this. Now, it may not make any sense because [03:18:40]this is HTML. Let me copy this onto my system. [03:18:45]Don't need this and this, [03:18:50]right? Saving it. Opening the HTML, [03:18:55]it looks like this. These are the websites [03:19:00]running in our system and these are the application pools along with their [03:19:05]runtime version, pipeline mode, and state. Right? [03:19:10]This way we can gather information about our IIS websites using [03:19:15]PowerShell. This is a simple demonstration, but you can always [03:19:20]use this WebAdministration module to fetch the details that [03:19:25]you need for your websites.
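(A sketch of the two queries described here, using the WebAdministration module; the selected properties match what the report displays.)

Import-Module WebAdministration

# Websites: we only care about the name and whether they are started
$sitesHtml = Get-Website |
    Select-Object Name, State |
    ConvertTo-Html -Fragment

# Application pools, with runtime version, pipeline mode and state
$poolsHtml = Get-ChildItem IIS:\AppPools |
    Select-Object Name, ManagedRuntimeVersion, ManagedPipelineMode, State |
    ConvertTo-Html -Fragment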
Moving on to [03:19:30]validate folder permissions. Out of nowhere, from where does [03:19:35]the need for validating folder permissions come up? Well, the [03:19:40]thing is, let's assume you are running your website [03:19:45]on IIS.
This is your folder where you are keeping all of [03:19:50]your code.
Typically what we do is we run our [03:19:55]website through a service user, giving it all the different permissions on the system [03:20:00]that are needed.
That particular service user needs [03:20:05]full control on your code, right? Like this. [03:20:10]We can always check which user has which permissions on this folder. [03:20:15]But the thing is, doing it on one server is fine.
But if you have to do it on hundreds of servers, [03:20:20]then you definitely need an automation [03:20:25]solution, right? So let's say I'm giving IIS users full [03:20:30]permission, okay?
Okay?
[03:20:35]And all of this is for example only; not in all cases [03:20:40]do you want to give full control on this root folder. All right, when I'm talking [03:20:45]about this function, I'm only saying how you can validate this folder [03:20:50]permission. That's it; there's no recommendation in this. Okay? So [03:20:55]this is my folder here. First of all, we're checking [03:21:00]whether this particular path exists or not. Yes, [03:21:05]indeed it exists. Then, using the Get-Acl cmdlet, [03:21:10]we are fetching all the different security users who have access on this, [03:21:15]along with what permissions they are having. Right, [03:21:20]out of which we are interested in only those users who have [03:21:25]full control on this. Let me execute this entire [03:21:30]statement. You can see these are a couple of users [03:21:35]who have full control on our directory, right? [03:21:40]Then what we can do is we are just [03:21:45]fetching the three properties: identity reference, [03:21:50]file system rights, and access control type. It [03:21:55]comes out to be this table, which we are just converting into HTML [03:22:00]and displaying as output. This is the folder permission validation [03:22:05]we are doing, right?
I'm sure you are getting ideas [03:22:10]around how you can use this Get-Acl cmdlet.
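(The ACL query boils down to this; the folder path is a placeholder.)

$path = 'C:\inetpub\wwwroot'

if (Test-Path -Path $path) {
    # All access rules on the folder, filtered down to the ones granting FullControl
    $fullControl = (Get-Acl -Path $path).Access |
        Where-Object { $_.FileSystemRights -match 'FullControl' } |
        Select-Object IdentityReference, FileSystemRights, AccessControlType

    $permissionsHtml = $fullControl | ConvertTo-Html -Fragment
}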
[03:22:15]Now let's quickly talk about this function, [03:22:20]validate version. Here, all we're [03:22:25]doing is going into the registry. Let me show it: [03:22:30]regedit, Enter.
[03:22:35]Whenever you install your application on a system, [03:22:40]it may create its registry entry.
Right now, in my [03:22:45]case, I do not have any application installed, but I'm sure we have [03:22:50]PowerShell. I'm just trying to get the PowerShell registry entry itself: [03:22:55]HKEY_LOCAL_MACHINE, then going inside SOFTWARE, [03:23:00]Microsoft, [03:23:05]then PowerShell.
[03:23:10]Okay. You can see we have this PowerShell version key over [03:23:15]here and it has a certain value. In this example, I'm [03:23:20]running a PowerShell script and I'm also going into the PowerShell registry itself [03:23:25]and fetching the version, which doesn't make much sense as such.
[03:23:30]But what I'm trying to mimic is that your application will also have a certain [03:23:35]registry entry like this. This path will change, maybe this value name will change, [03:23:40]but it will be there, right? It is a solid way to fetch [03:23:45]the application version easily and then use it inside your automation [03:23:50]to validate the application version.
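(Reading a version value out of the registry and checking its major version can be sketched like this; the PowerShell engine key stands in for a real application, and the exact key path used in the demo may differ.)

# PowerShell engine registry entry used as a stand-in for a real application's version key
$regPath = 'HKLM:\SOFTWARE\Microsoft\PowerShell\3\PowerShellEngine'
$version = (Get-ItemProperty -Path $regPath).PowerShellVersion

# We only care that the major version starts with 5; the subversion does not matter
if ($version -like '5*') { $status = 'Success' } else { $status = 'Fail' }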
Right, here [03:23:55]we are trying to fetch the version like this. Fetching it, [03:24:00]let's see what the value is. Its value is this, which [03:24:05]is exactly matching with this, right? After that, [03:24:10]how to call it successful or failed? Let's say I'm [03:24:15]expecting my version to be anything starting with five, which is the [03:24:20]major version; I do not really care about the subversion. [03:24:25]According to this, we are doing this validation and [03:24:30]then we are calling the status successful or failed, which [03:24:35]in our case is successful, because our version is five and the sub version [03:24:40]we are not really bothered about. Yes, this is the concept, [03:24:45]and we can use it to validate the application version of our application [03:24:50]on hundreds of machines at a time. Right. [03:24:55]Lastly, let's talk about this validate [03:25:00]SQL database function. This is very easy. All we're [03:25:05]doing is running this select query on the database [03:25:10]using Invoke-Sqlcmd. Because we are running [03:25:15]this test case on the database server itself, we are not passing the user name [03:25:20]and password, because that is how we have configured our database. Yes, [03:25:25]we are running the SQL query and then fetching the results.
[03:25:30]If we are getting the results without any problem, that itself [03:25:35]is a validation, isn't it? Firstly, we are able to connect to the database, [03:25:40]we are able to run the queries and fetch the results. It [03:25:45]is a good enough initial SQL validation, right?
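(The whole test is a single Invoke-Sqlcmd call; the instance name and query are placeholders, and on the database server itself Windows authentication is used, so no username or password is passed.)

# Run a lightweight query locally on the database server; if it returns rows, SQL is alive
$result = Invoke-Sqlcmd -ServerInstance 'localhost' -Query 'SELECT name FROM sys.databases'

if ($result) { $status = 'Success' } else { $status = 'Fail' }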
This [03:25:50]is what we are doing over here. All right, my dear friends, in the last two [03:25:55]lectures we briefly tried to cover these functions.
All of [03:26:00]these functions are from different areas altogether.
And it is certainly [03:26:05]not possible to cover them in thorough details in this limited time. [03:26:10]But we have understood the concept about each of the validation function.
[03:26:15]Now you can certainly add more logic to these functions [03:26:20]and make them a better fit for your requirements. On this positive [03:26:25]note, let's conclude this lecture. Take good care of yourself.
Thank [03:26:30]you.
[03:26:35]Hello my dear friends, and [03:26:40]welcome to this overview of the report consolidation script. [03:26:45]When we execute this script, it is assumed that, following [03:26:50]whatever best approach we decide, we have already executed [03:26:55]the server validation script to validate our servers, and for [03:27:00]each server a report like this is already created [03:27:05]and placed in this reports directory. Now, if you have [03:27:10]20 servers, it will be a really annoying task to open each report [03:27:15]and see the results like this. Right, so it [03:27:20]is a good idea to consolidate these reports and present the [03:27:25]data in a way that is comfortable, eye catching, [03:27:30]and makes sense to the end users.
Right?
[03:27:35]All of the necessary [03:27:40]validation results we want to present in the validation report [03:27:45]are inside these files.
But before we get into [03:27:50]this, we need to finalize a template inside which [03:27:55]we want to fit this data and present it. [03:28:00]After doing a lot of shuffling and experiments, I prepared this template. [03:28:05]It is a single HTML file which contains HTML, [03:28:10]CSS and JavaScript. The concept [03:28:15]is very simple.
On the left hand side we have this navigation, [03:28:20]in which we can see the server names.
Upon clicking on [03:28:25]any of these links, we can see the data corresponding to it, [03:28:30]right? Simple and straightforward. If you are [03:28:35]not into web designing and don't understand HTML and CSS [03:28:40]at all, you can watch the lecture to enhance your knowledge [03:28:45]and just use my template; you need not experiment with it. [03:28:50]If you have a good understanding of HTML, CSS, and [03:28:55]web designing concepts, then I urge you to apply [03:29:00]your creativity and make this report look even better. [03:29:05]In that case, I would also request you to share your report with [03:29:10]me so that I can highlight it to our community.
Yes.
[03:29:15]Now let me take you through the content of this file.
[03:29:20]At the top we have the body of this page, and it is divided [03:29:25]into two division tags: one is for the vertical navigation [03:29:30]and the second is for the content area.
[03:29:35]This division has these links to be shown here.
And [03:29:40]upon clicking on a link, what content should be displayed is mentioned [03:29:45]inside this content area div.
Look, [03:29:50]our template is really nice. We have separated the styling [03:29:55]part and the JavaScript from the content so well that [03:30:00]if we have to add a few more servers, we can do it very easily.
[03:30:05]Just take a look
[03:30:10][03:30:15][03:30:20][03:30:25]easy, right?
I hope you got this concept [03:30:30]very clearly, because this is definitely going to help you; [03:30:35]yes, it is definitely going to help you in understanding the report [03:30:40]consolidation script.
Yeah, [03:30:45]this is such a simple web page, but without CSS, [03:30:50]it would have looked like this.
[03:30:55]Not at all attractive, right? [03:31:00]The content you are seeing over here, if you find [03:31:05]it nicely formatted and eye catchy, credit goes to [03:31:10]this CSS style we have added over here.
[03:31:15]Makes sense.
By the way, it is my favorite pastime to change the color [03:31:20]combinations from here and see the results.
[03:31:25]Come on, it is fun. Love it.
[03:31:35][03:31:40][03:31:45][03:31:50][03:31:55][03:32:00][03:32:05]moving [03:32:10]on.
If you notice, for all of these divs, the display [03:32:15]is turned off. Why is it so? Well, this [03:32:20]is because we do not want to see this content until the button [03:32:25]is clicked, right? Every link in the bar has [03:32:30]a function attached to it, which is invoked on clicking [03:32:35]on this link. Whenever we click here, this function [03:32:40]gets called. And what this does is turn off the display [03:32:45]for all the others except the one which we want to show.
[03:32:50]Simple enough.
Lastly, [03:32:55]we have these few lines of JavaScript to activate [03:33:00]the tab on which we just clicked. As you can see here, [03:33:05]we are just adding a class on the link on which we [03:33:10]just clicked, and then CSS takes care of highlighting it.
[03:33:15]Got it? Yes.
[03:33:20][03:33:25]Please notice this part of the report at the top [03:33:30]is dynamic.
Every time you run the report on different [03:33:35]server, you can expect different validation results.
[03:33:40]This is not in our control; every time it can differ. But this [03:33:45]part, containing CSS and JavaScript, remains the same [03:33:50]no matter what you want to display here. Correct. [03:33:55]Does it bother you if I just copy this static CSS [03:34:00]and JavaScript from this file [03:34:05]and paste it [03:34:10]inside the PowerShell script as a string variable? This way, [03:34:15]whenever we need to use this CSS and JavaScript, we can just refer to [03:34:20]it by this variable name and the job is done.
This makes sense, [03:34:25]right?
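(Storing the static part is as simple as a PowerShell here-string; the CSS and JavaScript shown are just stand-ins for the full blocks from the template.)

# Static CSS/JavaScript for the report, kept in one variable and reused on every run
$styleAndScript = @"
<style>
  body { font-family: 'Segoe UI', sans-serif; }
  /* ...rest of the CSS from the template... */
</style>
<script>
  // ...tab-switching JavaScript from the template...
</script>
"@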
All right, my dear friends, I [03:34:30]hope now you are clear on the design aspects of this web page. [03:34:35]If you understood it well, you can use this template not just in this automation, but [03:34:40]in so many other ways. Perhaps [03:34:45]in every report we need a navigation and a content [03:34:50]area, so this template is your friend. Keep it [03:34:55]safe.
Yes, well, that's it for this lecture. Take good [03:35:00]care of yourself. Thank you.
[03:35:05][03:35:10]Hello my dear friends, and welcome to this lecture.
[03:35:15]In the previous lecture, we understood this template. [03:35:20]I am sure you liked it and must have done a couple of experiments on [03:35:25]it to enhance it further. Now look, we [03:35:30]have data in the form of these HTML files, as well as we [03:35:35]have the CSS and JavaScript for beautification stored in [03:35:40]this variable. I'm asking you, what can stop [03:35:45]us from combining these two smartly in order to get a beautiful [03:35:50]looking consolidated report?
Absolutely nothing, right?
[03:35:55]Well, this is exactly what we are doing in this report [03:36:00]consolidation script. We know the concept already; [03:36:05]in this lecture,
let's see how we can implement this concept [03:36:10]using PowerShell. Let's get started. Up to here, [03:36:15]the script is the same as the server validation script, which we already [03:36:20]discussed.
We have these individual reports at this directory.
[03:36:25]These variables are coming from the XML file that we passed [03:36:30]to our script.
Understand this [03:36:35]concept very clearly.
We are adding this dynamic text here, [03:36:40]which depends on these server reports. We [03:36:45]need to design our script in such a way that we need not [03:36:50]worry about how many files are available here. It could be two [03:36:55]files or 100 files; we should be able to consolidate them all [03:37:00]into a single report using our PowerShell script, right? [03:37:05]We created these two string variables and added this [03:37:10]text in the beginning. This string is for storing this navigation [03:37:15]HTML, and this string is for storing the content. [03:37:20]Here, the structure is like this: we have some [03:37:25]text which is fixed at the top and some text fixed at the [03:37:30]bottom. In the middle, we have this for loop which [03:37:35]adds data for each report, one by one, to the string. This [03:37:40]makes sense. Please note, this summary [03:37:45]is a special case. Unlike the others, this table isn't coming [03:37:50]directly from these files; we are creating this summary as per [03:37:55]our logic. You will see some extra PowerShell statements [03:38:00]to create this summary, okay?
[03:38:05]All right, here we are going to the validation [03:38:10]reports directory and getting the list of all HTML files [03:38:15]available at this path, then processing each report [03:38:20]one by one. As you can see, the file name carries the server [03:38:25]name and tier information.
We are extracting it [03:38:30]and storing it into these variables.
Then [03:38:35]we are creating the HTML link to be kept in the navigation [03:38:40]over here and adding it to this string. Next, [03:38:45]we are reading the HTML file and making use of the text stored [03:38:50]in it. One smart thing what we have done here is, [03:38:55]because we know each row which has information messages [03:39:00]will definitely have this substring, we are counting the number [03:39:05]of information messages in this report like this. The same goes for the [03:39:10]success, fail, and exception counts. If [03:39:15]at all the exception count or the error count is greater than [03:39:20]zero, we want to show it in the red color using this [03:39:25]background property, so that it can be highlighted [03:39:30]very easily. This way we end up creating this summary [03:39:35]table, which we can display at the top. I feel [03:39:40]it makes a lot of sense, particularly when you are validating a large number [03:39:45]of servers, like 50 plus servers.
In those cases, this summary [03:39:50]will be really helpful.
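(Counting the statuses by substring can be sketched like this; the substrings and variable names are illustrative, since the real script counts whatever fixed text each row contains.)

foreach ($reportFile in Get-ChildItem -Path $reportsDirectory -Filter '*.html') {
    $content = Get-Content -Path $reportFile.FullName -Raw

    # Every row with a given status contains a known substring, so counting matches is enough
    $successCount   = ([regex]::Matches($content, 'Success')).Count
    $failCount      = ([regex]::Matches($content, 'Fail')).Count
    $exceptionCount = ([regex]::Matches($content, 'Exception Occurred')).Count

    # Highlight problem cells in red so they stand out in the summary table
    $failCell = if ($failCount -gt 0) { "<td style='background-color:red'>$failCount</td>" }
                else                  { "<td>$failCount</td>" }
}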
Yes, cool. [03:39:55]This structure which you see here at the top is created [03:40:00]over here. This is optional; if you don't need it, just delete this [03:40:05]and you are fine. Here we are adding the validation [03:40:10]summary so that whenever somebody clicks on this link, this particular summary [03:40:15]can be displayed. And that's it. This is the end of the for loop; [03:40:20]we just need to close it. Pretty basic HTML concept. [03:40:25]Yes. With this we are all set to create our HTML [03:40:30]report file. Depending upon where you want to place your [03:40:35]final output file, you can modify this variable. Right now, [03:40:40]we are placing it inside this reports directory. Here we are [03:40:45]adding the navigation HTML, the summary HTML, the content HTML, [03:40:50]and the CSS and JavaScript which is stored inside [03:40:55]this variable.
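(The assembly of the final file, stripped to its essentials, looks roughly like this; variable names, the showServer JavaScript function, and the output file name are placeholders.)

$navigationHtml = '<div class="nav">'       # fixed opening text
$contentHtml    = '<div class="content">'

foreach ($reportFile in Get-ChildItem -Path $reportsDirectory -Filter '*.html') {
    # File name convention <server>_<tier>.html gives us both pieces of information
    $serverName, $tierName = $reportFile.BaseName -split '_'

    $navigationHtml += "<a href='javascript:void(0)' onclick=""showServer('$serverName')"">$serverName ($tierName)</a>"
    $contentHtml    += "<div id='$serverName'>" + (Get-Content -Path $reportFile.FullName -Raw) + '</div>'
}

$navigationHtml += '</div>'                 # fixed closing text
$contentHtml    += '</div>'

# Stitch the static CSS/JavaScript, navigation and content into one file
$finalReport = Join-Path $reportsDirectory 'ConsolidatedValidationReport.html'
($styleAndScript + $navigationHtml + $contentHtml) | Out-File -FilePath $finalReport -Encoding UTF8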
[03:41:00]I have created two variants of this style.
[03:41:05]If you select one, [03:41:10]you get this table which has alternate backgrounds.
[03:41:15]Whereas if you [03:41:20]go by the second option,
[03:41:25]you get this report where [03:41:30]we have added color coding for each row based on whether it is [03:41:35]success, failed, exception, et cetera. This is done by adding a [03:41:40]small piece of JavaScript over here.
[03:41:45]You can select whichever you find better. Out of these two variants, [03:41:50]I wanted to enable you to choose for yourself, so I have [03:41:55]given you both the options.
[03:42:00]Now let's talk about one scenario. Right now [03:42:05]we have a limited number of servers and this report is looking pretty [03:42:10]good, right? Our report consolidation script is doing a fair [03:42:15]job there; you must give credit where it is due. All right, but [03:42:20]what if we have too many servers to validate? The [03:42:25]server validation script you will run on all of the servers, and maybe [03:42:30]you will not have just these many reports; you will have a lot more. [03:42:35]Yeah, [03:42:40]so instead of having 3 or 4 reports, [03:42:45]you have 60 reports. Okay, it must be interesting [03:42:50]to see how our report consolidation script will behave in this case. Correct.
[03:42:55]Let me just go to this path and execute the [03:43:00]script in PowerShell. Enter,
[03:43:05]and there we go look [03:43:10]at this.
Our script is not impacted by this change at all. [03:43:15]It is clearly able to accommodate all of these 60 [03:43:20]servers inside this report, right? So you can click on any of the reports [03:43:25]and see the status of the different validation test cases. Or you [03:43:30]can directly click in this report itself and it will take you to that particular [03:43:35]validation report.
Looks pretty good, right?
[03:43:40]We can safely conclude our report consolidation script [03:43:45]is good enough for handling a large number of server reports as well. [03:43:50]Right. I have one more [03:43:55]interesting question for you. So if we go inside the reports directory, [03:44:00]here is the consolidated report, correct? This report [03:44:05]is there with me. Nobody is taking it away. Tell me one good reason [03:44:10]why we should still keep these individual reports at all. One [03:44:15]good reason. I'm sure you will not be able to tell, because whatever these [03:44:20]reports contain is already part of this report. So [03:44:25]why would I keep these many reports at all? No, they are not [03:44:30]bringing any happiness into my life. Right. So what can we [03:44:35]do to get rid of them? What can we do [03:44:40]to get rid of these reports as soon as our consolidated report is [03:44:45]ready?
Let's take a look here. I will write [03:44:50]my statement. Firstly, we'll log the message that [03:44:55]this is a temporary reports directory only, as soon as this final consolidated report [03:45:00]is created, correct. Then we need to write the statement for deleting it: [03:45:05]Remove-Item. Then, what is this folder? [03:45:10]As per our logic, this folder is the one [03:45:15]inside the reports directory; we have created this folder with the customer name and environment name. [03:45:20]This folder is this. I'll use this variable name: [03:45:25]Remove-Item, dollar, this thing. And then, [03:45:30]since there are files inside it, while deleting, PowerShell will ask for [03:45:35]confirmation. To bypass it, I'll have to specify the -Recurse parameter. [03:45:40]Right now, let me launch PowerShell again. [03:45:45]Here, [03:45:50]go to this corner. Okay, [03:45:55]I'll go inside. Okay, I'm deleting this just for our understanding. [03:46:00]Right now, only this folder is here. Okay, so it is the report [03:46:05]consolidation script we want to execute.
Once this script is executed, [03:46:10]we expect a consolidated report to be created here.
And this folder [03:46:15]should go away, right? Let's see if that happens or no. [03:46:20]Okay, report is generated and if I go here, the folder [03:46:25]containing temporary reports is gone.
And I'm not sad about it [03:46:30]because whatever it could have given me is already available in this report. [03:46:35]I'll take it from there. Yeah. Perfectly fine. You are [03:46:40]not sad because those temporary reports are gone. Yeah.
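A minimal sketch of that cleanup step might look like this, assuming a $tempReportsFolder variable holding the customer/environment folder and a Write-Log style helper like the one used in this course:

```powershell
# Once the consolidated report exists, the per-server report folder is no longer needed.
# $tempReportsFolder is a placeholder for the customer/environment folder built earlier in the script.
Write-Log -Level 'INFO' -Message "Deleting temporary reports folder $tempReportsFolder"   # assumes the course's logging helper

if (Test-Path -Path $tempReportsFolder) {
    # -Recurse removes the folder and everything inside it without prompting for each file
    Remove-Item -Path $tempReportsFolder -Recurse -Force
}
```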
All right.
My dear [03:46:45]friends, I hope now you are clear on this report consolidation [03:46:50]script. Not just the PowerShell, but you should be very clear [03:46:55]on the concept, so that tomorrow if you want to make changes [03:47:00]here and there in this script, you should be able to make those changes [03:47:05]without any trouble.
This is my goal.
Well, that's it for this lecture. [03:47:10]Take good care of yourself. Thank you.
[03:47:15][03:47:20]Hello my dear friends and welcome to this lecture.
[03:47:25]We have used PowerShell's Logging module for [03:47:30]writing our log messages.
Correct. Launching PowerShell [03:47:35]in administrator mode.
I'm sure by now you are tired of [03:47:40]seeing me installing this module: Install-Module Logging. [03:47:45]We are writing log messages throughout our scripts and [03:47:50]it is an important part of our script right now. If [03:47:55]you are dealing with three, four, five servers, then you can go to each server [03:48:00]and install this module. Not a problem. But if you are dealing [03:48:05]with hundreds of servers, let's say you are only asked [03:48:10]to validate the servers and present the report. In that case, [03:48:15]first of all, you are required to install this module on each of the servers, [03:48:20]and only then can you use this code. It doesn't sound good, [03:48:25]right? Let's do an enhancement for this.
[03:48:30]How about automatically installing the Logging [03:48:35]module if it is not present already? Sounds very easy, [03:48:40]right? For this, what we have done, let me show you. [03:48:45]Because we will be installing the module from our script itself, [03:48:50]I want to launch it in admin mode.
I [03:48:55]have launched PowerShell in administrator mode here. I'll just [03:49:00]copy this path and then file open. [03:49:05][03:49:10]There we go. This is the small piece of code I [03:49:15]have added.
Firstly, check whether the module is there [03:49:20]or not. If it is already present, don't do anything. [03:49:25]Just continue in the flow after writing this message, right? But if [03:49:30]the module is not there, install it immediately, right? Once [03:49:35]the module is installed, you continue in the flow. This is the simple logic we have added [03:49:40]at the start itself. And this will do the job, right? Let me [03:49:45]uninstall the module.
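For reference, the check-and-install logic just described can be sketched roughly like this (module name 'Logging', as used throughout this course):

```powershell
# Install the Logging module only if it is not already available on this machine
if (Get-Module -ListAvailable -Name 'Logging') {
    Write-Host 'Logging module is already installed.'
}
else {
    Write-Host 'Logging module is not installed. Installing the module now...'
    # Requires an elevated session and internet access to the PowerShell Gallery
    Install-Module -Name 'Logging' -Force -Scope AllUsers
}
Import-Module -Name 'Logging'
```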
Uninstall-Module Logging. Okay, [03:49:50]now let me execute this line to see [03:49:55]if the module is there.
You can see right now we do not have logging [03:50:00]module installed in the system.
Now let's execute the script. [03:50:05]I'm clearing my screen. Execute it with a server name, I don't care [03:50:10]what name.
Let me press Enter. [03:50:15]"Logging module is not installed, installing the module now." Right, [03:50:20][03:50:25]it is installing the module and then it will continue in the flow.
[03:50:30][03:50:35]Okay, [03:50:40]Module is indeed installed.
These errors are because we are not at [03:50:45]the right path.
Let me switch directory.
[03:50:50]I'll go here, cd to this path, because inside our script [03:50:55]we are referencing the library folder, log folder, reports folder, et cetera. [03:51:00]We were not at that path, so how would it find them? Because of that, we got those [03:51:05]errors. Clearing my screen again, and now we are at this path. The module is also [03:51:10]installed. Now, instead of Write-Host,
let me use [03:51:15]something that looks a little bit different [03:51:20]than the other messages. Boom, [03:51:25]yes. Saving it. Let me run it now. Server name, [03:51:30]tier name, web.
[03:51:35]This time you can see "logging module is already installed". This message has come [03:51:40]because, basically, it checked in the beginning itself whether the module is [03:51:45]there or not. Because it was there, it simply skipped it. Adding this [03:51:50]simple logic into our script has made it smart. Now [03:51:55]we need not worry about which server we are going to execute this on. [03:52:00]We don't care whether this module is there or not. The script will simply check: [03:52:05]if it is there, it will skip; otherwise, it will install the module. [03:52:10]How simple and cool is that? What have we done? We have improved [03:52:15]the code. Definitely, it is good. It is just that we have to see [03:52:20]the broader picture, understand the pain points, and try to fix them. [03:52:25]Let me try to address another problem. Tell me, [03:52:30]whenever we run this PowerShell statement, what exactly happens? [03:52:35]Well, you should be aware, we are connecting to the PowerShell Gallery repository [03:52:40]over the Internet, downloading the necessary files needed [03:52:45]for this module, and then placing them inside PowerShell's [03:52:50]module directory. Right, so it is saving us the effort of [03:52:55]searching for this module on the Internet, downloading it, and then finding [03:53:00]the place where to copy this module. All of this we do not have to do [03:53:05]when we are using Install-Module, right? But there is an important [03:53:10]item here to address. What if, due to security reasons [03:53:15]on your servers, you are not allowed to use the Internet? What will you do in [03:53:20]that case? Will you be able to download this module from the Internet? No. [03:53:25]Will this logic work anymore? No. Right, [03:53:30]what will you do in that case? Think. [03:53:35]Well, in that case we'll have to do one more [03:53:40]enhancement in our code. And that enhancement will be that we have to remove [03:53:45]the dependency on this logging module.
Right? Let's see [03:53:50]how we can do this. Let me go back. [03:53:55]I've already prepared the code for you.
[03:54:00]Common functions.
Yes, [03:54:05]this is the function which we had written. We wrote this function keeping [03:54:10]in mind that tomorrow we might want to change to some other [03:54:15]logging module, or maybe no logging module at all. How fast [03:54:20]we can make this transition is what matters. For this, what [03:54:25]we have done: the function we have written has no change here, just one message.
[03:54:30][03:54:35]Get the timestamp; the level is coming into the function like this, and then the message; [03:54:40]print it on the console. If you do not want to see the message on the console, [03:54:45]you can just comment it out. Right, then what are we [03:54:50]doing? Same thing, we are writing into the log file.
Now, what [03:54:55]is this log file? Let's see, inside the script, earlier [03:55:00]we were setting this debug level, then we were adding targets here. [03:55:05]We specified this is my file, right?
This time [03:55:10]this is my log file, this is the date, right? [03:55:15]And I'm just adding it to a file, then.
[03:55:20]This is my log file, right?
That's it. This is the change. [03:55:25]With this, now we do not have any dependency on the Logging [03:55:30]module. We have commented this out. We are all set. [03:55:35]This is the log file. We will call the same function. Again, no change there, [03:55:40]right? We will call exactly the same function with the same arguments. [03:55:45]This time it will write the message into the log file without [03:55:50]using the Logging module. Let me show it to you right away.
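A rough sketch of such a module-free logging function, under the assumption that the log file path is built from the date (the function and variable names here are illustrative, not necessarily the course's):

```powershell
# Module-free logging: timestamp + level + message, printed to the console and appended to a file
function Write-CustomLog {
    param(
        [Parameter(Mandatory)] [string] $Message,
        [ValidateSet('INFO', 'SUCCESS', 'FAIL', 'EXCEPTION')] [string] $Level = 'INFO'
    )

    $timestamp = Get-Date -Format 'yyyy-MM-dd HH:mm:ss'
    $line      = "$timestamp [$Level] $Message"

    # Print on the console (comment this out if console output is not wanted)
    Write-Host $line

    # Append the same line to the log file
    Add-Content -Path $script:LogFile -Value $line
}

# Example: a log file path built from the date, then the same call pattern as before
$script:LogFile = "D:\Logs\Validation_$(Get-Date -Format 'yyyyMMdd').log"   # hypothetical path
Write-CustomLog -Level 'INFO' -Message 'Validation started'
```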
[03:55:55]Let me launch PowerShell here, [03:56:00]and let me delete the existing log files and all, so it is [03:56:05]clean, right. Running the server validation [03:56:10].ps1 with a server name, anything, and tier name Web. Enter. [03:56:15]There we go.
We are getting the output on the console [03:56:20]as well. This error is coming because I do [03:56:25]not have IIS in my system. And I have not added this module either.
Right? [03:56:30]Go to the log file. There we go. Our log file is created [03:56:35]and messages are written inside it.
How cool is that? [03:56:40]Right, With this small change, we have completely [03:56:45]avoided the dependency on logging module.
Right?
[03:56:50]This is the enhancement we have done. I will definitely provide [03:56:55]you the source code of all the different versions or enhancements [03:57:00]that we do in these lectures. Yes, don't worry about this. [03:57:05]The takeaway from this enhancement is how you see a problem [03:57:10]and try to resolve it logically. More than these code [03:57:15]snippets, the approach which you are learning from me is going to help you in your [03:57:20]future. Well, that's it for this lecture. Take good care of yourself. Thank you.
[03:57:25][03:57:30]Hello my dear friends and [03:57:35]welcome to this lecture. In this lecture, we will see what [03:57:40]changes we can make in our application and server validation scripts [03:57:45]to be able to execute the scripts on a remote server, [03:57:50]so that we never have to log in to that remote server at all, be it for [03:57:55]executing the scripts or seeing the results. For this, [03:58:00]I have deployed these three virtual machines in my Azure subscription. [03:58:05]First is this terminal server from where we will execute our scripts. [03:58:10]This is a centralized server, or jump [03:58:15]server, whatever you call it. Here we will copy our code and [03:58:20]execute the scripts, as well as see the results. [03:58:25]But where will the script actually run? It will run on the SQL server [03:58:30]or application server or any other server of our choice. [03:58:35]And how we are able to execute scripts remotely [03:58:40]on these servers from this terminal server is because all of these [03:58:45]servers are part of our domain, right?
This is the domain [03:58:50]and this is the domain user which we have created, right?
[03:58:55]I have logged in into our terminal server. This is the place [03:59:00]where we will keep our code and execute on the remote machines.
[03:59:05]Right?
Before doing anything on the server, let's see [03:59:10]what are the changes we have made in our code, right?
I'll [03:59:15]go inside this and these are the two scripts where we have made certain [03:59:20]changes.
Very small changes though. I will explain them. Please [03:59:25]notice I have got rid of the log folder and reports folder. [03:59:30]Why? Because this is the code we are planning [03:59:35]to execute on the remote server, right? We do not want to keep [03:59:40]the log files and the output reports on these servers, right? [03:59:45]Instead, we want to see these results on a centralized machine. [03:59:50]It could be this terminal server or any other shared location, [03:59:55]but we do not want to keep them on these servers, correct? That is for sure. [04:00:00]For this reason, keeping the logs and reports directory [04:00:05]inside this doesn't make any sense, agree? Now our [04:00:10]code looks much simpler, right? Just the library, scripts, and [04:00:15]this configuration file are all we need to push onto the remote server.
[04:00:20]That's it, right?
Now, let me describe to you the change [04:00:25]which we have made in these scripts, a very small change. Let me tell you, [04:00:30]we want to publish the results of our report [04:00:35]in a shared drive so that they can be centrally managed, [04:00:40]right? For this reason we have created this share directory variable, [04:00:45]and this is the location where we are going to publish our results. [04:00:50]Now we are using the concept of PSDrives [04:00:55]and executing this statement to see if this drive [04:01:00]already exists or not. If it doesn't exist on the server [04:01:05]already, we are creating this shared drive, right? As simple as [04:01:10]that. Once this piece of code is executed, [04:01:15]this location will be known by the name output drive, [04:01:20]or any other sensible name which you could give. This [04:01:25]is the mapping we are creating here. And then here we are asking [04:01:30]our script: hey, inside the drive you have a reports [04:01:35]directory, use it for publishing your reports. Also, [04:01:40]you will see a log folder inside this shared location; use it [04:01:45]for writing your logs. Our script will run anywhere, [04:01:50]but it will know this is the location where it has to publish [04:01:55]the reports and log files. Correct. This perfectly [04:02:00]makes sense. I'm pretty sure there's no confusion left on [04:02:05]this concept.
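A minimal sketch of that PSDrive idea, assuming a hypothetical share path and drive name:

```powershell
# Sketch of the shared-output mapping (share path and drive name are placeholders)
$shareDirectory = '\\TerminalServer\ValidationResults'   # hypothetical UNC path

# Create the mapping only if a drive with this name does not already exist
if (-not (Get-PSDrive -Name 'OutputDrive' -ErrorAction SilentlyContinue)) {
    New-PSDrive -Name 'OutputDrive' -PSProvider FileSystem -Root $shareDirectory -Scope Script | Out-Null
}

# From here on the script only refers to the drive, not the physical location
$reportsDirectory = 'OutputDrive:\Reports'
$logsDirectory    = 'OutputDrive:\Logs'
```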
Right after this, we are using [04:02:10]the reports directory. You can see we are always referencing reports directory [04:02:15]or logs directory wherever needed. Right, In this [04:02:20]log directory, we are creating this log file where server name is also appended.
And what is the server name? The same server name which we passed as a parameter. [04:02:30]This is one thing. Secondly, earlier we were [04:02:35]using a logging module for writing the logs. I have already explained [04:02:40]this earlier. Now we do not need the Logging module. [04:02:45]This function itself takes care of printing the log messages [04:02:50]on the console as well as writing them into a log file. Correct. [04:02:55]We have spent a decent amount of time on this very simple concept. [04:03:00]I'm pretty sure this is very clear to you. Let me tell you, in the report [04:03:05]consolidation script also we have made the exact same change: [04:03:10]the same PSDrive we are creating here. No change at all, just [04:03:15]copy-paste, and these variables also, as is. Nothing new here, [04:03:20]right? With this simple change, now our script can run remotely. [04:03:25]All right, here we have just defined [04:03:30]this variable, but the location should be present on the terminal server, right? [04:03:35]For this reason, let me quickly create this shared folder,
[04:03:40]just anywhere we can create it. [04:03:45]I'm not going to use that drive; I will use the D drive instead. [04:03:50]Why? Because it has fewer files over here. Less distraction. [04:03:55]We can concentrate.
That's it.
By the way, you must know that in Azure, the [04:04:00]D drive should not be used for placing your data. Right? [04:04:05]I'm keeping it here because after this demonstration, anyway, I have to [04:04:10]dismantle these virtual machines. But this doesn't mean you should also use the D drive [04:04:15]if you are in Azure. If it is some other cloud, do whatever you want.
[04:04:20]Okay, sharing. [04:04:25]I do not want any security-related pop-ups. [04:04:30]Click. Because our scripts are going to write into this directory, [04:04:35]I'll give both read and write permissions and click OK. [04:04:40]Copy this path. Done.
[04:04:45]Now this is the shared location we have created, right? [04:04:50]Notepad, paste it here. [04:04:55]It is not used exactly in this way; I'll just change [04:05:00]this, yes, and remove this part. [04:05:05]That's it. Run it here, [04:05:10]there we go. You can see we are able to access the [04:05:15]shared folder as a network drive. Basically, the [04:05:20]way we are accessing it, our scripts will also access this directory, but [04:05:25]from the remote machines. This is the change.
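The lecture creates this share through the Explorer dialog; for reference, a roughly equivalent PowerShell sketch (assuming the SmbShare module available on modern Windows Server, with an illustrative folder, share name, and account) could be:

```powershell
# Create the folder and share it with read/write access so the remote scripts can publish output here
New-Item -Path 'D:\ValidationResults' -ItemType Directory -Force | Out-Null
New-SmbShare -Name 'ValidationResults' -Path 'D:\ValidationResults' -ChangeAccess 'PSAUTOMATION\psuser'   # account is illustrative

# Quick check: the UNC path should now be reachable
Test-Path -Path '\\TerminalServer\ValidationResults'
```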
Before [04:05:30]we invoke the scripts remotely, let me log in into one of the servers [04:05:35]and see if the script is all fine, right?
Okay, I [04:05:40]have logged in to the application server. Let me copy [04:05:45]this code to our application server. Because our scripts are intelligent, [04:05:50]we can copy them anywhere and they will understand. [04:05:55]All right, the scripts are copied. Let me go inside and [04:06:00]launch PowerShell here.
[04:06:05]Now it's time to execute the server validation [04:06:10].ps1 with the name of the server, [04:06:15]the server name,
[04:06:20]and [04:06:25]so many errors. Why? Let's take a look.
[04:06:30][04:06:35]My bad, we need to [04:06:40]create these folders here.
[04:06:45][04:06:50]This reports folder and [04:06:55]logs folder should be here, reports [04:07:00]and logs.
[04:07:05]Okay, now let's execute the script [04:07:10]again.
[04:07:15][04:07:20]App server, and the type is App. [04:07:25]Enter. Then go inside [04:07:30]validation results, logs. A new log file is created.
[04:07:35][04:07:40]Yes, log file is created over here.
Inside our reports directory, this report is generated, right? [04:07:50]So where did we execute the script? On the application server. And where [04:07:55]are the results? They are inside the shared directory, correct, which I can [04:08:00]access from anywhere. It's a network path.
Right? Now, what this means [04:08:05]is our scripts are fully ready to be executed remotely. [04:08:10]In the next lecture, we will understand how to execute [04:08:15]this code from the terminal server on the remote machines using [04:08:20]Invoke-Command. Let's continue working on this in the next lecture. [04:08:25]See you there.
Take care. Thank you.
[04:08:30][04:08:35]Hello my dear friends and welcome to this lecture.
In [04:08:40]the previous lecture, we made our scripts ready to be [04:08:45]remotely executed on these servers from our terminal [04:08:50]server, in such a way that once executed, these [04:08:55]scripts direct their output to a shared location. This [04:09:00]shared location could be on the terminal server or any other server. [04:09:05]Yes, I'm sure in whatever we discussed in the previous [04:09:10]lecture, there was a lot of repetition, because we had previously [04:09:15]discussed similar things as well. But in this lecture we are going to discuss [04:09:20]a lot of new concepts and you have to fully concentrate on this [04:09:25]lecture. Otherwise you will not understand anything.
That's my promise.
[04:09:30]Okay, as you remember, in the previous lecture we [04:09:35]had to copy this code manually to the target server in [04:09:40]order to execute the scripts there. But we do not want to do that, right? [04:09:45]Because what this would mean is, if you are executing the script on [04:09:50]ten servers, you will have to manually log in to each and copy this code, [04:09:55]and only then will you be able to execute the scripts. Maybe the output will be published to the shared directory, [04:10:00]but logging into the VM and then executing is itself a challenging [04:10:05]task, right? In order to get the full benefits, [04:10:10]we do not even want to copy this script onto the server. Instead, we want [04:10:15]PowerShell to take care of that part also, right? What we will do [04:10:20]now is copy our code to this shared repository, which [04:10:25]is nothing but another shared folder, right? Most organizations [04:10:30]have this concept where they use a network location to store [04:10:35]all the different software that is allowed within the organization. I'm [04:10:40]just talking about a very similar concept. You create a shared folder and copy [04:10:45]your code there. And then from the terminal server we will execute [04:10:50]a script on these servers which will first pull [04:10:55]the code from the shared repository, execute it on the servers, and then [04:11:00]place the output in the output directory. Very simple, right? [04:11:05]We are just trying to mimic what we have done manually. Correct? [04:11:10]I'm sure this is very exciting. Yes, let's get started then. [04:11:15]This is that script which we are talking about, which we will try [04:11:20]to invoke on these servers in order to validate them. And it has a [04:11:25]very simple task. What this script does is pretty clear from [04:11:30]the name itself, right?
So if you have a better name, please suggest me.
[04:11:35]What this does is pull the script from somewhere, right? [04:11:40]And then run it. Wherever we execute it, it will pull [04:11:45]the code and run it. That is a simple task it is doing.
[04:11:50]Now please concentrate.
Firstly, we have this variable, [04:11:55]local working directory: on the server where you are executing the [04:12:00]script, somewhere you want to copy the code and execute it. [04:12:05]Right? I'm not saying it should be the D drive; it could be your F drive, whatever [04:12:10]drive. But there has to be some location where you want to copy your code [04:12:15]and execute. Yes. For this reason we are having this variable. [04:12:20]Then we are having this remote shared repository UNC [04:12:25]path. What is this location? Basically a shared location [04:12:30]where we are going to store our code. In our case, only this [04:12:35]folder, but it could be used for a variety of purposes.
[04:12:40]Before I forget, let me create the shared location on the terminal [04:12:45]server. I'm going here again. We can create it [04:12:50]anywhere, but I'm creating it on this drive itself. [04:12:55]This is my shared folder. Properties, [04:13:00]share with everyone.
[04:13:05]This one need not be [04:13:10]written to by any script, right? This is the shared repository; I don't want anything random [04:13:15]to come and spoil this. I'm only giving read permissions, not [04:13:20]write permissions. Yes, [04:13:25]done. What it has copied is this [04:13:30]location: terminal server, remote shared repository, which is exactly this thing. [04:13:35]Instead of discussing it this way, let me copy this script onto this remote [04:13:40]server and try to execute it line by line for you.
[04:13:45][04:13:50][04:13:55]Okay, my local working directory is D. Yes, [04:14:00]then this is the shared drive. [04:14:05]We are creating this mapping here so that we can [04:14:10]use PSDrives. Executing this. With [04:14:15]this, the drive is created. Now you have to understand a [04:14:20]very tricky concept here. Okay, please concentrate.
[04:14:25][04:14:30]We are able to create the drive here, but because our [04:14:35]script will run remotely, we will not be able to pass this credential there. [04:14:40]Instead of this, we have to use this piece of code [04:14:45]here. We have used this $using: variable so that we can pass [04:14:50]the credentials to our remote script from outside. Yes, [04:14:55]if you don't use it this way, you will face issues and you'll have to spend [04:15:00]a day fixing them. Let's not do that.
I'll omit this section. [04:15:05]Now, while invoking, we will pass the credentials, [04:15:10]and the same credential will be used for mapping this PSDrive.
[04:15:15]I know you are confused here, but don't worry you are going to understand it [04:15:20]very clearly by the end of this lecture. Okay, moving on. [04:15:25]So where is the code to execute? It should be here. [04:15:30]But do we have our code here? Not yet. So, let's copy it.
[04:15:35]We have copied our code to the [04:15:40]remote shared repository. Okay. [04:15:45]Now, what is our local directory? [04:15:50]It looks like this. Inside the [04:15:55]drive, we are creating a folder, local code copy.
[04:16:00]This is the location where we are going to copy our code.
Yes. [04:16:05]Let's execute this.
What this line of code will [04:16:10]do is, if from the previous execution we had our code copied [04:16:15]at this location, it will be removed. This way you [04:16:20]are ensuring that even if you make any changes in the repository, [04:16:25]those changes will be used in the next run. If you do not keep [04:16:30]this statement, every time you make a modification in your validation [04:16:35]script, it will not be reflected on the remote servers, because [04:16:40]we are not removing the old code, right? This is very important. [04:16:45]Then we are copying the code from here, [04:16:50]which is the shared repository: our application and system validation scripts.
[04:16:55]And where are we placing them? In this location, [04:17:00]right: the drive, local code copy. Right now [04:17:05]you do not see any such file; let me execute this code.
[04:17:10]Then you can see [04:17:15]this local code copy folder is created over here.
[04:17:20][04:17:25]Lastly, what we are doing is changing the present working directory [04:17:30]to this folder; basically, the control is now here. [04:17:35]Then we are executing our scripts, which take care [04:17:40]of the validation and place the results in the shared directory. [04:17:45]This small piece of code is doing everything [04:17:50]that we had to do manually in order to execute our code.
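Putting those pieces together, the "pull and run" helper might be sketched like this; the paths, drive name, script name, and parameter names are assumptions, not the course's exact code:

```powershell
# Rough sketch of a standalone "pull and run" helper script
param(
    [string] $ServerName,
    [string] $TierName,
    [pscredential] $RemoteCredential
)

$localWorkingDirectory = 'D:\'                                   # where code is copied on the target server
$remoteSharedRepo      = '\\TerminalServer\SharedRepository'     # hypothetical UNC path holding the code

# Map the shared repository as a PSDrive; the credential comes from the caller
# (when run via Invoke-Command it arrives through $using: or, as here, a parameter)
if (-not (Get-PSDrive -Name 'RepoDrive' -ErrorAction SilentlyContinue)) {
    New-PSDrive -Name 'RepoDrive' -PSProvider FileSystem -Root $remoteSharedRepo -Credential $RemoteCredential | Out-Null
}

# Remove any copy left over from a previous run so repository changes always take effect
$localCodeCopy = Join-Path $localWorkingDirectory 'LocalCodeCopy'
if (Test-Path $localCodeCopy) { Remove-Item -Path $localCodeCopy -Recurse -Force }

# Pull the latest code from the shared repository
Copy-Item -Path 'RepoDrive:\PowerShellForAutomation' -Destination $localCodeCopy -Recurse

# Switch to the freshly copied code and run the validation, which writes its
# reports and logs to the shared output location on its own
Set-Location -Path $localCodeCopy
.\ServerValidation.ps1 -ServerName $ServerName -TierName $TierName   # hypothetical script and parameter names
```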
[04:17:55]Fantastic. Hello friends. If [04:18:00]you are following along, you can probably also feel the tiredness [04:18:05]in my voice.
Let me tell you, while I was making [04:18:10]this lecture, I have this habit of testing the code which I'm going to present [04:18:15]to you in the next five minutes,
so that at the time of recording, I do not face [04:18:20]any issue, right? Because we have to do the remoting, I [04:18:25]had to connect to this server. You can see even ping is not [04:18:30]working to this server. I was breaking my head; I tried everything to figure out [04:18:35]why it was not working. In the end,
what was the problem [04:18:40]here?
I gave this name to the server.
[04:18:45][04:18:50]When I go to the server and show you the computer name [04:18:55]here, the computer name is registered as application server.
[04:19:00]While I was trying to ping the application server, you won't [04:19:05]believe it.
I was breaking my head over why this was not working, why this [04:19:10]was not working. I tried everything, and at the end of the day, this is the problem.
[04:19:15]Nothing from my side. It's just that this entire server name, right.
It is [04:19:20]too big for it to handle.
Whenever we try to set it, it will say [04:19:25]longer than 15 bytes.
It won't accept it for this reason: 17 [04:19:30]characters. All it wanted was 15 characters, so this [04:19:35]"er" it automatically removed without giving any warning, any error. [04:19:40]If I could get any message about it, maybe I would [04:19:45]have fixed it faster. I don't know. But this ate up a lot of my time. And [04:19:50]secondly, another statement which I was trying, which is this one, [04:19:55]right? New-PSDrive, remotely.
This statement was not working [04:20:00]at all. For 90 minutes I was just trying this and it didn't [04:20:05]work. Finally, it started working all of a sudden; [04:20:10]I changed literally nothing.
I [04:20:15]hope you do not face this issue.
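For what it's worth, a tiny check like the following would have surfaced the problem early; the name used here is purely illustrative:

```powershell
# NetBIOS computer names are limited to 15 characters; longer names get silently truncated
$desiredName = 'applicationserver'    # 17 characters, purely illustrative
if ($desiredName.Length -gt 15) {
    Write-Warning "'$desiredName' is $($desiredName.Length) characters; Windows will register it as '$($desiredName.Substring(0,15))'."
}
```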
[04:20:20]If I have made a small change here and there, please [04:20:25]understand it. Here we are just invoking the script. [04:20:30]Because every time we might want to change the server name and its tier, [04:20:35]I've made these two variables and I'm passing these two to [04:20:40]our script as parameters, right? This is the script which we are going to invoke [04:20:45]on the different servers remotely. Where am I keeping this script? [04:20:50]Anywhere you can. Let's say I'm keeping it here, for centralized [04:20:55]control.
All right, my dear friends, we are done [04:21:00]with the difficult work now. We are just left with executing this script [04:21:05]remotely. Let's meet in the next lecture and do this. See [04:21:10]you there, take care.
[04:21:15]All right, so we have to execute this [04:21:20]statement. Firstly, we are getting the credentials. Let me get them. [04:21:25]It will ask for the credentials of this particular user. [04:21:30]Let me supply the password. Enter. [04:21:35]The credentials are stored. Right. [04:21:40]Now we are using Invoke-Command to invoke our script [04:21:45]on this application server, right? Of course you cannot [04:21:50]use the full server name; we are stopping short. I just told [04:21:55]you why we cannot use the full name, right? Then we are using the [04:22:00]file path. This is the script we are executing. Yes, [04:22:05]our script.
[04:22:10]As you know, this script accepts [04:22:15]two parameters, server name and tier name, and it does nothing [04:22:20]but pass them on to our server validation script, right?
Because this script is [04:22:25]like an intermediary only; it's not doing anything of its own, right? [04:22:30]It just passes these along. We are giving the server name [04:22:35]here; I can call it application server. [04:22:40]Now, look at this.
Here is the benefit, right, though it is completely [04:22:45]unplanned. But we are getting an advantage out of this. For whatever reason, I [04:22:50]cannot name the server "application server", right? But [04:22:55]because we have not used this name, and instead we have only asked [04:23:00]what name you would like to show in the report, for this reason, [04:23:05]here the user can pass whatever name they want, right? [04:23:10]This is completely unplanned, I'm telling you.
But we are getting an advantage out of this, right? [04:23:15]What kind of tier is it? Let's say the App tier, [04:23:20]yes. And then we are using the credentials which we created [04:23:25]earlier, right?
Why are we passing this? Because without these [04:23:30]credentials, since this script will run on a remote server, it will not [04:23:35]be able to communicate with the shared drive, because additional authentication is [04:23:40]needed. Right?
For that reason we are passing the credentials here which we [04:23:45]stored. Yes. Now I'm about to execute this statement, [04:23:50]okay? You can see this [04:23:55]is our application server on which the script will execute. And right now the drive [04:24:00]is empty, okay? Completely live, no editing at [04:24:05]all. Running it. You can see the local [04:24:10]code copy folder is created, containing our code. Right [04:24:15]here, the script is getting executed on this server, but we are sitting on [04:24:20]the terminal server. This is the concept we are talking about: whenever these scripts are [04:24:25]executing, wherever they're writing their reports, they're writing [04:24:30]into this particular shared directory. Correct. Because [04:24:35]this code is nothing but coming from the repository. Okay, [04:24:40]now let's see if it has generated any results for us. [04:24:45]So I'll go to the data drive. Where are the results? Validation results. [04:24:50]Firstly, let me see the logs. And yes, the log is indeed created. [04:24:55]Open it; this is our log file. [04:25:00]It is server specific, right? So we [04:25:05]have executed it on the application server, so its log is here. Okay. The reports, [04:25:10]you can see. Go inside application server, [04:25:15]open. [04:25:20]Bingo. These are the validation results of [04:25:25]our application server. How cool is that, right?
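A minimal sketch of that Invoke-Command call, assuming the helper script and parameter order shown earlier (computer name, paths, and display values are placeholders):

```powershell
# Remotely invoke the "pull and run" helper on the target server
$cred = Get-Credential -Message 'Domain account used for remoting and for the shared drive'

$invokeParams = @{
    ComputerName = 'applicationserv'                       # NetBIOS name, at most 15 characters
    FilePath     = 'D:\Scripts\PullAndRunValidation.ps1'   # the helper script sketched earlier
    ArgumentList = 'Application Server', 'App', $cred      # display name, tier, credential
    Credential   = $cred
}
Invoke-Command @invokeParams
```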
I [04:25:30]don't know how clear you are on this, but let me reiterate for you. We are running [04:25:35]the script on the terminal server.
[04:25:40]It is going into a remote server, pulling the code there.
Then it [04:25:45]is doing the stuff there. Right. In the end, we are getting our results. [04:25:50]Okay, now let me execute the script on another server. Let's see if it is [04:25:55]working over there.
[04:26:00]I'll just replace this here, [04:26:05]a server here also. I'll just replace the name. This is [04:26:10]used for display, so I can pass anything. Okay, [04:26:15]clearing my screen,
[04:26:20]executing this line. Execute; [04:26:25]script executed, finished. [04:26:30]I did not even check what happened on the app server. All I'll do is go to [04:26:35]the shared location. Okay, here it is. The report is placed here. If [04:26:40]I see any issue in the report, I'll go to its log file. And its [04:26:45]log is also available over here. This is the benefit. Let me execute [04:26:50]it for the third server.
Why [04:26:55]am I executing it on the third server? Because I've been paying for it for the last one week [04:27:00]at least. Let me make use of it: SQL Server.
[04:27:05]SQL server, [04:27:10]and [04:27:15]its tier has to be Database, [04:27:20]right? Okay, let's execute this.
[04:27:25]We are not connected, [04:27:30]you can see, to the database server. I've not even done RDP, right? I'm sitting [04:27:35]on the terminal server. I have executed the script on the SQL server and [04:27:40]it looks like it is finished. I can see its log file is available. [04:27:45]In the reports directory, we are able to see the SQL database [04:27:50]report.
How cool is that? What else can you ask for? What [04:27:55]does this mean for you?
It means you don't have to log in to the app server. [04:28:00]You don't have to log in to the database server; you just [04:28:05]have to log in to the terminal server where the scripts are placed.
[04:28:10]You will just push this script on any server, execute the [04:28:15]validation test cases, come out, and you can share the report with anyone.
Right, how cool is that? [04:28:25]Lastly, let's generate a consolidated report out of [04:28:30]these tiny reports. Let's launch PowerShell locally and [04:28:35]run the report consolidation script. There [04:28:40]we go.
Right now it says it [04:28:45]is not accessible, because the report is generated on the shared location, [04:28:50]which it is not able to access at the moment. But because we are also the owner [04:28:55]of the shared location, we do not have any problem in accessing this [04:29:00]report. It is right here, correct? Just because I like [04:29:05]Chrome, I'll copy it to my local directory. [04:29:10]As you can see, this is a report containing all the different test cases. [04:29:15]Everything is here and this is all we wanted, right? [04:29:20]All right. My dear friends, now I hope you are [04:29:25]absolutely clear on what we have done.
[04:29:30]Centralized scripts have their own advantages when you are dealing with hundreds [04:29:35]of servers.
It saves you from the effort of logging in to each server [04:29:40]and then running your scripts, right. As well, I hope you [04:29:45]have now understood the benefits of using a shared repository where you can go [04:29:50]and make changes, and all the other servers will then use that [04:29:55]updated code, right?
As well as that, we used a shared location [04:30:00]for the output so that in one place you can see the logs and [04:30:05]reports of all the different servers. When all of this is done, [04:30:10]it saves a lot of your time while doing administrative activities.
[04:30:15]I'm going to provide you this code. Please feel free to download [04:30:20]it and practice in your environment.
I think after this long [04:30:25]lecture, both you and I are tired and deserve a nice cup of coffee.
[04:30:30]I'm certainly going for it. Well, that's it for this lecture. Take [04:30:35]good care of yourself. Thank you.
Copyright © 2023 Packt Publishing Limited
