
Friday, September 2, 2016

PowerShell v3 in a Year, Day 9: Execution Policies

Execution policies, in many ways, are a throwaway concept in PowerShell. For most people, once you understand them, that's probably the last time you will need to think about them until you log onto a new machine, reinstall Windows, or something along those lines. Yet, they are also one of the first things you will run into with PowerShell. After you run a few commands, chances are you will want to try to run a "script" by saving something to a file. That's when it rears its ugly head. Execution policies, as noted in the help, "let you determine the conditions under which Windows PowerShell loads configuration files and runs scripts." By default, the execution policy is locked down. So, unless you know to modify the setting, one of the first things you will encounter in PowerShell, when you try to run a script, is an error. A simple "Hello world" exercise quickly turns into "What the frack!?" This is a little unfortunate, because lots of folks who take a chance on PowerShell by themselves get greeted with a simple but frustrating hiccup. Hardly a great "introduction" to this great new shell all the koolaid drinkers rave about.

When working with execution policies it is good to know how they really work. You can set them at three levels:
  1. the local computer: The execution policy affects all users on the current computer. It is stored in the HKEY_LOCAL_MACHINE registry subkey.
  2. the current user: The execution policy affects only the current user. It is stored in the HKEY_CURRENT_USER registry subkey.
  3. the current session: The execution policy affects only the current session (the current Windows PowerShell process). The execution policy is stored in the $env:PSExecutionPolicyPreference environment variable, not in the registry, and it is deleted when the session is closed. You cannot change the policy by editing the variable value.
Before getting into the first two, I want to note that the current session setting lives in memory only. So, when a session closes, the session-specific ExecutionPolicy level disappears with it.

If those look familiar to an experienced Windows user, they should, because the first two work against the registry. When you call Set-ExecutionPolicy (or Get-ExecutionPolicy) you are really working with this registry value:
HKLM:\SOFTWARE\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell (value name: ExecutionPolicy)
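You can see this for yourself by reading the value straight from the registry; a quick sketch (reading the machine-level value requires no elevation):

```powershell
# Read the LocalMachine policy value directly from the registry - this is
# what Get-ExecutionPolicy consults for the LocalMachine scope.
$key = 'HKLM:\SOFTWARE\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell'
(Get-ItemProperty -Path $key -Name ExecutionPolicy).ExecutionPolicy
```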
When you change it, an enumeration of type Microsoft.PowerShell.ExecutionPolicy determines what the settings are. Below is a quick list for reference. You can enter either a string or an integer for these values when you call Set-ExecutionPolicy:
  • 0: Unrestricted
  • 1: RemoteSigned
  • 2: AllSigned
  • 3: Restricted (Default is an alias that shares this value)
  • 4: Bypass
  • 5: Undefined
Restricted effectively appears twice in the enumeration because Default maps to the same value of 3. If you try an integer of 6 you get this error:
Set-ExecutionPolicy : Cannot bind parameter 'ExecutionPolicy'. Cannot convert value "6" to type "Microsoft.PowerShell.ExecutionPolicy" due to enumeration values that are not valid. Specify one of the following enumeration values and try again. The possible enumeration values are "Unrestricted, RemoteSigned, AllSigned, Restricted, Default, Bypass, Undefined".
At line:1 char:21
+ Set-ExecutionPolicy 6
+                     ~
    + CategoryInfo          : InvalidArgument: (:) [Set-ExecutionPolicy], ParameterBindingException
    + FullyQualifiedErrorId : CannotConvertArgumentNoMessage,Microsoft.PowerShell.Commands.SetExecutionPolicyCommand
This is a pretty informative error, as it tells you all about the enumeration and which values are legit. As one of the MSFT guys pointed out, the fastest way to get information about anything enum-related is to pass in something you know will fail. PowerShell will come back with informative details like this and tell you what you need to know without a lot of research.
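If you would rather not force an error, a sketch of dumping the enumeration directly (this works for any .NET enum, not just this one):

```powershell
# List every name/value pair on the ExecutionPolicy enumeration.
[Enum]::GetValues([Microsoft.PowerShell.ExecutionPolicy]) |
    ForEach-Object { '{0}: {1}' -f [int]$_, $_ }

# You can also cast in either direction:
[Microsoft.PowerShell.ExecutionPolicy]2               # AllSigned
[int][Microsoft.PowerShell.ExecutionPolicy]::Bypass   # 4
```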

Below is a breakdown of the actual execution policies as outlined in the v3 help:
  • Restricted
    • Default execution policy.
    • Permits individual commands, but will not run scripts.
    • Prevents running of all script files, including formatting and configuration files (.ps1xml), module script files (.psm1), and Windows PowerShell profiles (.ps1).
  • AllSigned
    • Scripts can run. 
    • Requires that all scripts and configuration files be signed by a trusted publisher, including scripts that you write on the local computer.
    • Prompts you before running scripts from publishers that you have not yet classified as trusted or untrusted.
    • Risks running signed, but malicious, scripts.
  • RemoteSigned
    • Scripts can run.
    • Requires a digital signature from a trusted publisher on scripts and configuration files that are downloaded from the Internet (including e-mail and instant messaging programs).
    • Does not require digital signatures on scripts that you have written on the local computer (not downloaded from the Internet).
    • Runs scripts that are downloaded from the Internet and not signed, if the scripts are unblocked, such as by using the Unblock-File cmdlet.
    • Risks running unsigned scripts from sources other than the Internet and signed, but malicious, scripts.
  • Unrestricted
    • Unsigned scripts can run. (This risks running malicious scripts.)
    • Warns the user before running scripts and configuration files that are downloaded from the Internet.
  • Bypass
    • Nothing is blocked and there are no warnings or prompts.
    • This execution policy is designed for configurations in which a Windows PowerShell script is built in to a larger application or for configurations in which Windows PowerShell is the foundation for a program that has its own security model.
  • Undefined
    • There is no execution policy set in the current scope.
    • If the execution policy in all scopes is Undefined, the effective execution policy is Restricted, which is the default execution policy.
NOTE: One particular gotcha the help warns about focuses on UNC paths. Scripts run from a UNC path can be treated as coming from the Internet zone (depending on your zone settings), so a policy like RemoteSigned may block a script that actually lives on your own network.

One of the things that you need to pay close attention to is the concept of scope. There are several scopes for which you can set/get ExecutionPolicy. The trick is to use the -Scope parameter. As outlined below, there are 5 key scopes to pay attention to. As with the ExecutionPolicy enum, you can set/get values by integers as well. In this case, you are dealing with the Microsoft.PowerShell.ExecutionPolicyScope enumeration.
  • MachinePolicy: 4
  • UserPolicy: 3
  • Process: 0
  • CurrentUser: 1
  • LocalMachine: 2
The fact of the matter is that you can have several different settings in an environment; there is no interdependence between these scopes. I tried to track down where the Process-scope setting lived, but procmon wasn't divulging any secrets on this one (as noted earlier, it is kept in memory, in $env:PSExecutionPolicyPreference). When you change your execution policy for the local computer (the default) or the current user, it writes down to the registry. Also, from Vista forward, UAC requires you to run the shell with elevated privileges in order to change the LocalMachine setting.

To set an execution policy you can take two approaches:
  1. Set-ExecutionPolicy Undefined
  2. Set-ExecutionPolicy Undefined -Scope Process
The first option affects the default scope (LocalMachine), whereas the second one is more precise and targets the Process scope.
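Since each scope keeps its own setting, the quickest sanity check is to list them all at once; for example:

```powershell
# Show the policy at every scope (new in v3):
Get-ExecutionPolicy -List

# Read a single scope:
Get-ExecutionPolicy -Scope CurrentUser

# And the effective policy after precedence rules are applied:
Get-ExecutionPolicy
```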

There is a group policy approach that you can use to manage execution policies as well. If you use the group policy approach, these settings override all manual settings. The setting you modify via the GPO is "Turn on Script Execution" and it behaves as follows:
  • If you disable "Turn on Script Execution", scripts do not run. This is equivalent to the "Restricted" execution policy.
  • If you enable "Turn on Script Execution", you can select an execution policy. The Group Policy options map to execution policy settings as follows (Group Policy option and execution policy, respectively):
    • Allow all scripts: Unrestricted
    • Allow local scripts and remote signed scripts: RemoteSigned
    • Allow only signed scripts: AllSigned
  • If "Turn on Script Execution" is not configured, it has no effect. The execution policy set in Windows PowerShell is effective.
The GPO setting lives under the following paths in the Group Policy editor (for the .adm and .admx templates, respectively):
  • .adm: Administrative Templates\Windows Components\Windows PowerShell
  • .admx: Administrative Templates\Classic Administrative Templates\Windows Components\Windows PowerShell
Entries set in the Computer configuration node take precedence over the user configuration node.

For reference, here is the order of precedence to use when trying to evaluate execution policies:
  • Group Policy: Computer Configuration
  • Group Policy: User Configuration
  • Execution Policy: Process (or PowerShell.exe -ExecutionPolicy)
  • Execution Policy: CurrentUser
  • Execution Policy: LocalMachine
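The Process entry above can also be supplied when you launch PowerShell itself, which is handy for one-off runs. A sketch (the script name is just an example):

```powershell
# Launch a new PowerShell process whose Process-scope policy is Bypass;
# nothing is written to the registry and the setting dies with the process.
powershell.exe -ExecutionPolicy Bypass -File .\MyScript.ps1
```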
Special attention must be paid when dealing with signed scripts. If you use RemoteSigned for your execution policy, PowerShell will not run unsigned scripts downloaded from the Internet. This includes scripts that arrive via email and instant messaging. PowerShell v3 gives you the ability to use the -Stream parameter (on cmdlets like Get-Item) to detect files that are blocked because they were downloaded from the Internet. To work with these, use the Unblock-File cmdlet, also new in v3, to remove the limitation from these files.
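As a sketch (the file name here is just an example), you can inspect the marker that makes PowerShell treat a file as "from the Internet" and then remove it:

```powershell
# Downloaded files carry a Zone.Identifier alternate data stream; the
# -Stream parameter (new in v3) exposes it.
Get-Item -Path .\downloaded.ps1 -Stream Zone.Identifier -ErrorAction SilentlyContinue

# Remove the block so RemoteSigned will run the script:
Unblock-File -Path .\downloaded.ps1
```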

PowerShell v3 Function: Get-UACControlSettings

Another of my current project functions. These settings correlate with the slider positions under Control Panel | User Accounts | Change User Account Control Settings. Below is the basic function to read the settings:
function Get-UACControlSettings
{
    [CmdletBinding()]
    param()

    # Get-TimeStamp is a helper function from the same project.
    $regPath = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System'
    $ConsentPromptBehaviorAdmin = (Get-ItemProperty -Path $regPath -Name ConsentPromptBehaviorAdmin).ConsentPromptBehaviorAdmin
    $PromptOnSecureDesktop = (Get-ItemProperty -Path $regPath -Name PromptOnSecureDesktop).PromptOnSecureDesktop

    # ConsentPromptBehaviorAdmin = 0 / PromptOnSecureDesktop = 0 - Never
    if (($ConsentPromptBehaviorAdmin -eq 0) -and ($PromptOnSecureDesktop -eq 0))
    {
        Write-Verbose "$(Get-TimeStamp): Never notify."
        return 'Never'
    }

    # ConsentPromptBehaviorAdmin = 3 / PromptOnSecureDesktop = 0 - Notify, do not dim
    if (($ConsentPromptBehaviorAdmin -eq 3) -and ($PromptOnSecureDesktop -eq 0))
    {
        Write-Verbose "$(Get-TimeStamp): Notify only when programs try to make changes to computer (do not dim desktop)."
        return 'Notify - do not dim'
    }

    # ConsentPromptBehaviorAdmin = 3 / PromptOnSecureDesktop = 1 - Default
    if (($ConsentPromptBehaviorAdmin -eq 3) -and ($PromptOnSecureDesktop -eq 1))
    {
        Write-Verbose "$(Get-TimeStamp): Default - Notify only when programs try to make changes to computer."
        return 'Default'
    }

    # ConsentPromptBehaviorAdmin = 2 / PromptOnSecureDesktop = 1 - Always
    if (($ConsentPromptBehaviorAdmin -eq 2) -and ($PromptOnSecureDesktop -eq 1))
    {
        Write-Verbose "$(Get-TimeStamp): Always notify."
        return 'Always'
    }
}

The reusable holdout: Preserving validity in adaptive data analysis



Machine learning and statistical analysis play an important role at the forefront of scientific and technological progress. But with all data analysis, there is a danger that findings observed in a particular sample do not generalize to the underlying population from which the data were drawn. A popular XKCD cartoon illustrates that if you test sufficiently many different colors of jelly beans for correlation with acne, you will eventually find one color that correlates with acne at a p-value below the infamous 0.05 significance level.
Image credit: XKCD
Unfortunately, the problem of false discovery is even more delicate than the cartoon suggests. Correcting reported p-values for a fixed number of multiple tests is a fairly well understood topic in statistics. A simple approach is to multiply each p-value by the number of tests, but there are more sophisticated tools. However, almost all existing approaches to ensuring the validity of statistical inferences assume that the analyst performs a fixed procedure chosen before the data are examined. For example, “test all 20 flavors of jelly beans”. In practice, however, the analyst is informed by data exploration, as well as the results of previous analyses. How did the scientist choose to study acne and jelly beans in the first place? Often such choices are influenced by previous interactions with the same data. This adaptive behavior of the analyst leads to an increased risk of spurious discoveries that are neither prevented nor detected by standard approaches. Each adaptive choice the analyst makes multiplies the number of possible analyses that could possibly follow; it is often difficult or impossible to describe and analyze the exact experimental setup ahead of time.
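The jelly-bean arithmetic is easy to check: under the null hypothesis a p-value is uniform on [0, 1], so with 20 independent tests the chance of at least one "discovery" at the 0.05 level is 1 - 0.95^20, about 64%. A small simulation in plain Python (no statistics library needed):

```python
import random

random.seed(1)

# Under the null, each test's p-value is uniform on [0, 1]. Count how
# often at least one of 20 "jelly-bean colors" clears the 0.05 bar.
trials = 10_000
hits = sum(
    any(random.random() < 0.05 for _ in range(20))
    for _ in range(trials)
)
print(hits / trials)  # roughly 1 - 0.95**20, i.e. about 0.64
```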

In The Reusable Holdout: Preserving Validity in Adaptive Data Analysis, a joint work with Cynthia Dwork (Microsoft Research), Vitaly Feldman (IBM Almaden Research Center), Toniann Pitassi (University of Toronto), Omer Reingold (Samsung Research America) and Aaron Roth (University of Pennsylvania), to appear in Science tomorrow, we present a new methodology for navigating the challenges of adaptivity. A central application of our general approach is the reusable holdout mechanism that allows the analyst to safely validate the results of many adaptively chosen analyses without the need to collect costly fresh data each time.

The curse of adaptivity

A beautiful example of how false discovery arises as a result of adaptivity is Freedman’s paradox. Suppose that we want to build a model that explains “systolic blood pressure” in terms of hundreds of variables quantifying the intake of various kinds of food. In order to reduce the number of variables and simplify our task, we first select some promising looking variables, for example, those that have a positive correlation with the response variable (systolic blood pressure). We then fit a linear regression model on the selected variables. To measure the goodness of our model fit, we crank out a standard F-test from our favorite statistics textbook and report the resulting p-value.
Inference after selection: We first select a subset of the variables based on a data-dependent criterion and then fit a linear model on the selected variables.
Freedman showed that the reported p-value is highly misleading - even if the data were completely random with no correlation whatsoever between the response variable and the data points, we’d likely observe a significant p-value! The bias stems from the fact that we selected a subset of the variables adaptively based on the data, but we never account for this fact. There is a huge number of possible subsets of variables that we selected from. The mere fact that we chose one test over the other by peeking at the data creates a selection bias that invalidates the assumptions underlying the F-test.

Freedman’s paradox bears an important lesson. Significance levels of standard procedures do not capture the vast number of analyses one can choose to carry out or to omit. For this reason, adaptivity is one of the primary explanations of why research findings are frequently false, as was argued by Gelman and Loken, who aptly refer to adaptivity as the “garden of forking paths”.

Machine learning competitions and holdout sets

Adaptivity is not just an issue with p-values in the empirical sciences. It affects other domains of data science just as well. Machine learning competitions are a perfect example. Competitions have become an extremely popular format for solving prediction and classification problems of all sorts.

Each team in the competition has full access to a publicly available training set which they use to build a predictive model for a certain task such as image classification. Competitors can repeatedly submit a model and see how the model performs on a fixed holdout data set not available to them. The central component of any competition is the public leaderboard which ranks all teams according to the prediction accuracy of their best model so far on the holdout. Every time a team makes a submission they observe the score of their model on the same holdout data. This methodology is inspired by the classic holdout method for validating the performance of a predictive model.
Ideally, the holdout score gives an accurate estimate of the true performance of the model on the underlying distribution from which the data were drawn. However, this is only the case when the model is independent of the holdout data! In contrast, in a competition the model generally incorporates previously observed feedback from the holdout set. Competitors work adaptively and iteratively with the feedback they receive. An improved score for one submission might convince the team to tweak their current approach, while a lower score might cause them to try out a different strategy. But the moment a team modifies their model based on a previously observed holdout score, they create a dependency between the model and the holdout data that invalidates the assumption of the classic holdout method. As a result, competitors may begin to overfit to the holdout data that supports the leaderboard. This means that their score on the public leaderboard continues to improve, while the true performance of the model does not. In fact, unreliable leaderboards are a widely observed phenomenon in machine learning competitions.

Reusable holdout sets

A standard proposal for coping with adaptivity is simply to discourage it. In the empirical sciences, this proposal is known as pre-registration and requires the researcher to specify the exact experimental setup ahead of time. While possible in some simple cases, it is in general too restrictive as it runs counter to today’s complex data analysis workflows.

Rather than limiting the analyst, our approach provides means of reliably verifying the results of an arbitrary adaptive data analysis. The key tool for doing so is what we call the reusable holdout method. As with the classic holdout method discussed above, the analyst is given unfettered access to the training data. What changes is that there is a new algorithm in charge of evaluating statistics on the holdout set. This algorithm ensures that the holdout set maintains the essential guarantees of fresh data over the course of many estimation steps.
The limit of the method is determined by the size of the holdout set - the number of times that the holdout set may be used grows roughly as the square of the number of collected data points in the holdout, as our theory shows.

Armed with the reusable holdout, the analyst is free to explore the training data and verify tentative conclusions on the holdout set. It is now entirely safe to use any information provided by the holdout algorithm in the choice of new analyses to carry out, or the tweaking of existing models and parameters.
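The mechanism the paper instantiates, called Thresholdout, is simple to sketch. The function and parameter values below are illustrative, not the paper's exact constants: the holdout estimate is consulted, and returned with noise, only when it disagrees with the training estimate by more than a noisy threshold.

```python
import random

def thresholdout(train_vals, holdout_vals, threshold=0.04, sigma=0.01):
    """One Thresholdout query (illustrative sketch).

    train_vals / holdout_vals: per-sample values of the statistic being
    estimated, e.g. 0/1 correctness of the current model on each point.
    """
    train_mean = sum(train_vals) / len(train_vals)
    holdout_mean = sum(holdout_vals) / len(holdout_vals)
    # If the training estimate already agrees with the holdout (up to a
    # noisy threshold), answer from the training set alone: the holdout
    # leaks no information in this case.
    if abs(train_mean - holdout_mean) <= threshold + random.gauss(0, sigma):
        return train_mean
    # Otherwise answer from the holdout, adding noise so the analyst
    # cannot overfit to its exact contents.
    return holdout_mean + random.gauss(0, sigma)
```

When the analyst has not overfit, train and holdout agree and the training estimate is returned; only genuine disagreements spend the holdout's budget.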

A general methodology

The reusable holdout is only one instance of a broader methodology that is, perhaps surprisingly, based on differential privacy—a notion of privacy preservation in data analysis. At its core, differential privacy is a notion of stability requiring that any single sample should not influence the outcome of the analysis significantly.
Example of a stable learning algorithm: Deletion of any single data point does not affect the accuracy of the classifier much.
A beautiful line of work in machine learning shows that various notions of stability imply generalization. That is, any sample estimate computed by a stable algorithm (such as the prediction accuracy of a model on a sample) must be close to what we would observe on fresh data.

What sets differential privacy apart from other stability notions is that it is preserved by adaptive composition. Combining multiple algorithms that each preserve differential privacy yields a new algorithm that also satisfies differential privacy albeit at some quantitative loss in the stability guarantee. This is true even if the output of one algorithm influences the choice of the next. This strong adaptive composition property is what makes differential privacy an excellent stability notion for adaptive data analysis.

In a nutshell, the reusable holdout mechanism is simply this: access the holdout set only through a suitable differentially private algorithm. It is important to note, however, that the user does not need to understand differential privacy to use our method. The user interface of the reusable holdout is the same as that of the widely used classical method.

Reliable benchmarks

A closely related work with Avrim Blum dives deeper into the problem of maintaining a reliable leaderboard in machine learning competitions (see this blog post for more background). While the reusable holdout could directly be used for this purpose, it turns out that a variant of the reusable holdout, which we call the Ladder algorithm, provides even better accuracy.

This method is not just useful for machine learning competitions, since there are many problems that are roughly equivalent to that of maintaining an accurate leaderboard in a competition. Consider, for example, a performance benchmark that a company uses to test improvements to a system internally before deploying them in a production system. As the benchmark data set is used repeatedly and adaptively for tasks such as model selection, hyper-parameter search and testing, there is a danger that eventually the benchmark becomes unreliable.

Conclusion

Modern data analysis is inherently an adaptive process. Attempts to limit what data scientists will do in practice are ill-fated. Instead we should create tools that respect the usual workflow of data science while at the same time increasing the reliability of data driven insights. It is our goal to continue exploring techniques that can help to create more reliable validation techniques and benchmarks that track true performance more accurately than existing methods.

Thursday, September 1, 2016

Skill maps, analytics and more with Google’s Course Builder 1.8



Over the past couple of years, Google’s Course Builder has been used to create and deliver hundreds of online courses on a variety of subjects (from sustainable energy to comic books), making learning more scalable and accessible through open source technology. With the help of Course Builder, over a million students of all ages have learned something new.

Today, we’re increasing our commitment to Course Builder by bringing rich, new functionality to the platform with a new release. Of course, we will also continue to work with edX and others to contribute to the entire ecosystem.

This new version enables instructors and students to understand prerequisites and skills explicitly, introduces several improvements to the instructor experience, and even allows you to export data to Google BigQuery for in depth analysis.
  • Drag and drop, simplified tabs, and student feedback
We’ve made major enhancements to the instructor interface, such as simplifying the tabs and clarifying which part of the page you’re editing, so you can spend more time teaching and less time configuring. You can also structure your course on the fly by dragging and dropping elements directly in the outline.

Additionally, we’ve added the option to include a feedback box at the bottom of each lesson, making it easy for your students to tell you their thoughts (though we can’t promise you’ll always enjoy reading them).
  • Skill Mapping
You can now define prerequisites and skills learned for each lesson. For instance, in a course about arithmetic, addition might be a prerequisite for the lesson on multiplying numbers, while multiplication is a skill learned. Once an instructor has defined the skill relationships, they will have a consolidated view of all their skills and the lessons they appear in, such as this list for Power Searching with Google:
Instructors can then enable a skills widget that shows at the top of each lesson and which lets students see exactly what they should know before and after completing a lesson. Below are the prerequisites and goals for the Thinking More Deeply About Your Search lesson. A student can easily see what they should know beforehand and which lessons to explore next to learn more.
Skill maps help a student better understand which content is right for them. And, they lay the groundwork for our future forays into adaptive and personalized learning. Learn more about Course Builder skill maps in this video.
  • Analytics through BigQuery
One of the core tenets of Course Builder is that quality online learning requires a feedback loop between instructor and student, which is why we’ve always had a focus on providing rich analytical information about a course. But no matter how complete, sometimes the built-in reports just aren’t enough. So Course Builder now includes a pipeline to Google BigQuery, allowing course owners to issue super-fast queries in a SQL-like syntax using the processing power of Google’s infrastructure. This allows you to slice and dice the data in an infinite number of ways, giving you just the information you need to help your students and optimize your course. Watch these videos on configuring and sending data.

To get started with your own course, follow these simple instructions. Please let us know how you use these new features and what you’d like to see in Course Builder next. Need some inspiration? Check out our list of courses (and tell us when you launch yours).

Keep on learning!

Security Tips While Logging In, by Sanjit Patel


Sanjit Patel (Blog Owner)

These are security tips for logging in to any website. Always remember the following things when you log in to any site.

1) When to log in

Always check for "https://" at the start of the address before you log in. On any login page, such as Facebook or any bank login, you will find "https://" along with a padlock icon in the address bar.

2) When NOT to log in

Sometimes you will find "https://" but the browser shows the padlock as broken or marks the connection as unsecure. Be careful about logging in there; it may or may not be secure.

**************************************************
Some other security tips to keep yourself secure:

1) Never use a public computer to log in to your bank account or any other personal account.
2) After logging in, close the webpage or tab when you are done.
3) Use a cleaner for your browser (example: CCleaner).
4) Clear the browser cache regularly.
5) Never respond to emails that request personal information.
6) Always check the URL and the security certificate.
7) Avoid using cyber cafes to access your online accounts, as they may be infested with viruses, Trojans or spyware that can track your activity or, worse, compromise your security.
8) Keep your computer secure by installing and continuously updating antivirus software.
9) Keep your passwords top secret and change them periodically. Do NOT disclose your user ID and/or passwords to any person - not even bank staff - either intentionally or otherwise.


Expression Web 4.0 Tutorials from Install to Publish and More Ebook

Table of Contents
  • Expression Web 4.0
  • User Interface - Changes to the User Interface
  • Installing Expression Studio 4
  • Setting Up Expression Web 4.0
  • Create New Website in Expression Web 4.0
  • Create a Blank Web Page
  • Create a Webpage Layout in Expression Web
  • Adding Horizontal Top Navigation to Webpage Layout
  • Adding Vertical Navigation to Webpage Layout
  • Validating Your Pages
  • Creating Your Dynamic Web Template
  • What Is Search Engine Optimization - SEO?
  • Using the Expression Web SEO Checker and Report
  • Working with Images in Expression Web 
  • Working with Hyperlinks
  • Publishing Your Web Site
  • How to back up your local website on your hard drive
  • Expression Web 4.0 Add-ins
  • Learning HTML - HyperText Markup Language
  • CSS Basics
  • How to Copy and Paste Text so the Code is Clean
  • Create and Style a Data Table

Expression Web 4.0 Tutorials from Install to Publish and More




SHARE BY GK
Computer Knowledge

ECC vs Non-ECC Memory: What’s the Difference?



This video from Crucial.com highlights the differences between ECC and Non-ECC computer memory.  For more information, go to http://www.crucial.com/

