Thursday, October 22, 2015

async and await call

from http://blog.stephencleary.com/2013/11/taskrun-etiquette-examples-dont-use.html

Last time we looked at using Task.Run for the wrong thing (code that is not CPU-bound). So let’s move on to the proper use case of Task.Run: CPU-bound code. We start off with some existing code, which synchronously does some heavy calculations.
class MyService
{
  public int CalculateMandelbrot()
  {
    // Tons of work to do in here!
    for (int i = 0; i != 10000000; ++i)
      ;
    return 42;
  }
}

...

private void MyButton_Click(object sender, EventArgs e)
{
  // Blocks UI thread! :(
  myService.CalculateMandelbrot();
}
Now, we want to use this from a UI thread, but this method will block our thread. This is a problem that should be solved using Task.Run. Doing these calculations is a CPU-bound operation.
// Warning: bad code!
class MyService
{
  public Task<int> CalculateMandelbrotAsync()
  {
    return Task.Run(() =>
    {
      // Tons of work to do in here!
      for (int i = 0; i != 10000000; ++i)
        ;
      return 42;
    });
  }
}

...

private async void MyButton_Click(object sender, EventArgs e)
{
  // Does not block UI thread! Yay!
  await myService.CalculateMandelbrotAsync();
}
At first glance, it may look like this solves the problem. And it does solve this problem, but it does not do it in the best way.
Let’s say that this service is a generic dll that can be used inside any .NET application. It has an API, and APIs are all about etiquette.
How would an ASP.NET application react to this change?
Let’s introduce a simple ASP.NET MVC controller that returns a view using the (original, synchronous) service.
class MyService
{
  public int CalculateMandelbrot()
  {
    // Tons of work to do in here!
    for (int i = 0; i != 10000000; ++i)
      ;
    return 42;
  }
}

...

public class MandelbrotController: Controller
{
  public ActionResult Index()
  {
    var result = myService.CalculateMandelbrot();
    return View(result);
  }
}
So far, so good. When a request comes in, the controller uses the service to (synchronously) calculate the view data. A single request thread is used the entire time during that calculation.
But the desktop app required a change in the service. It’s now async, which is “no problem” because ASP.NET MVC naturally supports asynchronous actions.
// Warning: bad code!
class MyService
{
  public Task<int> CalculateMandelbrotAsync()
  {
    return Task.Run(() =>
    {
      // Tons of work to do in here!
      for (int i = 0; i != 10000000; ++i)
        ;
      return 42;
    });
  }
}

...

public class MandelbrotController: Controller
{
  public async Task<ActionResult> IndexAsync()
  {
    var result = await myService.CalculateMandelbrotAsync();
    return View(result);
  }
}
And when we do testing, it works! Unfortunately, this change introduced a performance problem.
With the original (synchronous) code, only one thread was used to process the request, from beginning to end. That’s a heavily-optimized ASP.NET scenario. With this async code using Task.Run, instead of a single request thread, this is what happens:
  • The request starts processing on an ASP.NET thread.
  • Task.Run starts a task on the thread pool to do the calculations. The ASP.NET thread pool has to deal with (unexpectedly) losing one of its threads for the duration of this request.
  • The original request thread is returned to the ASP.NET thread pool.
  • When the calculation is complete, that thread completes the request and is returned to the ASP.NET thread pool. The ASP.NET thread pool has to deal with (unexpectedly) getting another thread.
This will work correctly, but it’s not at all efficient.
There are (at least) four efficiency problems introduced as soon as you use await with Task.Run in ASP.NET:
  • Extra (unnecessary) thread switching to the Task.Run thread pool thread. Similarly, when that thread finishes the request, it has to enter the request context (which is not an actual thread switch but does have overhead).
  • Extra (unnecessary) garbage is created. Asynchronous programming is a tradeoff: you get increased responsiveness at the expense of higher memory usage. In this case, you end up creating more garbage for the asynchronous operations that is totally unnecessary.
  • The ASP.NET thread pool heuristics are thrown off by Task.Run “unexpectedly” borrowing a thread pool thread. I don’t have a lot of experience here, but my gut instinct tells me that the heuristics should recover well if the unexpected task is really short and would not handle it as elegantly if the unexpected task lasts more than two seconds.
  • ASP.NET is not able to terminate the request early, i.e., if the client disconnects or the request times out. In the synchronous case, ASP.NET knew the request thread and could abort it. In the asynchronous case, ASP.NET is not aware that the secondary thread pool thread is “for” that request. It is possible to fix this by using cancellation tokens, but that’s outside the scope of this blog post.
If you have multiple calls to Task.Run, then the performance problems are compounded. On a busy server, this kind of implementation can kill scalability.
That’s why one of the principles of ASP.NET is to avoid using thread pool threads (except for the request thread that ASP.NET gives you, of course). More to the point, this means that ASP.NET applications should avoid Task.Run.
Whew! OK, so now we know what the problem is with that implementation. The plain fact is that ASP.NET prefers synchronous methods if the operation is CPU-bound. And this is also true for other scenarios: Console applications, background threads in desktop applications, etc. In fact, the only place we really need an asynchronous calculation is when we call it from the UI thread.
But watch out! There’s another pitfall just ahead…

Using Task.Run for Asynchronous Wrappers

Let’s continue the “Mandelbrot” example. We’ve learned that some clients prefer asynchronous APIs for CPU-bound code and others prefer synchronous APIs for CPU-bound code.
So, let’s be good API citizens (“APIs are all about etiquette”) and keep the original synchronous version along with the new asynchronous version. That way there are no breaking changes, right? It’s simple enough, and we can even implement it easily so that there’s no code duplication!
// Warning: bad code!
class MyService
{
  public int CalculateMandelbrot()
  {
    // Tons of work to do in here!
    for (int i = 0; i != 10000000; ++i)
      ;
    return 42;
  }

  public Task<int> CalculateMandelbrotAsync()
  {
    return Task.Run(() => CalculateMandelbrot());
  }
}
Sweet. The UI app has its nice asynchronous method, and the ASP.NET app has its original synchronous method. Easy! And there are many other examples where synchronous and asynchronous APIs exist side-by-side, so developers are already used to this! But the fact is that using Task.Run for asynchronous wrappers is a code smell.
The problem with this approach is what is implied by this API design. Consider all the examples where synchronous and asynchronous APIs exist side-by-side, e.g., Entity Framework 6, or the WebClient class. Notice anything? They’re all naturally asynchronous! Not a single one of them is CPU-bound.
When a developer sees two methods in an API Method and MethodAsync, the convention is that they represent a naturally-asynchronous operation. In other words, the developer expects that MethodAsync is the “natural” implementation and that Method is essentially a synchronous (blocking) equivalent of that operation. That API implies that Method will at some point have the calling thread enter a wait state as it blocks for the naturally-asynchronous operation to complete.
Let’s make this a bit more practical. When a new ASP.NET developer approaches our service API, they’ll see CalculateMandelbrot and CalculateMandelbrotAsync (and let’s pretend that our method names are not so obviously CPU-bound). If they’re familiar with asynchronous APIs at all, they’ll assume that this calculation is naturally asynchronous. And on ASP.NET, you should use naturally-asynchronous methods. Therefore, they will choose what they think is the naturally-asynchronous CalculateMandelbrotAsync method and end up inheriting the performance problems discussed earlier.
CalculateMandelbrotAsync is what I call “fake-asynchronous” because it’s just a thread pool wrapper around a synchronous operation. But when developers see that API, they assume that it is a naturally-asynchronous operation.
This is just a brief description and I only covered one facet of this problem. Stephen Toub has an excellent blog post explaining in detail why you should not write asynchronous wrappers for synchronous methods.

OK, enough about the wrong solutions. How do we fix this the right way?

Back up to the original problem for a moment. What is the problem? The UI thread is blocked. How did we solve it? By changing the service. Who needs the asynchronous API? Only the UI thread. Sounds like we just seriously violated the “Separation of Concerns” principle.
The key here is that the solution does not belong in the service. It belongs in the UI layer itself. Let the UI layer solve its own problems and leave the service out of it.
class MyService
{
  public int CalculateMandelbrot()
  {
    // Tons of work to do in here!
    for (int i = 0; i != 10000000; ++i)
      ;
    return 42;
  }
}

...

private async void MyButton_Click(object sender, EventArgs e)
{
  await Task.Run(() => myService.CalculateMandelbrot());
}
Now the service API is clean (it exposes a synchronous API for a naturally-synchronous, CPU-bound method), it works for all consumers (e.g., ASP.NET), and the UI layer is responsible for not blocking the UI thread.
Conclusion: do not use Task.Run in the implementation of the method; instead, use Task.Run to call the method.

Thursday, April 30, 2015

how to find sql job reference

quote from
http://stackoverflow.com/questions/18105547/how-to-find-all-sql-agent-jobs-that-call-a-given-stored-proc



SELECT
    [sJOB].[job_id] AS [JobID]
    , [sJOB].[name] AS [JobName]
    , [sJSTP].[step_uid] AS [StepID]
    , [sJSTP].[step_id] AS [StepNo]
    , [sJSTP].[step_name] AS [StepName]
    , CASE [sJSTP].[subsystem]
        WHEN 'ActiveScripting' THEN 'ActiveX Script'
        WHEN 'CmdExec' THEN 'Operating system (CmdExec)'
        WHEN 'PowerShell' THEN 'PowerShell'
        WHEN 'Distribution' THEN 'Replication Distributor'
        WHEN 'Merge' THEN 'Replication Merge'
        WHEN 'QueueReader' THEN 'Replication Queue Reader'
        WHEN 'Snapshot' THEN 'Replication Snapshot'
        WHEN 'LogReader' THEN 'Replication Transaction-Log Reader'
        WHEN 'ANALYSISCOMMAND' THEN 'SQL Server Analysis Services Command'
        WHEN 'ANALYSISQUERY' THEN 'SQL Server Analysis Services Query'
        WHEN 'SSIS' THEN 'SQL Server Integration Services Package'
        WHEN 'TSQL' THEN 'Transact-SQL script (T-SQL)'
        ELSE sJSTP.subsystem
      END AS [StepType]
    , [sPROX].[name] AS [RunAs]
    , [sJSTP].[database_name] AS [Database]
    , [sJSTP].[command] AS [ExecutableCommand]
    , CASE [sJSTP].[on_success_action]
        WHEN 1 THEN 'Quit the job reporting success'
        WHEN 2 THEN 'Quit the job reporting failure'
        WHEN 3 THEN 'Go to the next step'
        WHEN 4 THEN 'Go to Step: '
                    + QUOTENAME(CAST([sJSTP].[on_success_step_id] AS VARCHAR(3)))
                    + ' '
                    + [sOSSTP].[step_name]
      END AS [OnSuccessAction]
    , [sJSTP].[retry_attempts] AS [RetryAttempts]
    , [sJSTP].[retry_interval] AS [RetryInterval (Minutes)]
    , CASE [sJSTP].[on_fail_action]
        WHEN 1 THEN 'Quit the job reporting success'
        WHEN 2 THEN 'Quit the job reporting failure'
        WHEN 3 THEN 'Go to the next step'
        WHEN 4 THEN 'Go to Step: '
                    + QUOTENAME(CAST([sJSTP].[on_fail_step_id] AS VARCHAR(3)))
                    + ' '
                    + [sOFSTP].[step_name]
      END AS [OnFailureAction]
FROM
    [msdb].[dbo].[sysjobsteps] AS [sJSTP]
    INNER JOIN [msdb].[dbo].[sysjobs] AS [sJOB]
        ON [sJSTP].[job_id] = [sJOB].[job_id]
    LEFT JOIN [msdb].[dbo].[sysjobsteps] AS [sOSSTP]
        ON [sJSTP].[job_id] = [sOSSTP].[job_id]
        AND [sJSTP].[on_success_step_id] = [sOSSTP].[step_id]
    LEFT JOIN [msdb].[dbo].[sysjobsteps] AS [sOFSTP]
        ON [sJSTP].[job_id] = [sOFSTP].[job_id]
        AND [sJSTP].[on_fail_step_id] = [sOFSTP].[step_id]
    LEFT JOIN [msdb].[dbo].[sysproxies] AS [sPROX]
        ON [sJSTP].[proxy_id] = [sPROX].[proxy_id]
WHERE [sJSTP].[command] LIKE '%search_text%' -- e.g. the name of the stored procedure you are looking for
ORDER BY [JobName], [StepNo]

Thursday, April 9, 2015

CANNOT DROP SERVER ‘REPL_DISTRIBUTOR’ BECAUSE IT IS USED AS A DISTRIBUTOR

quote from


We just went through our annual IG audits and several servers I manage were shown to have vulnerabilities. After further inspection into the issue it appears I was hit with “DBMS remote system credential use and access” and “Replication use and security”. Since replication was a requirement initially and has since changed I am able to clear all the replication functionality off the server.

When trying to drop the repl_distributor “linked” server by right-clicking on it and choosing delete, the following error was produced: the server cannot be dropped because it is used as a Distributor in replication.
Even though I figured this would fail as well, I thought I would script out the linked server as a DROP statement and run it. I received the same error.
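Scripting it out gives something along these lines (the exact statement was shown as an image in the original post, so this is a sketch using sp_dropserver):

USE master;
GO
-- Attempt to drop the linked server entry (this fails while it is still flagged as a distributor)
EXEC sp_dropserver @server = N'repl_distributor', @droplogins = 'droplogins';
GO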
A little more digging and querying the sys.servers table shows that the “is_distributor” column is set to 1, indicating the server is still flagged as being used in replication, which is the source of the error. Keep in mind I have already deleted the publication and subscription (on a different server). These are just the remnants of the distribution and need to be cleaned up. So let's try to update the table and set the value to 0.
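A query along these lines (the original post showed it as a screenshot) exposes the flag:

SELECT name, is_distributor
FROM sys.servers
WHERE name = N'repl_distributor';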
Oops. That is not allowed; ad hoc updates to system catalogs such as sys.servers are not permitted. Reading more on Remote Servers and Linked Servers, I found the stored procedure sp_serveroption. It sets server options for remote servers and linked servers and can be used here. You can use the following code.
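The code itself was an image in the original post; based on the surrounding text it should look roughly like this, turning the distributor option off for the repl_distributor entry:

EXEC sp_serveroption
    @server = N'repl_distributor',
    @optname = 'dist',
    @optvalue = 'false';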
Re-running the sys.servers query above confirms the value was updated correctly.
Now we can drop the linked server with the code we scripted out originally.
And confirm the linked server is gone.
Here is a link for more information on sp_serveroption.
Just remember, when requirements change and you are cleaning up the server, to remove all the aspects of the functionality you are removing.
Cheers!

Wednesday, March 25, 2015

how to change account for AppFabricCachingService

quote from http://blog.blksthl.com/2012/10/26/how-to-change-the-distributed-cache-service-managed-account/


# Get the farm and locate the Distributed Cache (AppFabric) service instance
$farm = Get-SPFarm
$cacheService = $farm.Services | where {$_.Name -eq "AppFabricCachingService"}
# Look up the managed account the service should run as (replace <domain\user>)
$accnt = Get-SPManagedAccount -Identity "<domain\user>"
$cacheService.ProcessIdentity.CurrentIdentityType = "SpecificUser"
$cacheService.ProcessIdentity.ManagedAccount = $accnt
# Apply and deploy the new identity
$cacheService.ProcessIdentity.Update()
$cacheService.ProcessIdentity.Deploy()

Thursday, March 12, 2015

Shrinking the Transaction Log files and checkpoint

http://blogs.msdn.com/b/suhde/archive/2009/03/16/shrinking-the-transaction-log-files-in-sql-server.aspx

Shrinking the Transaction Log files in SQL Server

This blog post is mainly to address concerns about growing log file space and how to go about shrinking the file. I am adding preliminary content here for now, but I will update this content as and when I get time and new content to add.

Question: What is a log file, and why is it required?
Answer: A log file is a separate file that stores the various transactions hitting the database. Each database has its own log file (one or more), and these log files are required to keep a history of the transactions that hit the database, so that the database can be recovered to a state of both physical and logical consistency in case of failure.
When a transaction is fired against the database, the transaction is first recorded in the log file, and then the actual data pages that need to be modified are brought into memory. Later, these data pages in memory are changed. At certain intervals, a background process known as a CHECKPOINT runs, which flushes the dirty pages (modified pages in memory) to disk.

Question: How is a log file accessed?
Answer: A log file is always accessed sequentially. So as to be able to properly utilize log space, a log file is divided into logical segments called VLFs, or Virtual Log Files. Each log file is divided into multiple VLFs. Now, when there is a new database, the log file, for all practical purposes, is empty, and so are all VLFs. Writing starts from the first VLF, and when this VLF is full, writing continues to the second, and so on and so forth until the last VLF is reached. Now, when the last VLF is full, before growing the log file, the log writer determines if any of the preceding VLFs have been truncated (marked as inactive). In case it finds any such VLF, writing starts in the first inactive VLF in the chain.
For example, if a log file has 10 VLFs, then after the 10th VLF is full, the log writer will search for any inactive VLF in the chain. Let us assume that it finds VLF 3, 5, 6, 7 as inactive, then it will start writing into VLF 3.
In case no inactive VLF is found, the log writer will trigger a log file growth, thereby creating more VLFs at the expense of disk space.

Question: What does it mean for a VLF to be truncated?
Answer: Most people assume truncation to be analogous to shrinking the file; however, in SQL Server terminology, truncation and shrinking are not the same. Truncation merely means marking a VLF as inactive, whereas shrinking the log file means reducing the number of VLFs in the log file so as to reclaim disk space. There are different conditions under which a VLF is truncated.
If a database is in simple recovery model, a VLF will be truncated only when all the conditions below are fulfilled:
  • All transactions that have begun in the specific VLF have been either committed or rolled back.
  • All transactions that have at least one command written into the specified VLF have either been committed or rolled back. For example, if a transaction started in VLF 5 and has continued through VLF 6 to VLF 7, it has to be either committed or rolled back before VLF 6 can be truncated.
  • A checkpoint has run on the database.
  • All transactions that have at least one command written into the specified VLF have been replicated.
If a database is in bulk-logged recovery model or full recovery model, a VLF will be truncated only when all the conditions below are fulfilled:
  • All transactions that have begun in the specific VLF have been either committed or rolled back.
  • All transactions that have at least one command written into the specified VLF have either been committed or rolled back. For example, if a transaction started in VLF 5 and has continued through VLF 6 to VLF 7, it has to be either committed or rolled back before VLF 6 can be truncated.
  • A checkpoint has run on the database.
  • The VLF in question has been backed up by taking a log backup after the checkpoint has run.
  • All transactions that have at least one command written into the specified VLF have been replicated.


Question: How does shrinking of a log file happen?
Answer: A log file is shrunk when you issue the following command to the SQL Server:
            DBCC SHRINKFILE ('logical file name', targetsize)
When this command is issued, the log file is shrunk from the end till the last used VLF. Hence, if the last VLF is in use, the log file cannot be shrunk.
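If you are not sure of the logical file name to pass to DBCC SHRINKFILE, it can be looked up from sys.database_files while connected to the database in question, for example:
            SELECT name, type_desc, size FROM sys.database_files WHERE type_desc = 'LOG'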

Question: How do I know if a log file can be shrunk?
Answer: To understand whether the log file can be shrunk, you will need to fire the following commands and understand their outputs. The first command that needs to be fired is:
            DBCC SQLPERF(logspace)
This will let us know what percentage of the log file is actually in use. The lower the percentage, the more the file can be shrunk.
Now, run the following command:
            DBCC LOGINFO(DatabaseName)

The output of this command will give you the status of each VLF in the log file. A status of 0 means that the VLF has already been truncated and can be overwritten. A status of 2 means that the VLF is still active and cannot be overwritten. Look at the last VLF: if its status is 2, then the file cannot be shrunk just yet. However, if it is 0, the file can be shrunk back to the last VLF with a status of 2.

Question: Ok, so how do I proceed in case the last VLF has a status of 2?
Answer: First you will need to check the recovery model of the database. The general tab in the database properties dialog box will show the recovery model of the database. If the database is in simple recovery model, then open a new query window, change the database under context to the user database whose log needs to be truncated, and run the command: CHECKPOINT. See if that helps in truncating the last VLF; if not, you can consider trying again after an interval of time. Also, if the database is involved in transactional replication, run the LogReader Agent and keep it running for some amount of time, so that the commands can be replicated.
However, if the database is in the bulk-logged or full recovery model, then apart from running the CHECKPOINT command, take a log backup. See if that helps in truncating the last VLF; if not, you may have to wait for some time before trying again. Also, if the database is involved in transactional replication, run the LogReader Agent and keep it running for some amount of time, so that the commands can be replicated.
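A log backup along these lines does the job (db_name and the backup path are placeholders, matching the style of the script below):

backup log db_name to disk = 'D:\backups\db_name_LOG.trn' with stats = 10, compression, init
go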
-------------------------------------


use master

go

--- first take a FULL backup

backup database db_name to disk = 'D:\backups\db_name_FULL.bak' with stats =10, compression, init

go

use db_name

go

---- manually issue a checkpoint
checkpoint

go
checkpoint -- run twice so the log file wraps around

go

----- now shrink the log file to reclaim disk space

dbcc shrinkfile(db_name_LOG, xx)  -- xx is the target size in MB that you want.
go


Thursday, January 15, 2015

Configure Excel Services in SharePoint 2013

http://prabathf.blogspot.com/2012/12/configure-excel-services-in-sharepoint.html

Configure Excel Services in SharePoint 2013

Finally I got some time to configure my SharePoint 2013 farm. So I thought of using the chance to update my blog too. "Two birds with one stone".
Central Administration > System Settings > Manage services on server
Start "Excel Calculations Services"
Once it's started the next step is to create a new instance of "Excel Service Application" (in one doesnt exist already).
Central Administration > Application Management > Manage service applications and create  a new "Excel Service Application"

You are done with configuring Excel Services. As the next step, you need to configure the trusted locations that Excel Services will load Excel workbooks from.

Central Administration > Application Management > Manage service applications > Excel Service Application (this will change based on how you named the service instance in the previous step) > Trusted File Locations
Fill in the Address with a document library location. I have created a document library called "Excel Documents" in a site called "Team".
In the External Data section, for "Allow External Data", select the "Trusted data connection libraries and embedded" option.

Set both the Automatic refresh and Manual refresh values to 0 (but this is not recommended, as it will consume more resources by refreshing workbook results immediately).
Now that we have a trusted file location, we can publish data-connected workbooks with embedded data connections. But for Excel Services to correctly render data-connected workbooks with external data connections, we must configure a trusted data connection library.


Central Administration > Application Management > Manage service applications > Excel Service Application > Trusted Data Connection Libraries
Make sure you have a library of the "Data Connection Library" type. The name of my data connection library is "Data Connections", which is created in a site called "Team".
Now you have successfully completed the Excel Services configuration.

----------------------------------------------------------------------------------------------

http://sharepointknowledgebase.blogspot.com/2011/12/this-workbook-cannot-be-opened-because.html#.VLg7CivF-4I

This workbook cannot be opened because it is not stored in an Excel Services Application trusted location

One of our users opened a ticket with us mentioning that he is getting an error while editing data in Excel. The error message is below:

This workbook cannot be opened because it is not stored in an Excel Services Application trusted location.
To create an Excel Services Application trusted location, contact your system administrator
Now it's quite obvious that you need to add the document library path to the Excel Services trusted file locations.

Please refer to the following article and your issue will be resolved:
http://sharepointknowledgebase.blogspot.com/2011/12/excel-services-sharepoint-2010-trusted.html

If you have any queries/questions/doubts regarding the above-mentioned information, please let me know. I would be more than happy to help you and resolve your issues. Thank you!

-----------------------------------------------------------------------------------------
http://blogs.technet.com/b/chrad/archive/2010/07/21/tfs-dashboards-in-moss-2010-return-access-denied-for-tfsolapreport-to-end-users.aspx

TFS Dashboards in MOSS 2010 Return Access Denied for TfsOlapReport to End-Users

If you’ve decided to integrate your TFS 2010 infrastructure with Microsoft’s enterprise-ready SharePoint Server 2010 (MOSS), then you are in for quite a bit, both good and potentially bad.  MOSS is an extremely powerful platform that many customers of ours are investing in, and we certainly understand why.  However, the frustration often comes when you have competing heads – the TFS infrastructure “guys” and the SharePoint “gals” – trying to get all the pieces working and playing together well.
In today’s post, I will share some insight that had our TFS consultants scratching their heads and our MOSS folks (me – and can I say I’m limited) spinning around and saying “Just make it work…”  The beauty of working at Microsoft is the ability to find the right people and get the assistance needed to make it work.  This is the reward we get, and it is a reward I’m happy to share with those of you who run into the same thing.
Goal
The goal here is to get TFS integrated fully into MOSS 2010 and, in particular, using Excel Web Services to display critical data in the TFS Dashboards.

Understanding Excel Web Services & TFS Integration

John Socha, a colleague and friend of mine, helped me quickly understand how the TFS Excel reports work.  Excel workbooks have “cached” data embedded within the XLSX files themselves that is accessible via Excel or via Excel Web Services.  By default, Excel Web Services (EWS) utilizes a SharePoint feature called the Secure Store Service (often abbreviated as SSS) to allow individuals who do not have direct access to the data source of the workbook to see the data within EWS.

Access Denied by External Data – TfsOlapReport

As soon as you roll out TFS 2010, the reports start coming in that you are having issues with “reports” showing up on the dashboard for each of your team projects.  TFS will create the dashboard anytime you point it towards the MOSS server, which is helpful, though it doesn’t mean that everything is good to go.
[screenshot: “Access Denied by External Data” error dialog for TfsOlapReport]
As you can see, the user will see the above error report every time they re-attempt to connect to the site and view the data.  If they click OK, you (and the user) are shocked when data is displayed to the end-user like below.
Why is it saying I’ve been denied access?

Excel Workbook Caching versus Excel Web Services

The reason data is shown after clicking OK is that the user has been granted access to the SharePoint site and hence can access the physical Excel workbook.  As mentioned earlier, Excel stores “cached” data from the last time the workbook was refreshed, and this is what is shown to the user.
You get Access Denied because the end-user has been denied access to the data source, not the entire workbook.  EWS attempts to access the data source directly each time the site is opened and an existing connection isn’t open.  To validate whether you have an issue with your SSS, you can do the following-
  1. Open your MOSS site hosting TFS content
  2. Click Excel Reports in the left-hand navigation
  3. Click on the name of the Excel report, such as Burndown
  4. For Data, click Refresh All Connections
This will verify if your SSS is configured to work as expected, and that within the Members for SSS you’ve granted that user permissions.  If you get an access denied, let’s walk through fixing this problem.

Determining EWS Configuration – One Workbook at a Time

The first step is to verify which authentication type your Excel workbook is using.  TFS 2010, out of the box, uses SSS and sets up a Target Application ID called TFS.  This, of course, is configurable, though you need to think about two things -
  1. SSS doesn’t require end-users to have direct access to the TFS cube (TFS_ANALYSIS)
  2. Windows Authentication requires direct access permissions to the TFS cube
SSS is optimal long-term (as put by someone who helped me figure this out) though it requires a bit more configuration than Windows authentication.
To determine which authentication a workbook is using (if necessary), do the following -
  1. Open the workbook in Excel (from SharePoint, not Visual Studio)
  2. Highlight the chart in the report by clicking on it
  3. Under the Data menu item, Connections
  4. Highlight TfsOlapReport, click Properties
  5. Click the Definition tab
  6. For Excel Services, click Authentication Settings…
As you can see, you can select SSS or you can choose Windows Authentication.  In this case, it should be set to the default of SSS – SSS ID:  TFS.  If not, change it to match the above.

Correctly Configuring Secure Store Service (SSS)

If your workbook is configured correctly, and you are still getting access denied, then you will need to use the SharePoint Central Administration Site to correct the problem.  In this section of the post, I will share specifically how I corrected the problem that was occurring throughout our dashboard.
  1. Open the MOSS Central Administration Site
  2. Click Application Management
  3. Under Service Applications, click Manage service applications
  4. In the list of service names, click the Secure Store application
  5. You should see a Target Application ID of TFS (if not, you need to create one, which isn’t the purpose of this article). Click on TFS
  6. On the Target Application Settings page, click Next
  7. On the Field Name page, click Next
  8. On the Administrators & Members page, you will need to enter all potential dashboard viewers into the Members box.
  9. Click OK
After you’ve clicked OK and saved, you can re-test, and users should no longer get Access Denied, provided they are a member of a domain group you listed or you’ve granted them direct access via their user account.

Summary

Doing software development is hard enough; let’s not make it any harder than it has to be.  Enter the handy-dandy capabilities of SharePoint to make flashy, useful dashboards that give insight into how your team is executing and, most importantly, where it is struggling.  Then it doesn’t work.  This is a challenging set of applications coming together as a “mash-up”, where one missed step can cause the whole thing to come tumbling down.  There is documentation out there that can help “lead” you to the problem, but unless you know a) TFS, b) Excel Web Services, and c) SharePoint, your life is probably going to be a bit complicated.  In today’s post, I shared some insight into one problem that many of you might very well have, and hopefully it will make your life much better.
Enjoy!
-Chris