Blazor Server performance issues
It's a Blazor Server project with about 100 .razor pages.
When a change occurs on one of the pages, it takes about 19 seconds to rebuild, restart IIS Express, and refresh the page. It's awful: I make only a minor change in the HTML, but it takes a long time to show the result.
The test results are as follows:

With 100 razor pages and all CSS and JS references:
- build: 10 sec
- refresh page: 19 sec (includes the 10 sec build)

With 90 of the 100 pages removed (10 pages remain):
- build: 3 sec
- refresh page: 12 sec (includes the 3 sec build)

With all CSS and JS references also removed:
- build: 3 sec
- refresh page: 6 sec (includes the 3 sec build)
This is not good at all, because the project is growing and will eventually have about 400 pages, with extra CSS and JS references added as well. At that point, the turnaround time during development will be much longer.
https://github.com/dotnet/aspnetcore/issues/29416
What's the solution? Thanks.
See also questions close to this topic
-
How to efficiently aggregate the same column using different aggregate functions?
Consider the following data:
```python
df = pd.DataFrame({"id": [1, 1, 1, 2, 2], "value": [10, 50, 90, 25, 75]})
df
```

```
   id  value
0   1     10
1   1     50
2   1     90
3   2     25
4   2     75
```
How can one efficiently and elegantly aggregate the column value by id considering multiple aggregate functions on the same column, for instance:
```
    value_min  value_max  value_mean  value_sum  value_max_diff
id
1          10         90          50        150              80
2          25         75          50        100              50
```
One approach is to create multiple pivot tables, one by each aggregate function (built-in or not), and then concatenate the result, such as:
```python
def max_diff(x):
    return np.max(x) - np.min(x)

funcs = [np.min, np.max, np.mean, np.sum, max_diff]
tmp = [
    pd.pivot_table(df, index=["id"], values=["value"], aggfunc={"value": f})
      .rename(columns={"value": f"value_{f.__name__}"})
    for f in funcs
]
pivot = pd.concat(tmp, axis=1)
pivot
```

```
    value_amin  value_amax  value_mean  value_sum  value_max_diff
id
1           10          90          50        150              80
2           25          75          50        100              50
```
However, it seems to me that this approach is not very scalable, considering multiple columns and multiple and even different aggregate functions per column. As Raymond Hettinger says: "There must be a better way!". So, which one would be better?
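One candidate I am aware of is named aggregation with `groupby(...).agg(...)` (available since pandas 0.25), which applies multiple functions to the same column in a single call; a minimal sketch on the data above:

```python
import pandas as pd

df = pd.DataFrame({"id": [1, 1, 1, 2, 2], "value": [10, 50, 90, 25, 75]})

# Named aggregation: each keyword becomes an output column,
# mapped to a (source column, aggregation function) pair.
pivot = df.groupby("id").agg(
    value_min=("value", "min"),
    value_max=("value", "max"),
    value_mean=("value", "mean"),
    value_sum=("value", "sum"),
    value_max_diff=("value", lambda x: x.max() - x.min()),
)
print(pivot)
```

This also extends naturally to several source columns, since each output column names its own source. But is this the recommended way?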
Thanks in advance!
-
Java Streams: Is the complexity of collecting a stream of long same as filtering it based on Set::contains?
I have an application which accepts employee ids as user input and then filters the employee list for matching ids. User input is supposed to be 3-4 ids and employee list is a few thousands.
I came up with the following 2 methods using Streams filter based on performance concerns.
Method1
Motivation here is to not run filter for each employee, rather run it on the requested ids list which is guaranteed to be very short.
```java
private static Set<Long> identifyEmployees(CustomRequest request) {
    List<Long> requestedIds = request.getRequestedIDs();
    if (!requestedIds.isEmpty()) {
        Set<Long> allEmployeeIds = employeeInfoProvider
                .getEmployeeInfoList()        // returns List<EmployeeInfo>
                .stream()
                .map(EmployeeInfo::getEmpId)  // getEmpId() returns a Long
                .collect(Collectors.toSet());
        return requestedIds.stream()
                .filter(allEmployeeIds::contains)
                .collect(Collectors.toSet());
    }
    return Collections.emptySet();
}
```
Method2
Motivation here is to replace collect() in Method1 with a filter as complexity would be same. collect() here would actually be running on a very small number of elements.
```java
private static Set<Long> identifyEmployees(CustomRequest request) {
    Set<Long> requestedIds = request.getRequestedIDs()  // returns List<Long>
            .stream()
            .collect(Collectors.toSet());
    if (!requestedIds.isEmpty()) {
        return employeeInfoProvider
                .getEmployeeInfoList()        // returns List<EmployeeInfo>
                .stream()
                .map(EmployeeInfo::getEmpId)  // getEmpId() returns a Long
                .filter(requestedIds::contains)
                .collect(Collectors.toSet());
    }
    return Collections.emptySet();
}
```
Does Method2 perform as good as Method1? Or does Method1 perform better?
-
NumPy efficiency in dataset preprocessing
I am currently working on a research project relating to the use of neural networks operating on EEG datasets. I am using the BCICIV 2a dataset, which consists of a series of files containing trial data from subjects. Each file contains a set of 25 channels and a very long (~600000 time step) array of signals.

I have been writing code to preprocess this data into something I can pass into the neural network, but have run into some efficiency issues. Currently, my code determines the location in the array of all the trials in a file, then attempts to extract a 3D NumPy array that is stored in another array. When I run this code, however, it is ridiculously slow. I am not very familiar with NumPy; the majority of my experience at this point is in C. My intention was to write the results of the preprocessing to a separate file that can be loaded later to avoid repeating the preprocessing. From a C perspective, all that would be necessary is to move pointers around to format the data appropriately, so I am not sure why NumPy is so slow.

Any suggestions would be very helpful: currently it takes ~2 minutes to extract 1 trial, and with 288 trials in a file and 9 files, this would take much longer than I would like. I am not very comfortable with my knowledge of how to make good use of NumPy's efficiency advantages over generic lists. Thanks!
```python
import glob, os
import numpy as np
import mne

DURATION = 313
XDIM = 7
YDIM = 6
IGNORE = ('EOG-left', 'EOG-central', 'EOG-right')

def getIndex(raw, tagIndex):
    return int(raw.annotations[tagIndex]['onset'] * 250)

def isEvent(raw, tagIndex, events):
    for event in events:
        if raw.annotations[tagIndex]['description'] == event:
            return True
    return False

def getSlice1D(raw, channel, dur, index):
    if type(channel) == int:
        channel = raw.ch_names[channel]
    return raw[channel][0][0][index:index + dur]

def getSliceFull(raw, dur, index):
    trial = np.zeros((XDIM, YDIM, dur))
    for channel in raw.ch_names:
        if not channel in IGNORE:
            x, y = convertIndices(channel)
            trial[x][y] = getSlice1D(raw, channel, dur, index)
    return trial

def convertIndices(channel):
    xDict = {'EEG-Fz':3, 'EEG-0':1, 'EEG-1':2, 'EEG-2':3, 'EEG-3':4, 'EEG-4':5,
             'EEG-5':0, 'EEG-C3':1, 'EEG-6':2, 'EEG-Cz':3, 'EEG-7':4, 'EEG-C4':5,
             'EEG-8':6, 'EEG-9':1, 'EEG-10':2, 'EEG-11':3, 'EEG-12':4, 'EEG-13':5,
             'EEG-14':2, 'EEG-Pz':3, 'EEG-15':4, 'EEG-16':3}
    yDict = {'EEG-Fz':0, 'EEG-0':1, 'EEG-1':1, 'EEG-2':1, 'EEG-3':1, 'EEG-4':1,
             'EEG-5':2, 'EEG-C3':2, 'EEG-6':2, 'EEG-Cz':2, 'EEG-7':2, 'EEG-C4':2,
             'EEG-8':2, 'EEG-9':3, 'EEG-10':3, 'EEG-11':3, 'EEG-12':3, 'EEG-13':3,
             'EEG-14':4, 'EEG-Pz':4, 'EEG-15':4, 'EEG-16':5}
    return xDict[channel], yDict[channel]

data_files = glob.glob('../datasets/BCICIV_2a_gdf/*.gdf')

try:
    raw = mne.io.read_raw_gdf(data_files[0], verbose='ERROR')
except IndexError:
    print("No data files found")

event_times = []
for i in range(len(raw.annotations)):
    if isEvent(raw, i, ('769', '770', '771', '772')):
        event_times.append(getIndex(raw, i))

data = np.empty((len(event_times), XDIM, YDIM, DURATION))
print(len(event_times))
for i, event in enumerate(event_times):
    data[i] = getSliceFull(raw, DURATION, event)
```
EDIT: I wanted to come back and add some more details on the structure of the dataset. There is the 25 x ~600000 array that contains the data, and a much shorter annotation object that includes event tags and relates them to times within the larger array. Specific events indicate a motor imagery cue, which is the trial my network is being trained on; I am attempting to extract a 3D slice that includes the relevant channels, formatted appropriately, with a temporal dimension of 313 timesteps. The annotations give me the relevant timesteps to investigate. The profiling recommended by Ian showed that the main compute time is in the getSlice1D() function, particularly where I index into the raw object. The code that extracts the event times from the annotations is comparatively negligible.
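Since the profiling above points at the repeated `raw[channel][...]` lookups in `getSlice1D()` (each one goes back through the MNE `Raw` object), one plausible fix is to materialize the full signal array once with `raw.get_data()` and then slice the resulting ndarray, which is close to the C-style pointer arithmetic I described. A minimal sketch of that slicing pattern, using a random array as a stand-in for the real recording:

```python
import numpy as np

DURATION = 313

# Stand-in for raw.get_data(): 25 channels x many samples.
# In MNE, calling get_data() once materializes the whole array in memory,
# so every later access is a cheap ndarray slice instead of a Raw lookup.
signals = np.random.rand(25, 10_000)
event_times = [100, 500, 900]  # sample indices of the trial onsets

# Extract all trials in one pass of pure ndarray slicing:
# result shape is (n_trials, n_channels, DURATION).
trials = np.stack([signals[:, t:t + DURATION] for t in event_times])
print(trials.shape)
```

The channel-to-(x, y) remapping could then be applied to the channel axis of this array with fancy indexing, rather than once per channel per trial.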
-
How to display data in a reusable Table component in Blazor
I'm trying to create a reusable MasterTable component in Blazor.
So far, I've defined the MasterTable as
```razor
@using AntDesign
@using MyNamespace.Blazor.ViewModels
@typeparam TItem

<Table TItem="TItem" DataSource="@Data">
    @{
        foreach (var col in Columns)
        {
            <Column Title="@col.Label" @bind-Field="@col.Key" />
        }
    }
</Table>

@code {
    private List<TItem> _data;

    [Parameter]
    public List<TItem> Data
    {
        get => _data;
        set => _data = value ?? new List<TItem>();
    }

    [Parameter]
    public TableColumnViewModel[] Columns { get; set; }
}
```
where TableColumnViewModel is defined simply as
```csharp
public class TableColumnViewModel
{
    public string Key { get; set; }
    public string Label { get; set; }
}
```
I would like to create an instance of the MasterTable in a page for Daily Tasks but so far I'm only able to get it to display like this:
My attempt to implement MasterTable is as follows:
```razor
@page "/Tasks/Daily"
@using MyNamespace.Blazor.Services
@using MyNamespace.Blazor.ViewModels
@using MyNamespace.Api.Client.Model
@inject ITasksService _tasksService

<h1>Daily Tasks</h1>

<MasterTable TItem="TaskStatus" Data="_tasks" Columns="cols">
</MasterTable>

@code {
    private List<TaskStatus> _tasks = new List<TaskStatus>();

    protected override async Task OnInitializedAsync()
    {
        _tasks = await _tasksService.GetTaskStatusAsync();
    }

    TableColumnViewModel[] cols =
    {
        new TableColumnViewModel { Key = "id", Label = "ID" },
        new TableColumnViewModel { Key = "description", Label = "Description" },
        new TableColumnViewModel { Key = "type", Label = "Type" }
    };
}
```
With TaskStatus defined as
```csharp
public class TaskStatus
{
    public TaskStatus(int taskStatusId = default(int), string statusDescription = default(string))
    {
        this.TaskStatusId = taskStatusId;
        this.StatusDescription = statusDescription;
    }

    public int TaskStatusId { get; set; }
    public string StatusDescription { get; set; }
}
```
What do I need to do to get the MasterTable template to display the list of TaskStatus objects instead of the keys from TableColumnViewModel?
To be clear: the reason I am wrapping the third-party component instead of using it directly is that I want to isolate its CSS, so that only the necessary CSS is loaded.
-
Blazor - app.UseIdentityServer(); with .pfx key file - Unexpected character encountered while parsing number
I have created a new Blazor WebAssembly App with Individual User Accounts, Store user accounts in-app and ASP.NET Core hosted in .NET 5. When deploying my app to Azure App Service I get the following error:
```
Object reference not set to an instance of an object.
   at Microsoft.Extensions.DependencyInjection.IdentityServerBuilderConfigurationExtensions
```
Reading these links I have to provide my own certificate in production for IdentityServer:
Blazor Web Assembly App .Net Core Hosted: publish runtime error
https://stackoverflow.com/a/56904000/3850405
I then created a `.pfx` file like this, and I have verified that it works and that my password is correct: https://stackoverflow.com/a/48790088/3850405

I then placed the `.pfx` file in my `Server` project's root folder and marked *Copy to Output Directory* as *Copy Always*. I then updated `appsettings.json` to look like this:

```json
"IdentityServer": {
  "Clients": {
    "BlazorTest.Client": {
      "Profile": "IdentityServerSPA"
    }
  },
  "Key": {
    "Type": "File",
    "FilePath": "localhost.pfx",
    "Password": "MySercurePassword123?"
  }
},
```

Now the project works neither locally nor on Azure. It fails on `app.UseIdentityServer();` in `Startup.cs` with the following error:

```
Newtonsoft.Json.JsonReaderException: 'Unexpected character encountered while parsing number: �. Path '', line 1, position 1.'
```
According to Microsoft docs my certificate should be valid:
A production certificate to use for signing tokens.
- There are no specific requirements for this certificate; it can be a self-signed certificate or a certificate provisioned through a CA authority.
- It can be generated through standard tools like PowerShell or OpenSSL.
- It can be installed into the certificate store on the target machines or deployed as a .pfx file with a strong password.
If I load the key like this it works:
```json
"Key": {
  "Type": "Store",
  "StoreName": "My",
  "StoreLocation": "CurrentUser",
  "Name": "CN=blazortest"
}
```
-
Is there a way to navigate to a page and pass parameters without using the address bar in Blazor?
This is present in many modern SPA libraries/frameworks...
I will supply an example using React (But it could be Angular or Vue), you can do something like...
```javascript
this.props.router.push({
  pathname: '/login-successfully',
  state: { userId: 'john', name: 'John Doe' }
})
```
and then on the initialization of the "other-page" you will have:
```javascript
const { state } = useLocation();
const { userId, name } = state;
```
and you can render things like
```jsx
<p>Welcome Back, {name}!</p>
```
Such a feature is very useful in many scenarios, but reading the documentation of routing in Blazor at https://docs.microsoft.com/en-us/aspnet/core/blazor/fundamentals/routing?view=aspnetcore-5.0 I cannot find anything similar. `NavigationManager` only has these parameters:

```csharp
public void NavigateTo(string uri, bool forceLoad = false);
```
Is there an equivalent approach that I can use? I know a workaround: create a singleton class, store the data there, and display it on the login-successfully page. But I really hope to find a better solution.
-
How to keep my Blazor Server side app alive on IIS 8.5
I need to keep my Blazor Server side app alive all the time.
I tried setting my IIS 8.5 application pool's Start Mode to AlwaysRunning and its Idle Time-out to 0, but the app still shuts down ("caught stopping signal...") after somewhere between 23 and 29 hours.
What do I have to do to keep it alive? Which setting did I miss? Do I have to add something to web.config?
-
How to turn a single page of a large web site into a stand-alone PWA with Blazor
I have a web site with thousands of pages and more being added daily due to the fact that users can add a page for their own product and associated information. In addition to storing the data the user enters when creating a new page in a database to be used when serving that page, I want to be able to create a Blazor PWA with just that one page so the user can install it on their device(s) and see updates to comments, etc. as the changes occur. My questions are these:
Can I programmatically create a Blazor PWA and then send the user the link so they can install just that one page on their device?
Can a Blazor server app contain Blazor client pages?
-
Frequent page updates in a Blazor Server application
In a Blazor Server application, is it okay to send events and call `StateHasChanged` very often, e.g., 500 times per second?

One of my pages needs to react to an external event and then update its state accordingly. I found the following solution:
- Create a service that detects the external event and invokes a C# event.
- Inject the service into the razor page.
- In the page, subscribe to the event and call `InvokeAsync(() => StateHasChanged())` in the handler.
This already works correctly. However, the event may occur very often, e.g., 500 times per second, and I worry about the performance of client and server. Unfortunately, I don't understand which part happens on the server, which part happens on the client, and which data is sent between them.
- Are the events actually sent 500 times per second from the server to the client? I think this would consume a lot of bandwidth.
- Does the client actually render the page after each call to `StateHasChanged`? I think this would impose a high CPU load on the client.