.NET 5.0 web project - unable to change the listening URL at startup (+ Docker)
I think I have tried just about every possible way, but no matter what I do, my .NET 5.0 web app always binds to localhost:5000.
At startup I get this:
webbackend | warn: Microsoft.AspNetCore.Server.Kestrel[0]
webbackend | Unable to bind to http://localhost:5000 on the IPv6 loopback interface: 'Cannot assign requested address'.
webbackend | info: Microsoft.Hosting.Lifetime[0]
webbackend | Now listening on: http://localhost:5000
webbackend | info: Microsoft.Hosting.Lifetime[0]
webbackend | Application started. Press Ctrl+C to shut down.
webbackend | info: Microsoft.Hosting.Lifetime[0]
webbackend | Hosting environment: Production
webbackend | info: Microsoft.Hosting.Lifetime[0]
webbackend | Content root path: /app
Even though I have these in place:
Program.cs:
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureLogging(logging =>
        {
            logging.ClearProviders();
            logging.AddConsole();
        })
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseKestrel()
                .UseStartup<Startup>()
                .UseUrls("http://0.0.0.0:80");
        });
appsettings.json:
"commands": {
"web": "Microsoft.AspNet.Server.Kestrel --server.urls http://0.0.0.0:80"
}
Dockerfile:
ENTRYPOINT ["dotnet", "webbackend.dll", "--urls", "http://0.0.0.0:80"]
docker-compose.yml:
webbackend:
  image: local_webbackend
  container_name: webbackend
  networks:
    - my_network
  environment:
    ASPNETCORE_URLS: http://+:80
  ports:
    - "5001:80"
  expose:
    - "5432"
    - "5001"
  depends_on:
    postgresdb:
      condition: service_healthy
I really don't understand what is going on. I just want this app to listen on localhost:80 inside its Docker container, with that port then mapped to 5001 on the docker-compose network.
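One thing worth checking (a sketch of a hypothesis, not a confirmed diagnosis): with Host.CreateDefaultBuilder, both the --urls argument and the ASPNETCORE_URLS variable are read from configuration, but an explicit UseUrls() call in code generally takes precedence over them, and a malformed value can make Kestrel fall back to the default http://localhost:5000. Dropping the hard-coded call and letting the environment decide might look like this:

```csharp
// Sketch: let configuration (ASPNETCORE_URLS / --urls) choose the listen
// address instead of hard-coding it. CreateDefaultBuilder already wires up
// environment variables and command-line arguments as configuration sources.
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureLogging(logging =>
        {
            logging.ClearProviders();
            logging.AddConsole();
        })
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
            // No UseUrls() here: the docker-compose value
            // ASPNETCORE_URLS=http://+:80 can now take effect.
        });
```

If the log still shows localhost:5000 after a change like this, the container may simply be running a stale image; since the compose file references a prebuilt local_webbackend image, rebuilding it (e.g. docker-compose up --build, assuming a build context is configured) is worth a try.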
See also questions close to this topic
- Cascading databound <ajaxtoolkit:combobox> and <asp:dropdownlist> in asp.net
I have an asp.net search form that includes an ajaxToolkit ComboBox and a standard asp DropDownList. Both controls are bound to two separate SqlDataSource components. Something like this:
<ajaxToolkit:ComboBox ID="cbConvenzionato" runat="server" AutoCompleteMode="SuggestAppend"
    DropDownStyle="DropDownList" DataSourceID="sdsConvenzionati" DataTextField="nome"
    DataValueField="id" AutoPostBack="true"
    OnSelectedIndexChanged="cbConvenzionato_SelectedIndexChanged" />

<asp:DropDownList ID="ddlVeicoli" DataSourceID="sdsVeicoli" DataTextField="targa"
    DataValueField="id" runat="server" AutoPostBack="true"
    OnSelectedIndexChanged="ddlVeicoli_SelectedIndexChanged" AppendDataBoundItems="true">
    <asp:ListItem Text="TUTTI" Value="" Selected="True" />
</asp:DropDownList>

<asp:SqlDataSource ID="sdsConvenzionati" runat="server"
    ConnectionString="<%$ ConnectionStrings:db %>"
    ProviderName="<%$ ConnectionStrings:db.ProviderName %>"
    SelectCommand="SELECT id, nome FROM anag_convenzionati ORDER BY nome;" />

<asp:SqlDataSource ID="sdsVeicoli" runat="server" EnableCaching="false"
    CancelSelectOnNullParameter="false"
    ConnectionString="<%$ ConnectionStrings:db %>"
    ProviderName="<%$ ConnectionStrings:db.ProviderName %>"
    SelectCommand="SELECT id, targa FROM veicoli_contratti
        WHERE ((@id_convenzionato IS NULL) OR (id_convenzionato = @id_convenzionato))
        ORDER BY targa;">
    <SelectParameters>
        <asp:ControlParameter Name="id_convenzionato" ControlID="cbConvenzionato"
            PropertyName="SelectedValue" Direction="Input"
            ConvertEmptyStringToNull="true" DbType="Int32" DefaultValue="" />
    </SelectParameters>
</asp:SqlDataSource>
There's also a third SqlDataSource (sdsNoleggi) that feeds a GridView, but that's not a problem right now. In code behind I have two event handlers:
protected void cbConvenzionato_SelectedIndexChanged(object sender, EventArgs e)
{
    sdsVeicoli.Select(DataSourceSelectArguments.Empty);
    Search();
}

protected void ddlVeicoli_SelectedIndexChanged(object sender, EventArgs e)
{
    Search();
}

private void Search()
{
    sdsNoleggi.Select(DataSourceSelectArguments.Empty);
}
I thought this way I would filter the ddlVeicoli items after selecting an item in cbConvenzionato ... but it's not working. Why? If I look into the sdsVeicoli SelectParameters in the debugger, I can see id_convenzionato being correctly set to the selected value (the id coming from cbConvenzionato). I would also bet that the sdsNoleggi dataset will be correctly updated with the new values, since I have done this many times before. So why isn't the bound control updated? I also tried to force a ddlVeicoli.DataBind() after the sdsVeicoli.Select() call ... but this had no effect.
- Swagger UI not working for REST API (asp.net web api2) application
I have an asp.net mvc project with .NET Framework 4.7.2, and the same project contains asp.net web api2 controllers in a separate folder: Controllers. The solution is legacy, and the APIs are already in use in the PRODUCTION environment. Now I added the Swagger nuget package (Install-Package Swashbuckle -Version 5.6.0) to this existing project. After that I see a SwaggerConfig.cs added to the App_Start folder in the Solution Explorer.
Here the asp.net mvc controllers are used by App1, pointing to the server www.app1.com, and the asp.net web api2 controllers are used by another frontend Angular app, App2, pointing to the server www.app2.com.
The complete deployment packages for both App1 and App2 are hosted in IIS.
Any request related to App1 is handled by www.app1.com, and any api request related to App2 (the Angular frontend) is also handled by App1, using IIS Rewrite rules at the App2 level that redirect every api request to App1.
Now when I try to navigate to www.app1.com/swagger, I see it loads the Swagger UI for me; but when I try to navigate to www.app2.com/swagger, it does not work and instead loads the Angular frontend application.
Here is how App1 and App2 look at the IIS level:
Can anyone help me here by providing their guidance to fix this issue?
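A hedged guess at a direction (the rule name and patterns below are hypothetical; adjust them to the actual site config): since the App2 rewrite rules only forward api requests to App1, a /swagger request falls through to the Angular catch-all rule. Adding a rule at the App2 level that forwards swagger traffic the same way the api rules do might look like:

```xml
<!-- Sketch of a web.config rewrite fragment for App2 (hypothetical names). -->
<rewrite>
  <rules>
    <!-- Forward swagger traffic to App1, before the SPA catch-all rule. -->
    <rule name="SwaggerToApp1" stopProcessing="true">
      <match url="^swagger(.*)" />
      <action type="Rewrite" url="http://www.app1.com/swagger{R:1}" />
    </rule>
    <!-- Existing api rules and the Angular catch-all stay below this one. -->
  </rules>
</rewrite>
```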
- Cors error missing allow origin header. MailKit
I get a "cors error missing allow origin header" error on only ONE POST request. My CORS policy:

public void ConfigureServices(IServiceCollection services)
{
    services.AddCors(options =>
    {
        options.AddPolicy("AllowAllOrigins", builder =>
        {
            builder.SetIsOriginAllowed(_ => true)
                .AllowAnyHeader()
                .AllowAnyMethod()
                .AllowCredentials();
        });
    });
}
Every request works fine, but this one POST request fails; it's really weird. The code in the failing controller action uses MailKit and SMTP to send email, maybe that's the cause.
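A hedged hypothesis, not a confirmed diagnosis: if the MailKit/SMTP call throws, the resulting unhandled 500 response may carry no Access-Control-Allow-Origin header, so the browser reports a CORS failure instead of the real server error. Catching the failure in the action (all names below are hypothetical) would surface the real cause:

```csharp
[HttpPost]
public async Task<IActionResult> SendMail(MailRequest request) // hypothetical types
{
    try
    {
        await _mailService.SendAsync(request); // the MailKit/SMTP call
        return Ok();
    }
    catch (Exception ex)
    {
        // A handled error response goes through the normal pipeline, so the
        // CORS headers stay on the reply and the SMTP error becomes visible.
        return StatusCode(500, ex.Message);
    }
}
```

Checking the failing request in the browser's network tab for a 500 status (rather than a blocked preflight) would confirm or rule this out.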
- GroupJoin without having to select contents from joined list
So I've got two classes that I'm trying to combine into a ViewModel for use in the application. I'm trying to find the lambda syntax to perform a left join between the two objects (ActualDelivery, DeliveryHdr). For some context, the DeliveryHdr is a scheduled delivery on a schedule, and the ActualDelivery is the details collected once the delivery is made. I'm using the provided method (CovertToViewModel) to combine the two objects and return a list of ViewModels. It functions like I want it to, but I'm thinking there has to be an easier way. How can I perform a 1-to-1 left join without having to perform Selects on the joined list like I've done below?
public partial class ActualDelivery
{
    public int ActualDeliveryId { get; set; }
    public int DeliveryId { get; set; }
    public string DockDoor { get; set; }
    public string ArrivalComments { get; set; }
    public string SetTemp { get; set; }
    public string Unloader { get; set; }
    public DateTime? UnloadStart { get; set; }
    public DateTime? UnloadEnd { get; set; }
    public int? ActualAmbientFull { get; set; }
    public int? ActualAmbientMixed { get; set; }
    public int? ActualReefer { get; set; }
    public int? ActualCageVault { get; set; }
    public int? ActualBlc { get; set; }
    public int? ActualTotalPallets { get; set; }
    public int? ActualTotalLpns { get; set; }
    public DateTime? CreatedDttm { get; set; }
    public string CreatedUser { get; set; }
    public DateTime? UpdatedDttm { get; set; }
    public string UpdatedUser { get; set; }
}
public partial class DeliveryHdr
{
    public int DeliveryId { get; set; }
    public DateTime? LoadDate { get; set; }
    public DateTime? ScheduledTime { get; set; }
    public DateTime? ArrivalTime { get; set; }
    public int? LoadNumber { get; set; }
    public string Shift { get; set; }
    public string SupplierNumber { get; set; }
    public string SupplierName { get; set; }
    public string Carrier { get; set; }
    public bool? IsCancelled { get; set; }
    public bool? IsConfirmed { get; set; }
    public bool? IsBackHaul { get; set; }
    public int? ActAmbientFullPallets { get; set; }
    public int? ActAmbientMixedPallets { get; set; }
    public int? ActReeferPallets { get; set; }
    public int? ActCageVaultPallets { get; set; }
    public int? ActBlcpallets { get; set; }
    public int? ActTotalPallets { get; set; }
    public int? ActTotalLpns { get; set; }
    public bool? MarkAsDeleted { get; set; }
    public string CreatedUser { get; set; }
    public DateTime? CreatedDttm { get; set; }
    public string UpdatedUser { get; set; }
    public DateTime? UpdatedDttm { get; set; }
}
public class ActualDeliveriesViewModel
{
    public int DeliveryId { get; set; }
    public int ActualDeliveryId { get; set; }
    public string CompositeId { get; set; }
    public DateTime? LoadDate { get; set; }
    public DateTime? ScheduledTime { get; set; }
    public DateTime? ActualTime { get; set; }
    public string Shift { get; set; }
    public int? LoadNumber { get; set; }
    public string SupplierName { get; set; }
    public string Carrier { get; set; }
    public bool? IsConfirmed { get; set; }
    public bool? IsBackHaul { get; set; }
    public bool? IsCancelled { get; set; }
    public string DockDoor { get; set; }
    public string ArrivalComments { get; set; }
    public string SetTemp { get; set; }
    public string Unloader { get; set; }
    public DateTime? UnloadStart { get; set; }
    public DateTime? UnloadEnd { get; set; }
    public int? ActualAmbientFull { get; set; }
    public int? ActualAmbientMixed { get; set; }
    public int? ActualReefer { get; set; }
    public int? ActualCageVault { get; set; }
    public int? ActualBlc { get; set; }
    public int? ActualTotalPallets { get; set; }
    public int? ActualTotalLpns { get; set; }
}
public List<ActualDeliveriesViewModel> CovertToViewModel(List<DeliveryHdr> deliveries, List<ActualDelivery> actuals)
{
    return deliveries.Where(e => e.MarkAsDeleted == false)
        .GroupJoin(actuals, d => d.DeliveryId, a => a.DeliveryId, (d, a) => new ActualDeliveriesViewModel()
        {
            DeliveryId = d.DeliveryId,
            ActualDeliveryId = a.Select(i => i.ActualDeliveryId).FirstOrDefault(),
            CompositeId = d.DeliveryId + "_" + a.Select(i => i.ActualDeliveryId).FirstOrDefault(),
            LoadDate = d.LoadDate,
            ScheduledTime = d.ScheduledTime,
            ActualTime = d.ArrivalTime,
            LoadNumber = d.LoadNumber,
            SupplierName = d.SupplierName,
            Carrier = d.Carrier,
            IsConfirmed = d.IsConfirmed,
            IsBackHaul = d.IsBackHaul,
            IsCancelled = d.IsCancelled,
            DockDoor = a.Select(i => i.DockDoor).FirstOrDefault(),
            ArrivalComments = a.Select(i => i.ArrivalComments).FirstOrDefault(),
            SetTemp = a.Select(i => i.SetTemp).FirstOrDefault(),
            Unloader = a.Select(i => i.Unloader).FirstOrDefault(),
            UnloadStart = a.Select(i => i.UnloadStart).FirstOrDefault(),
            UnloadEnd = a.Select(i => i.UnloadEnd).FirstOrDefault(),
            ActualAmbientFull = a.Select(i => i.ActualAmbientFull).FirstOrDefault(),
            ActualAmbientMixed = a.Select(i => i.ActualAmbientMixed).FirstOrDefault(),
            ActualBlc = a.Select(i => i.ActualBlc).FirstOrDefault(),
            ActualCageVault = a.Select(i => i.ActualCageVault).FirstOrDefault(),
            ActualReefer = a.Select(i => i.ActualReefer).FirstOrDefault(),
            ActualTotalPallets = (a.Select(i => i.ActualAmbientFull).FirstOrDefault()
                + a.Select(i => i.ActualAmbientMixed).FirstOrDefault()
                + a.Select(i => i.ActualBlc).FirstOrDefault()
                + a.Select(i => i.ActualCageVault).FirstOrDefault()
                + a.Select(i => i.ActualReefer).FirstOrDefault()) ?? 0
        }).ToList();
}
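For reference, a common way to get a true 1-to-1 left join is to flatten the GroupJoin with SelectMany and DefaultIfEmpty, so each delivery pairs with a single (possibly null) ActualDelivery instead of a sequence. A sketch of the method body (only a few fields shown; the rest follow the same pattern):

```csharp
// Sketch: 'a' is the first matching ActualDelivery, or null when none exists,
// so each field is read once with the null-conditional operator.
return deliveries
    .Where(d => d.MarkAsDeleted == false)
    .GroupJoin(actuals,
               d => d.DeliveryId,
               a => a.DeliveryId,
               (d, matches) => new { d, matches })
    .SelectMany(x => x.matches.DefaultIfEmpty(),
                (x, a) => new ActualDeliveriesViewModel
    {
        DeliveryId = x.d.DeliveryId,
        ActualDeliveryId = a?.ActualDeliveryId ?? 0,
        CompositeId = x.d.DeliveryId + "_" + (a?.ActualDeliveryId ?? 0),
        DockDoor = a?.DockDoor,
        UnloadStart = a?.UnloadStart,
        ActualTotalPallets = (a?.ActualAmbientFull + a?.ActualAmbientMixed
            + a?.ActualBlc + a?.ActualCageVault + a?.ActualReefer) ?? 0
        // ...remaining fields follow the same x.d / a? pattern
    })
    .ToList();
```

Note the nullable arithmetic keeps the original semantics: if any of the summed values is null, the whole sum is null and ?? 0 applies.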
- How to update a value to null using the Elasticsearch NEST API's UpdateAsync method?
I'm using the Elasticsearch NEST API (7.8.1), and I'm having trouble using the client.UpdateAsync<T> method to update a value to null. Is there any work-around for this issue?
Example model:
public class ProductSalesHistory
{
    public int Id { get; set; }
    public string Sku { get; set; }
    public string Disposition { get; set; } //This should be null after update
}
Example of original document:
{ "id": 1, "sku": "somesku", "disposition": "C" }
Example of updated document:
{ "id": 1, "sku": "somesku", "disposition": null }
Example of NEST API call:
var response = await Client.UpdateAsync<ProductSalesHistory>(id, u => u
    .Index(IndexName)
    .Doc(document)
    .DocAsUpsert(true)
    .Refresh(Refresh.False));
As a result, NEST serializes the document to this JSON:
{ "id": 1, "sku": "somesku" }
As you can see, no "disposition" value is provided to Elasticsearch, and as a result nothing is changed in the document.
What I tried:
- I tried to add the [JsonProperty(NullValueHandling = NullValueHandling.Include)] attribute to the ProductSalesHistory.Disposition property, but it didn't work.
- Adding () => new JsonSerializerSettings { NullValueHandling = NullValueHandling.Include } to the ConnectionSettings as a parameter is not an option for me, as I don't want side effects on other queries.
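One hedged workaround, not taken from the question and worth verifying against the NEST 7.x documentation: a scripted update bypasses client-side document serialization entirely, so the null assignment happens server-side and cannot be dropped by the serializer's null-value handling:

```csharp
// Sketch: use a painless script instead of a partial document, so the null
// assignment is not stripped when the client serializes the update body.
var response = await Client.UpdateAsync<ProductSalesHistory>(id, u => u
    .Index(IndexName)
    .Script(s => s.Source("ctx._source.disposition = null"))
    .Refresh(Refresh.False));
```

The trade-off is that scripted updates cannot be combined with DocAsUpsert the same way; an explicit Upsert document would be needed for the missing-document case.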
- How to set the minimum required .NET framework version for my Nuget?
I am exposing several (Framework) class libraries through Nuget to my main application. I recently updated my main application to .NET 4.7.2, whereas my Nugets still target .NET 4.5.2.
Surprisingly, I am able to install these Nugets into my 4.7.2 application. I would expect this to be blocked, as I am (at least that is what I thought) explicitly targeting .NET 4.5.2 in my Nuspec file:
<files>
    <file src="bin\$configuration$\$id$.pdb" target="lib\net452\" />
</files>
NuGet Package Manager does not show the minimum .NET version:
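For what it's worth, this behavior is by design: an assembly under lib\net452 is considered compatible with any project targeting net452 or higher, so installing it into a net472 project is expected to succeed, not be blocked. The lib folder name is what declares the minimum framework, and per-framework dependency groups can state the same constraint explicitly. A sketch of the relevant nuspec parts (not a full, working nuspec):

```xml
<!-- Sketch: the lib folder name declares the minimum framework; a dependency
     group with targetFramework="net452" makes the constraint explicit. -->
<package>
  <metadata>
    <dependencies>
      <group targetFramework="net452" />
    </dependencies>
  </metadata>
  <files>
    <file src="bin\$configuration$\$id$.dll" target="lib\net452\" />
    <file src="bin\$configuration$\$id$.pdb" target="lib\net452\" />
  </files>
</package>
```

To actually forbid installation into anything older than net452, nothing more is needed; to forbid installation into newer frameworks, there is no standard mechanism — NuGet deliberately treats higher framework versions as compatible.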
- Why is my cgi file changed upon image build when in /usr/lib/cgi-bin?
I have two containers:
- "build", which compiles my cgi application, along with a linked library (version 1.8), from ubuntu:bionic-20201119
- "exec", which runs it (and includes the above library as well), also from ubuntu:bionic-20201119
After I launch my "exec" container, I noticed that my cgi fails because it is missing the above library, but at a previous version:
error while loading shared libraries: libzoo_service.so.1.6: cannot open shared object file: No such file or directory
while it should be "libzoo_service.so.1.8": the one that is actually provided and that was built along with the cgi app.
I was quite puzzled, and thus tried to understand.
When I "ldd" the cgi file from the "build" container, it says :root@build:/# ldd /usr/lib/cgi-bin/zoo_loader.cgi linux-vdso.so.1 (0x00007ffe3d56a000) libzoo_service.so.1.8 => /usr/lib/libzoo_service.so.1.8 (0x00007f2f18186000) ...
When I do the same from the "exec" container, I see instead :
root@exec:/# ldd /usr/lib/cgi-bin/zoo_loader.cgi
        linux-vdso.so.1 (0x00007fffdf1af000)
        libzoo_service.so.1.6 => not found
        ...
And it gets better: from the same "exec" container, when I inspect the very same cgi file copied to 2 different paths, I get different results:

root@exec:/# ldd /usr/lib/cgi-bin/zoo_loader.cgi
        linux-vdso.so.1 (0x00007fffdf1af000)
        libzoo_service.so.1.6 => not found
        ...
root@exec:/# ldd /tmp/cgi-bin/zoo_loader.cgi
        linux-vdso.so.1 (0x00007ffe3d56a000)
        libzoo_service.so.1.8 => /usr/lib/libzoo_service.so.1.8 (0x00007f2f18186000)
        ...
And the best part is that the cgi file under /usr/lib/cgi-bin is half the size of the one from /tmp/cgi-bin, although they were copied from the same file:

FROM ubuntu:bionic-20201119
COPY dist/libzoo_service.so.1.8 /usr/lib/
COPY dist/zoo_loader.cgi /usr/lib/cgi-bin/
COPY dist/zoo_loader.cgi /tmp/cgi-bin/
So... I am quite sure this is absolutely legitimate... and that it is about how Ubuntu/Linux build shared library trees: the fact that only cgi files under /usr/lib/ are affected sounds like a good hint.
But I am not familiar enough with either cgi or Linux to understand what is happening here... or how I can fix it so my cgi app is actually linked with its matching libzoo_service.so.1.8.
Thanks in advance!
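As a diagnostic direction (a sketch of how to narrow this down, not a confirmed explanation): since the two copies have different sizes, the first thing to confirm is whether a later image layer overwrote the copy under /usr/lib/cgi-bin (for example, an apt package installed after the COPY that ships its own zoo_loader.cgi). Comparing checksums answers that, and listing a binary's NEEDED entries shows what it was actually linked against. Demonstrated here on /bin/ls; substitute the two zoo_loader.cgi paths inside the "exec" container:

```shell
# Compare the two copies first; different checksums mean something rewrote
# the file under /usr/lib/cgi-bin after it was copied in:
#   md5sum /usr/lib/cgi-bin/zoo_loader.cgi /tmp/cgi-bin/zoo_loader.cgi
# Then list the shared libraries a binary actually requests (shown on
# /bin/ls so the command is runnable anywhere):
ldd /bin/ls | grep libc
```

If the checksums do differ, the fix is to reorder the Dockerfile so the COPY of the freshly built cgi happens after whatever step overwrites it.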
- TSC not found in Docker build
When building an image that needs to be compiled from typescript, I get this error.
sh: 1: tsc: not found
The command '/bin/sh -c npm run tsc' returned a non-zero code: 127
Here is the relevant code:
docker-compose.yaml

version: '3.1'
services:
  nodeserver:
    build:
      context: .
      target: prod
    ports:
      - "3000:3000"
    volumes:
      - ./src:/app/src
      - ./public:/app/public
      - ./templates:/app/templates
Dockerfile
FROM node:15.11.0 AS base
EXPOSE 3000
ENV NODE_ENV=production
WORKDIR /app
COPY package*.json ./
RUN npm install --only=production && npm cache clean --force

##########################################################################################

FROM base AS dev
ENV NODE_ENV=development
RUN npm install --only=development
CMD npm run dev

##########################################################################################

FROM dev AS source
COPY dist dist
COPY templates templates
COPY public public
RUN npm run tsc

##########################################################################################

FROM base AS test
COPY --from=source /app/node_modules /app/node_modules
COPY --from=source /app/templates /app/templates
COPY --from=source /app/public /app/public
COPY --from=source /app/dist /app/dist
CMD npm run test

##########################################################################################

FROM test AS prod
CMD npm start
package.json
{ "name": "nodeserver", "version": "1.0.0", "description": "", "main": "index.js", "scripts": { "start": "node ./dist/app.js", "deploy": "git add . && git commit -m Heroku && git push heroku main", "tsc": "tsc --outDir ./dist", "dev": "npm run ts-watch", "test": "npm run jest --runInBand", "ts-watch": "tsc-watch --project . --outDir ./dist --onSuccess \"nodemon ./dist/app.js\"" }, "jest": { "testEnvironment": "node" }, "repository": { "type": "git", "url": "git+https://github.com/MiquelPiza/nodeserver.git" }, "author": "", "license": "ISC", "bugs": { "url": "https://github.com/MiquelPiza/nodeserver/issues" }, "homepage": "https://github.com/MiquelPiza/nodeserver#readme", "dependencies": { "@sendgrid/mail": "^7.4.2", "bcryptjs": "^2.4.3", "express": "^4.17.1", "handlebars": "^4.7.7", "jsonwebtoken": "^8.5.1", "lodash": "^4.17.20", "mongodb": "^3.6.4", "mongoose": "^5.11.19", "multer": "^1.4.2", "socket.io": "^4.0.0", "validator": "^13.5.2" }, "devDependencies": { "@types/bcryptjs": "^2.4.2", "@types/express": "^4.17.11", "@types/jsonwebtoken": "^8.5.0", "@types/lodash": "^4.14.168", "@types/mongoose": "^5.10.3", "@types/multer": "^1.4.5", "@types/node": "^14.14.33", "@types/sendgrid": "^4.3.0", "@types/validator": "^13.1.3", "env-cmd": "^10.1.0", "jest": "^26.6.3", "nodemon": "^2.0.7", "supertest": "^6.1.3", "tsc-watch": "^4.2.9", "typescript": "^4.2.3" }, "engines": { "node": "15.11.0" } }
tsconfig.json
{ "compilerOptions": { "target": "es5", "module": "commonjs", "strict": true, "strictNullChecks": false, "esModuleInterop": true, "skipLibCheck": true, "forceConsistentCasingInFileNames": true, }, "include": ["src"] }
This dockerfile works:
FROM node:15.11.0 AS build
WORKDIR /app
COPY package.json .
RUN npm install
ADD . .
RUN npm run tsc

FROM node:15.11.0
WORKDIR /app
COPY package.json .
RUN npm install --production
ADD public ./public
ADD templates ./templates
COPY --from=build /app/dist dist
EXPOSE 3000
CMD npm start
I'm using this dockerfile for reference, from a Docker course: https://github.com/BretFisher/docker-mastery-for-nodejs/blob/master/typescript/Dockerfile. I don't see what I'm doing wrong; the source stage should have the dev dependencies, among them typescript, so it should be able to run tsc.
Any help appreciated. Thanks.
EDIT:
In addition to using npm ci instead of npm install, I had to copy tsconfig.json to the working directory (and copy src directory instead of dist, which is created by tsc) for tsc to work properly. This is the modified source stage in the Dockerfile:
FROM dev AS source
COPY src src
COPY templates templates
COPY public public
COPY tsconfig.json tsconfig.json
RUN npm run tsc
- Docker host and port info from the container
I am deploying an application in a Docker container. The application sends requests to another server with a callback URL. The callback URL contains the host and port where the app actually runs.
Configuring this callback URL in a "stable, non-dynamic" test environment is easy, because we know the IP and port where the app runs. But in Docker, the callback URL is the IP address of the host machine plus the port that was configured in the docker-compose.yml file. So both parameters are dynamic and cannot be hardcoded in the Docker image.
I need the docker host IP and the port exposed by the container somehow inside the container.
This is how my container gets the docker host machine IP:
version: '3'
services:
  my-server:
    image: ...
    container_name: my-server
    hostname: my-server
    ports:
      - "1234:9876"
    environment:
      - DOCKER_HOST_IP=${HOST_IP}
I set the host IP when I spin up the container:
HOST_IP=$(hostname -i) docker-compose up
Maybe this is not an elegant way but this is the best that I could do so far.
But I have no idea how to get the exposed port info inside the container. My idea was that once I know the host IP inside the container, I can use nmap $HOST_IP to get the list of open ports and grep for the proper line somehow. But this does not work, because I run many Docker containers on this host and I am not able to select the proper line with grep. Here is the result of the nmap:
PORT      STATE SERVICE
22/tcp    open  ssh
111/tcp   open  rpcbind
443/tcp   open  https
5001/tcp  open  commplex-link
5002/tcp  open  rfe
7201/tcp  open  dlip
1234/tcp  open  vcom-tunnel
1235/tcp  open  vcom-tunnel
1236/tcp  open  teradataordbms
60443/tcp open  unknown
So when I execute nmap from the container, I can see all of the open ports on my host machine, but I have no idea how to select the line which belongs to the container I am in. Can I somehow customize the service name before docker spins up the containers?
What is the best way to get the port number that was opened on the host machine by the container?
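One low-tech approach (a sketch; PUBLISHED_PORT is a variable name invented here): the compose file is the one place that already knows the published port, so it can pass that value into the container the same way the host IP is already passed, instead of probing with nmap from inside:

```yaml
version: '3'
services:
  my-server:
    image: ...
    container_name: my-server
    hostname: my-server
    ports:
      - "1234:9876"
    environment:
      - DOCKER_HOST_IP=${HOST_IP}
      # Duplicates the host side of the "1234:9876" mapping above, so the app
      # can build its callback URL as http://$DOCKER_HOST_IP:$PUBLISHED_PORT
      - PUBLISHED_PORT=1234
```

The duplication between the ports mapping and the environment entry can be removed by putting the port in a single compose variable (e.g. ports: - "${PUBLISHED_PORT}:9876") supplied from an .env file.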