Choosing output mode
I am new to OSDev, and I am currently trying to write my OS from Bare Bones, i.e. using GRUB as my bootloader, on the x86 architecture. I want my OS to support both BIOS and UEFI boot.
I am currently trying to write my own graphical interface, and I am confused about how to do it so that it works on all platforms.
I observed that on UEFI 2.x I must use GOP, on UEFI 1.x I should use UGA, and on BIOS I must use either VESA or VGA.
Does that mean that, in order to implement a cross-firmware OS, I would have to support all of them and choose one of the supported interfaces at runtime? How can my OS even tell whether it was booted by UEFI, and which version of UEFI?
Should I just abandon the compatibility goal and simply pick one setting and implement my OS for that one?
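Whichever interfaces end up setting the mode, the kernel-side drawing code can stay firmware-agnostic: GOP, UGA, and VESA linear-framebuffer modes all ultimately hand you a linear framebuffer described by a base address, pitch, resolution, and bits per pixel (this is also what GRUB can pass you via the Multiboot2 framebuffer tag). A minimal sketch of drawing against such a description; the field names are illustrative, and the buffer is allocated with malloc here so the logic can run in a hosted environment:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Generic description of a linear framebuffer, filled in from whichever
 * interface set the mode (GOP, UGA, VESA LFB, ...). Field names are
 * illustrative, not taken from any particular spec. */
typedef struct {
    uint8_t  *base;   /* start of the framebuffer */
    uint32_t  pitch;  /* bytes per scanline (may exceed width * bytes/px) */
    uint32_t  width, height;
    uint32_t  bpp;    /* bits per pixel; 32 assumed by put_pixel below */
} framebuffer_t;

static void put_pixel(framebuffer_t *fb, uint32_t x, uint32_t y, uint32_t rgb)
{
    if (x >= fb->width || y >= fb->height)
        return;                       /* clip instead of writing out of bounds */
    /* pitch, not width, determines the byte offset of each row */
    uint32_t *row = (uint32_t *)(fb->base + (size_t)y * fb->pitch);
    row[x] = rgb;
}
```

The point of the indirection is that only the boot path needs to know which firmware interface filled in the structure; everything above `put_pixel` is shared.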
See also questions close to this topic
-
My custom kernel printing out weird symbols
I'm trying to create a simple kernel, and when I try to print to the screen in 32 bit protected mode by writing to 0xb8000 as shown below:
char *screen = (char *)VIDEO_ADDRESS;
int offset = printutils_get_offset(row, col);
if (content == '\n') {
    printutils_set_cursor_offset(printutils_get_offset(printutils_get_offset_row(offset) + 1, 0));
} else {
    screen[offset] = content;
    screen[offset + 1] = color;
    offset += 2;
}
printutils_set_cursor_offset(offset);
The positions and colors look fine, but the letters don't show up.
Output: (screenshot of the garbled symbols)
Expected output: readable text instead of the weird symbols.
Any help will be appreciated!
Thanks!
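For reference, the logic being attempted can be written in a form that is testable outside the kernel. Each text-mode cell is two bytes (the character, then the attribute byte), so the cell offset must be scaled by 2. The helper names and VIDEO_ADDRESS are from the question; here the screen is an ordinary array standing in for the memory at 0xB8000:

```c
#include <assert.h>
#include <stdint.h>

#define COLS 80

/* Write one character cell. In the real kernel `screen` would be
 * (volatile uint8_t *)0xB8000; here it can be any buffer, so the
 * offset arithmetic can be exercised in a hosted environment. */
static void put_cell(uint8_t *screen, int row, int col, char ch, uint8_t attr)
{
    int offset = (row * COLS + col) * 2;  /* 2 bytes per cell */
    screen[offset]     = (uint8_t)ch;     /* character byte first */
    screen[offset + 1] = attr;            /* then the attribute byte */
}
```

If the attribute byte ends up in the character slot (or the offset isn't scaled by 2), you get exactly the "right position and color, wrong glyph" symptom described above.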
-
EHCI Transactions (Isochronous transfer descriptor) processing error EHCI HC Reset
I was writing a driver for EHCI. The driver detects the EHCI controller, resets it (for now with interrupts disabled), and configures it without any problem; it works fine on real hardware. But when I try to create an empty Isochronous Transfer Descriptor, QEMU gives me this error: "processing error - resetting ehci HC".
I created the driver by reading the Intel EHCI specification carefully, and I'm stuck here.
Up to now I have several questions:
- How can I set up an Isochronous Transfer Descriptor?
- How can I detect devices connected to the USB ports?
- How are endpoints used? (Is there a limit on endpoints? As I understand it, an endpoint is like a window to communicate with the device, or an access port.)
- After configuring the EHC on real hardware (unlike QEMU, it has 64-bit capability), the mouse and the keyboard power off (which may be normal).
Here is some of what my code does:
KERNELSTATUS EhcInitialize(PCI_CONFIGURATION_HEADER* PciConfiguration, EHCI_DEVICE* Ehci) {
    Ehci->EhciBase = PciGetBaseAddress(0);
    OperationnalRegisters->UsbCommand.RunStop = 0; /* OperationnalRegisters will be named (OpRegs) */
    OpRegs->ConfigurationFlag = 0;
    EhciResetController(Ehci);
    if (CapabilityRegs->CapParams.ProgrammableFrameList)
        UsbCommand.FrameListSize = 0;
    GetFrameListSize(Ehci); // Stores frame list size on the EHCI Device
    if (x64AddressingCapability)
        Ehci->QwordBitWidth = 1;
    // EhciAllocateMemory checks for the 64 bit capability, if not then KeExtendedAlloc
    // with MaxAddress = (UINT32)-1; // My kernel provides this option
    Ehci->AssignedMemory = EhciAllocateMemory(EHCI_INTIAL_MEMORY_SIZE(Ehci), /*Alignement Field*/ 0x1000);
    Ehci->High32Bits = AssignedMemory >> 32;
    if (Ehci->QwordBitWidth)
        OpRegs->ControlDataSegment = Ehci->High32Bits;
    // Sets base address for all the entries on the periodic frame list,
    // and sets their Terminate Bit (T-Bit) to 1
    EhciSetupPeriodicFrameList(Ehci);
}

UsbCommand->InterruptTresholdControl = 8;
UsbCommand->PeriodicScheduleEnable = 1;
OpRegs->ConfigurationFlag = 1;
UsbCommand->RunStop = 1;

// Ehci Setup Ports:
Ehci->NumPorts = CapRegs->StructuralParams.NumPorts;
// for (i = 0; i < NumPorts; i++)
//     PortStatusControl[i].PortOwner = 1
//     PortStatusControl[i].PortEnabled = 0
//     PortStatusControl[i].PortPower = 1
EhciSetupPorts(Ehci);

for (UINT i = 0; i < Ehci->NumPorts; i++) {
    // Ehci Enable Port: Reset Port, then enables it
    // if (!Port->CurrentConnectStatus) return FALSE; // Port cannot be enabled until its connected
    // Port->PortEnable = 0
    // Port->PortReset = 1
    // while (Port->PortReset) Port->PortReset = 0 // wait until port reset is read as 0
    // if (!Port->PortEnabled) return FALSE;
    // switch Port->LineStatus, case FULL_SPEED, HIGH_SPEED, LOW_SPEED ...
    EhciEnablePort(Ehci, i);
}
Until now everything works fine.
// processing error - resetting EHCI HC
EhciCreateIsochronousTransferDescriptor(Ehci);
Here is the code for creating the ITD:
DWORD PeriodicListPointerIndex = EhciAllocatePeriodicListPointer(Ehci);
if (PeriodicListPointerIndex == (DWORD)-1) return FALSE;
EHCI_ISOCHRONOUS_TRANSFER_DESCRIPTOR* Itd =
    (EHCI_ISOCHRONOUS_TRANSFER_DESCRIPTOR*)(((UINT64)Ehci->High32Bits << 32) |
    ((UINT64)Ehci->PeriodicPointerList[PeriodicListPointerIndex].FrameListLinkPointer << 12));
Itd->Direction = 0;
Itd->Multi = 1;
Itd->DeviceAddress = 0;
Itd->EndPointNumber = 0;
Itd->NextLinkPointer.Terminate = 1;
Ehci->PeriodicPointerList[PeriodicListPointerIndex].Type = EHCI_TYPE_ITD; // Here Crashes
Ehci->PeriodicPointerList[PeriodicListPointerIndex].Terminate = 0;
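One thing worth double-checking here (an assumption, since the structure definitions aren't shown): EHCI frame-list link pointers carry a 32-byte-aligned physical address in bits 31:5, with Typ in bits 2:1 and the T-bit in bit 0, so reconstructing the ITD address with a 12-bit shift would only be right if the stored value were a page-frame number rather than a raw link pointer. A small sketch of packing and unpacking a link pointer per that layout:

```c
#include <assert.h>
#include <stdint.h>

/* EHCI link pointer layout: bits 31:5 = address of a 32-byte-aligned
 * structure, bits 2:1 = Typ, bit 0 = T (terminate). */
#define EHCI_LP_TYPE_ITD  (0u << 1)
#define EHCI_LP_TERMINATE 1u

static uint32_t ehci_link(uint32_t phys, uint32_t type, int terminate)
{
    /* phys must already be 32-byte aligned */
    return (phys & ~0x1Fu) | type | (terminate ? EHCI_LP_TERMINATE : 0);
}

static uint32_t ehci_link_addr(uint32_t link)
{
    return link & ~0x1Fu;   /* mask off the control bits; no shifting */
}
```

If the descriptor pointer handed to the controller is computed with the wrong shift, the HC fetches from a bogus address, which is one plausible way to get QEMU's "processing error" reset.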
-
What does this star-like symbol mean in QEMU?
I'm currently creating an OS in assembly and C, but when I clear the screen in C I get a weird symbol in the bottom right-hand corner:
Clear screen code:
char *screen = (char *)VIDEO_ADDRESS;
for (int i = 0; i <= SCREEN_AREA; i++) {
    *(screen + (i * 2)) = ASCII_SPACE;
    *(screen + (i * 2 + 1)) = WHITE;
}
utils_set_cursor_offset(0, 0);
Can anyone tell me what this symbol means? That would be great! Thanks!
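As an aside, a stray character near the bottom-right corner can come from an inclusive loop bound: `i <= SCREEN_AREA` touches SCREEN_AREA + 1 cells, writing one cell past the intended area. A hosted sketch of the exclusive-bound version, with an ordinary array standing in for the buffer at 0xB8000 (SCREEN_AREA is assumed to count cells, not bytes):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define SCREEN_AREA (80 * 25)   /* cells, not bytes */

static void clear_screen(uint8_t *screen, uint8_t attr)
{
    for (int i = 0; i < SCREEN_AREA; i++) {   /* <, not <= */
        screen[i * 2]     = ' ';
        screen[i * 2 + 1] = attr;
    }
}
```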
- How do I make a BIOS-like interface in 8086 assembly?
-
How do I use ilorest to configure the AMD Preferred IO Bus Number?
There are enough instructions out there for doing this manually, but I need it to be done from a script. https://support.hpe.com/hpesc/public/docDisplay?docId=sd00001068en_us&docLocale=en_US&page=t_config_AMD_preferred_IO_bus_number.html
Use the Preferred IO Bus Number option to get improved PCIe performance.
Prerequisites: Ensure that you have enabled the Preferred IO Bus AMD option.
Procedure:
From the System Utilities screen, select System Configuration > BIOS/Platform Configuration (RBSU) > Power and Performance Options > I/O Options > Preferred IO Bus Number.
Enter the PCI bus number (ranging from 0 to 255) of a device to receive Preferred IO. All endpoints on the same AMD NorthBridge I/O (NBIO) receive the same improved performance.
Save your setting.
-
BIOS can't see bootable flash drive
I am trying to install Ubuntu using a flash drive, but my BIOS doesn't see it. I created it using Rufus; everything worked when I installed Manjaro the same way with the same flash drive.
-
Fedora 35: Install bootloader (GRUB2) manually
When installing Fedora 35 on my laptop (alongside Windows), I got the error "Failed to set new efi boot target". I followed the same steps as I did on my other laptop, which works fine with both Windows and Fedora on it. I have Windows 10 installed on this laptop and created a partition for Fedora, but after that error I can't boot Windows anymore either. This installation seems to have messed up the boot configuration.
I found this question and tried to follow the same steps. I managed to install Fedora without the bootloader (repeating the installation process and cleaning up the previously created partitions), but I am having a hard time installing/configuring the GRUB2 bootloader (from the bootable Fedora flash drive, right after installation).
The steps I tried executing after installing Fedora without the bootloader (from that question):
# mount the new Fedora installation at /mnt
sudo mount /dev/sda7 /mnt          # sda7 is the root partition
sudo mount /dev/sda5 /mnt/boot/efi # sda5 is the EFI partition
# create the mount points
sudo mkdir /mtn/dev
sudo mkdir /mtn/dev/pts
sudo mkdir /mtn/proc
sudo mkdir /mtn/sys
for i in /dev /dev/pts /proc /sys; do sudo mount -B $i /mnt$i; done
# change the root to /mnt
sudo chroot /mnt
# create grub2 configuration
sudo grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg
sudo reboot
When trying to sudo chroot /mnt, I got "chroot: failed to run command '/bin/bash': No such file or directory". Then, after googling that error, I repeated the mkdir and mount steps for /bin, /lib, and /lib64. With that, I got to chroot into /mnt, but I get another error on the next step: "sudo: error while loading shared libraries: libsudo_util.so.0: cannot open shared object file: no such file or directory".
I am thinking I might be missing some step or doing something wrong. I am new to Linux and Fedora, so any help is appreciated.
How do I manually install and set up GRUB2 bootloader into Fedora 35, from the Flash drive?
-
Casting an ELF entry addr into a function in GNU EFI
How would I cast an Elf64_Ehdr.e_entry into a function?
eg:
void (*kernel_function)() = (void (*)())*kernel_entry_point;
Current code (header is an Elf64_Ehdr; the version of elf.h included is identical to /usr/include/elf.h):
EFI_PHYSICAL_ADDRESS* kernel_entry_point;
// get entry
*kernel_entry_point = (&header)->e_entry;
// cast into a usable function
void (*kernel_function)() = (void (*)())*kernel_entry_point;
// call the casted function
kernel_function();
I've tried this, but the EFI application exits before the kernel can write a string (which in this case is "Hello World!").
If anyone can help me with this I'd appreciate it.
EDIT:
Forgot to include kernel code, so here it is
#include <stdint.h>

void write_string(int colour, const char *string)
{
    volatile char *video = (volatile char*)0xB8000;
    while (*string != 0) {
        *video++ = *string++;
        *video++ = colour;
    }
}

extern "C" void _start()
{
    write_string(15, "Hello world! \n");
    return;
}
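For what it's worth, the cast itself can be exercised in a hosted program. The suspicious part of the loader snippet above is that `kernel_entry_point` is an uninitialized pointer that is then written through; keeping the entry address in a plain integer avoids that entirely. A sketch, with an ordinary function standing in for the loaded ELF entry point:

```c
#include <assert.h>
#include <stdint.h>

static int fake_entry(void) { return 42; }   /* stands in for the loaded kernel */

typedef int (*entry_fn)(void);

static int call_entry(uint64_t entry_addr)
{
    /* Cast the numeric address back to a function pointer and call it.
     * Going through uintptr_t makes the integer-to-pointer cast explicit. */
    entry_fn kernel = (entry_fn)(uintptr_t)entry_addr;
    return kernel();
}
```

In the EFI loader the value would come from `header.e_entry` (after the segments are actually loaded at their target addresses), not from a dereferenced stray pointer.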
-
DMAR tables address map
I wanted to understand how the DMAR range is defined in Linux. In the UEFI firmware, we assign a VT-d (Intel Virtualization Technology for Directed I/O) BAR address range of 16K for each root bridge, if the root bridge hosts an IO device.
But in Linux, if I look in /proc/iomem, I see that only 4K has been assigned for each of the DMAR tables. Is this OS-specific, or is it a generic implementation?
Can someone help me in this regard?
An iomem file snippet showing the DMAR ranges:
93ffc000-93ffcfff : dmar5 --> 4k span for each DMAR
947fc000-947fcfff : dmar0
a03fc000-a03fcfff : dmar1
Thanks, Harinath
-
Freestanding C: Why does this function fail to return data depending on the structure of the array?
I'm currently watching a Udemy tutorial on basic graphical OS development, which has just begun to explain how to render text in VBE graphical mode using bitmap fonts. The presenter creates a function (auto-generated by a Python script) to return a given row of a given character by looking it up in a set of arrays.
The key point here is that the code uses a set of arrays of binary data, each containing the bitmaps of 13 characters, like so.
int getArialCharacter(int index, int y) {
    unsigned int characters_arial_0[][150] = {
        { /* List of 15 10-digit binary numbers, corresponding to rows of ASCII code 0 */ },
        { /* List of 15 10-digit binary numbers, corresponding to rows of ASCII code 1 */ },
        ...
        { /* List of 15 10-digit binary numbers, corresponding to rows of ASCII code 12 */ }
    };
    unsigned int characters_arial_1[][150] = { ... };
    unsigned int characters_arial_2[][150] = { ... };
    unsigned int characters_arial_3[][150] = { ... };
    unsigned int characters_arial_4[][150] = { ... };
    unsigned int characters_arial_5[][150] = { ... };
    unsigned int characters_arial_6[][150] = { ... };
    unsigned int characters_arial_7[][150] = { ... };

    int start = (int)(' ');
    if (index >= start && index < start + 13) {
        return characters_arial_0[index - start][y];
    } else if (index >= start + 13 && index < start + 13 * 2) {
        return characters_arial_1[index - (start + 13)][y];
    }
    ...
    else if (index >= start + 13 * 7 && index < start + 13 * 8) {
        return characters_arial_7[index - (start + 13 * 7)][y];
    }
}
I thought this seemed odd and unnecessary, so I tried refactoring it to use a single array (albeit with a different font), like so:
int font_func(int index, int y) {
    // I also tried uint8_t[128][64]
    uint8_t chars[][64] = {
        { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, // U+0000 (nul)
        { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, // U+0001
        ...
    };
    return chars[index][y];
}
However, when I attempted to use this, no text printed to the screen at all. I can think of three explanations, but none fully satisfies me:
- Since our linker only links the text sections and does not link data sections (the link command is ld -m elf_i386 -o boot/bin/kernel.img -Ttext 0x1000 boot/bin/kernel_entry.bin boot/bin/kernel.o), the large array won't get linked. This explanation seems weak, since the individual, smaller arrays in the tutorial's version were linked fine.
- Since memcpy hasn't been implemented, declaring large data structures with optimizations turned on results in a call to memcpy that doesn't go right. This explanation also seems weak, since I would expect a compiler warning.
- I made some error elsewhere in the implementation. It seems unlikely, since the only thing I changed was the function pointer which got passed to the printing function. (For the record, I've tried accounting for signed and unsigned chars. No luck.)
What seems to be the locus of this issue? Thanks for the help.
The makefile for this project is shown below, if it helps:
all: bootloader

bootloader:
	nasm boot/boot.asm -f bin -o boot/bin/boot.bin
	nasm boot/kernel_entry.asm -f elf -o boot/bin/kernel_entry.bin
	gcc -m32 -fno-lto -nostdlib -ffreestanding -nodefaultlibs -c boot/final.c -o boot/bin/kernel.o
	ld -m elf_i386 -o boot/bin/kernel.img -Ttext 0x1000 boot/bin/kernel_entry.bin boot/bin/kernel.o
	objcopy -O binary -j .text boot/bin/kernel.img boot/bin/kernel.bin
	cat boot/bin/boot.bin boot/bin/kernel.bin > os.img

clear:
	rm -f boot/boot.img

run:
	qemu-system-x86_64 -vga virtio -drive format=raw,file=os.img
EDIT 2: Minimum reproducible example can be found here: https://drive.google.com/drive/folders/1Z5wlXrtRbXQ1wNhBU2_YsSMovaGe7DvB?usp=sharing
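One pattern worth trying (an aside, not a confirmed diagnosis): a large table declared inside the function without `static const` is an automatic array that the compiler must initialize on the stack at every call (often via an implicit memcpy from a hidden template in a data section), while `static const` keeps a single copy in a read-only data section, which the `objcopy -O binary -j .text` step would then also need to preserve. A tiny hosted sketch of the idiom, with a made-up two-entry table:

```c
#include <assert.h>
#include <stdint.h>

/* static const: one copy in .rodata, no per-call stack initialization.
 * Tiny illustrative table, not a real font. */
static uint8_t font_row(int index, int y)
{
    static const uint8_t chars[][8] = {
        { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, /* index 0: blank */
        { 0x18, 0x3C, 0x66, 0x66, 0x7E, 0x66, 0x66, 0x00 }, /* index 1: 'A'-like glyph */
    };
    return chars[index][y];
}
```

In a freestanding kernel linked with only `.text` kept, the placement of that `.rodata` section (and whether objcopy keeps it) is exactly the kind of thing to verify with `objdump -h`.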
-
VBE: why does my code not provide a linear frame buffer?
I am a beginner who is trying to implement simple graphics in VBE. I have written the following assembly code to boot, enter 32-bit protected mode, and enter VBE mode 0x4117. (I was told that [mode] OR 0x4000 would produce a version of the mode with a linear frame buffer, so I assumed that 0x0117 OR 0x4000 = 0x4117 should have a linear frame buffer.)
[org 0x7c00]                ; Origin is same as addr of MBR.
[bits 16]

section code

switch:
    mov ax, 0x4f01          ; Querying VBE.
    mov cx, 0x4117          ; We want 0x117 mode graphics,
                            ; i.e. 16 bits per pixel, 1024x768 res.
    mov bx, 0x0800          ; Offset for VBE info structure.
    mov es, bx
    mov di, 0x00
    int 0x10                ; Graphics interrupt.

    ; Make the switch to graphics mode.
    mov ax, 0x4f02          ; What VBE service is wanted?
                            ; 0x4f02 for actual switching.
    mov bx, 0x4117
    int 0x10

    ; Zero out registers.
    xor ax, ax
    mov ds, ax
    mov es, ax

    ; Here, we call interrupt 13H to read from hard disk.
    mov bx, 0x1000          ; Location where code is loaded from disk.
    mov ah, 0x02            ; Selects the 13H service, in this case
                            ; reading sectors from drive.
    mov al, 30              ; Num sectors to read from hard disk.
                            ; We'll make this larger the bigger our OS gets.
    mov ch, 0x00            ; Where is cylinder?
    mov dh, 0x00            ; Where is head?
    mov cl, 0x02            ; Sector.
    int 0x13                ; Call interrupt corresponding to disk services.

    cli                     ; Turn off interrupts.
    lgdt [gdt_descriptor]   ; Load global descriptor table.
    mov eax, cr0
    or eax, 0x1
    mov cr0, eax            ; Make switch.
    jmp code_seg:protected_start

text:
    db "Jesus said I will rebuild this temple in three days. I could make a compiler in 3 days. - Terry A. Davis", 0

[bits 32]
protected_start:
    mov ax, data_seg        ; Loads the data segment start ptr from GDT,
    mov ds, ax              ; and set data segment start in program equal.
    mov ss, ax              ; Set stack segment.
    mov es, ax              ; Set extra segment.
    mov fs, ax              ; Set fs (seg. w/ no specific use).
    mov gs, ax              ; Set gs (seg. w/ no specific use).
    mov ebp, 0x90000        ; Update stack ptr to where it's expected.
    mov esp, ebp
    call 0x1000             ; Call kernel code which was loaded into 0x1000.
    jmp $

gdt_begin:
gdt_null_descriptor:        ; Null descriptor. Unclear why this is needed.
    dd 0x00
    dd 0x00
gdt_code_seg:
    dw 0xffff               ; Limit of code segment.
    dw 0x00                 ; Base of code segment.
    db 0x00                 ; Base of code segment (con.).
    db 10011010b            ; Access byte of form:
                            ; - Present (1) - 1 for valid segment.
                            ; - Privl (2) - 0 for kernel.
                            ; - S (1) - 1 for code/data segment.
                            ; - Ex (1) - 1 for code segment.
                            ; - Direction bit (1) - 0 for upward growth.
                            ; - RW (1) - 1 for read/writable.
                            ; - Ac (1) - 0 to indicate not accessed yet.
    db 11001111b            ; Split byte.
                            ; - Upper 4 bits are limit (con.), another 0xf.
                            ; - Lower 4 bits are flags in order of:
                            ;   - Gr - 1 for 4KiB page granularity.
                            ;   - Sz - 1 for 32-bit protected mode.
                            ;   - L - 0, since we aren't in long mode.
                            ;   - Reserved bit.
    db 0x00                 ; Base of code segment (con.).
gdt_data_seg:
    dw 0xffff               ; Limit of data segment.
    dw 0x00                 ; Base of data segment.
    db 0x00                 ; Base of data segment (con.).
    db 10010010b            ; Access byte.
                            ; Same as for code segment but Ex=0 for data seg.
    db 11001111b            ; Split byte, same as for code segment.
    db 0x00                 ; Base of data segment (con.).
gdt_end:

gdt_descriptor:
    dw gdt_end - gdt_begin - 1 ; GDT limit.
    dd gdt_begin               ; GDT base.

code_seg equ gdt_code_seg - gdt_begin
data_seg equ gdt_data_seg - gdt_begin

times 510 - ($ - $$) db 0x00 ; Pads file w/ 0s until it reaches 512 bytes.
db 0x55
db 0xaa
The above calls "kernel_entry.asm", shown below:
[bits 32]
START:
[extern start]
    call start   ; Call kernel func from C file.
    jmp $        ; Infinite loop.
"kernel_entry.asm", in turn, calls my main.c file:
#define PACK_RGB565(r, g, b) \
    (((((r) >> 3) & 0x1f) << 11) | \
     ((((g) >> 2) & 0x3f) << 5) | \
     (((b) >> 3) & 0x1f))

typedef struct VbeInfoBlockStruct {
    unsigned short mode_attribute_;
    unsigned char win_a_attribute_;
    unsigned char win_b_attribute_;
    unsigned short win_granuality_;
    unsigned short win_size_;
    unsigned short win_a_segment_;
    unsigned short win_b_segment_;
    unsigned int win_func_ptr_;
    unsigned short bytes_per_scan_line_;
    unsigned short x_resolution_;
    unsigned short y_resolution_;
    unsigned char char_x_size_;
    unsigned char char_y_size_;
    unsigned char number_of_planes_;
    unsigned char bits_per_pixel_;
    unsigned char number_of_banks_;
    unsigned char memory_model_;
    unsigned char bank_size_;
    unsigned char number_of_image_pages_;
    unsigned char b_reserved_;
    unsigned char red_mask_size_;
    unsigned char red_field_position_;
    unsigned char green_mask_size_;
    unsigned char green_field_position_;
    unsigned char blue_mask_size_;
    unsigned char blue_field_position_;
    unsigned char reserved_mask_size_;
    unsigned char reserved_field_position_;
    unsigned char direct_color_info_;
    unsigned int screen_ptr_;
} VbeInfoBlock;

// VBE Info block will be located at this address at boot time.
#define VBE_INFO_ADDR 0x8000

int start() {
    VbeInfoBlock *gVbe = (VbeInfoBlock*) VBE_INFO_ADDR;
    for (int i = 0; i < gVbe->y_resolution_; ++i) {
        for (int j = 0; j < gVbe->x_resolution_; ++j) {
            unsigned long offset = i * gVbe->y_resolution_ + j;
            *((unsigned short*) gVbe->screen_ptr_ + offset) = PACK_RGB565(0, i, j);
        }
    }
}
If I had correctly loaded a linear frame buffer, I would expect to see a gradation. Instead, I see this:
A series of boxes, each containing a gradation that is abruptly cut off. This seems to indicate that I'm writing in a mode with a banked frame buffer instead of a linear one: the gradient runs off the end of one bank, continues for several hundred iterations, and eventually reaches the start of the next, causing the abrupt shift and the "boxes" effect.
Is my interpretation correct? Have I correctly loaded a linear frame buffer, and, if not, how could I do so?
EDIT: I have tried changing unsigned long offset = i * gVbe->y_resolution_ + j; to unsigned long offset = i * gVbe->bytes_per_scan_line_ + j, as Jester suggested below. This produced the following image. It is similarly boxy.
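For reference, with a 16 bpp mode the byte offset of pixel (x, y) is y * bytes_per_scan_line + x * 2; the x term also needs scaling by the pixel size, and the buffer is then addressed in 16-bit units. A hosted sketch combining that with the RGB565 packing from the question, using an ordinary array in place of the framebuffer:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PACK_RGB565(r, g, b) \
    ((uint16_t)(((((r) >> 3) & 0x1F) << 11) | \
                ((((g) >> 2) & 0x3F) << 5)  | \
                (((b) >> 3) & 0x1F)))

static void put_pixel16(uint8_t *fb, uint32_t pitch, int x, int y, uint16_t color)
{
    /* pitch is bytes_per_scan_line; each pixel occupies 2 bytes at 16 bpp */
    *(uint16_t *)(fb + (size_t)y * pitch + (size_t)x * 2) = color;
}
```

Note this only addresses the offset math; whether the mode actually came up with a linear framebuffer still has to be confirmed from the mode attributes returned by the VBE 0x4F01 query.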