- USB: documentation.
- Yeah. MD/LVM should really be fixed this time.
- SH architecture update
- i810 RNG driver update
- IDE-PCI: make sure we initialize the chipsets correctly
- VIA IDE driver fixes
- VM balancing, part 53761 of 798321
- SCHED_YIELD cleanups
@@ -5,6 +5,7 @@
order by last name). I'm sure this list should be longer; it's difficult to maintain, so add yourself with a patch if desired.
Georg Acher <acher@informatik.tu-muenchen.de>
+ David Brownell <dbrownell@users.sourceforge.net>
Alan Cox <alan@lxorguk.ukuu.org.uk>
Randy Dunlap <randy.dunlap@intel.com>
Johannes Erdfelt <jerdfelt@sventech.com>
@@ -26,6 +26,7 @@
You only have to read/write the data from/to the buffers in the completion handler; the continuous streaming itself is transparently done by the
URB-machinery.
+
1.2. The URB structure
typedef struct urb
@@ -68,6 +69,7 @@ typedef struct urb
	iso_packet_descriptor_t iso_frame_desc[0];
} urb_t, *purb_t;
+
1.3. How to get an URB?
URBs are allocated with the following call
@@ -85,6 +87,7 @@ To free an URB, use
This call also may free internal (host controller specific) memory in the
future.
+
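As a rough sketch of the allocate/free pairing this section describes
(assuming the 2.4-era usb_alloc_urb() entry point; the free call is the
free_urb() mentioned in 1.6 below, spelled usb_free_urb() in some trees):

	#include <linux/errno.h>
	#include <linux/usb.h>

	static int example_alloc_free(void)
	{
		urb_t *urb = usb_alloc_urb(0);	/* 0 = no ISO packet descriptors */

		if (!urb)
			return -ENOMEM;
		/* ... fill in and submit the URB here ... */
		usb_free_urb(urb);	/* may also free HCD-internal memory */
		return 0;
	}
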
1.4. What has to be filled in?
Depending on the type of transaction, there are some macros
@@ -107,6 +110,7 @@ AFTER the URB re-submission.
You can get it the other way around by setting USB_URB_EARLY_COMPLETE in transfer_flags. This is implicit for
INT transfers.
+
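To make the fill macros concrete, a hedged sketch for a bulk read
(FILL_BULK_URB() is the 2.4-era helper; my_complete() is a hypothetical
handler, sketched under 1.7 below):

	#include <linux/usb.h>

	static void my_complete(urb_t *urb);	/* see the sketch in 1.7 */

	static void example_fill_bulk(urb_t *urb, struct usb_device *dev,
				      void *buf, int len, void *context)
	{
		FILL_BULK_URB(urb, dev,
			      usb_rcvbulkpipe(dev, 1),	/* bulk IN, endpoint 1 */
			      buf, len,
			      my_complete, context);
	}
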
1.5. How to submit an URB?
Just call
@@ -133,9 +137,12 @@ execution cannot be scheduled later than 900 frames from the 'now'-time.
The same applies to INT transfers, but here the seamless continuation is
independent of the transfer flags (implicitly ASAP).
+
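A minimal submission sketch (usb_submit_urb() is the 2.4-era entry point;
the status codes it may return are listed in the error-code table further
down):

	#include <linux/kernel.h>
	#include <linux/usb.h>

	static int example_submit(urb_t *urb)
	{
		int status = usb_submit_urb(urb);

		if (status)	/* e.g. -ENOSPC if bus bandwidth is exhausted */
			printk(KERN_ERR "example: submit failed (%d)\n", status);
		return status;
	}
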
1.6. How to cancel an already running URB?
-Call
+For an URB which you've submitted, but which hasn't been returned to
+your driver by the host controller, call
+
int unlink_urb(purb_t purb)
It removes the urb from the internal list and frees all allocated
@@ -143,6 +150,13 @@ HW descriptors. The status is changed to USB_ST_URB_KILLED.
After unlink_urb() returns, you can safely free the URB with free_urb(urb)
and all other possibly associated data (urb->context etc.)
+There is also an asynchronous unlink mode. To use this, set the
+USB_ASYNC_UNLINK flag in urb->transfer_flags before calling
+usb_unlink_urb(). When using async unlinking, the URB will not
+normally have been unlinked by the time unlink_urb() returns. Instead,
+wait for the completion handler to be called.
+
+
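For illustration, a hedged sketch of the asynchronous variant (flag and
call names as used above):

	#include <linux/usb.h>

	static void example_async_unlink(urb_t *urb)
	{
		urb->transfer_flags |= USB_ASYNC_UNLINK;
		usb_unlink_urb(urb);	/* returns quickly; the completion
					 * handler later sees urb->status
					 * == -ECONNRESET */
	}
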
1.7. What about the completion handler?
The completion handler is optional, but useful for fast data processing
@@ -158,6 +172,16 @@ In the completion handler, you should have a look at urb->status to
detect any USB errors. Since the context parameter is included in the URB,
you can pass information to the completion handler.
+Avoid using the urb->dev field in your completion handler; it's cleared
+as part of URB unlinking. Instead, use urb->context to hold all the
+data your driver needs.
+
+Also, NEVER SLEEP IN A COMPLETION HANDLER. These are normally called
+during hardware interrupt processing. If you can, defer substantial
+work to a tasklet (bottom half) to keep system latencies low. You'll
+probably need to use spinlocks to protect data structures you manipulate
+in completion handlers.
+
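Putting the advice above together, a sketch of a handler that keeps all its
state behind urb->context and defers work to a tasklet (struct my_dev and
my_complete() are hypothetical; process-context code touching the same data
should use spin_lock_irqsave):

	#include <linux/interrupt.h>
	#include <linux/spinlock.h>
	#include <linux/usb.h>

	struct my_dev {				/* reached via urb->context */
		spinlock_t		lock;
		int			last_status;
		struct tasklet_struct	rx_tasklet;	/* heavy work goes here */
	};

	static void my_complete(urb_t *urb)	/* interrupt context: never sleep */
	{
		struct my_dev *md = (struct my_dev *) urb->context;

		spin_lock(&md->lock);	/* plain lock: we're in irq context */
		md->last_status = urb->status;
		spin_unlock(&md->lock);

		tasklet_schedule(&md->rx_tasklet);	/* real work done later */
	}
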
1.8. How to do isochronous (ISO) transfers?
@@ -184,11 +208,13 @@ queuing more than one ISO frame with ASAP to the same device&endpoint
results in seamless ISO streaming. For continuous streaming you have to use URB
linking.
+
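A hand-filled ISO sketch (2.4 has no fill helper for ISO URBs; FRAMES and
PSIZE are hypothetical, and the URB must have been allocated with
usb_alloc_urb(FRAMES) so that iso_frame_desc[] is large enough):

	#include <linux/usb.h>

	#define FRAMES	10	/* packets per URB (hypothetical) */
	#define PSIZE	64	/* bytes per packet (hypothetical) */

	static int example_start_iso(urb_t *urb, struct usb_device *dev,
				     void *buf, void *context)
	{
		int i;

		urb->dev = dev;
		urb->pipe = usb_rcvisocpipe(dev, 1);	/* ISO IN, endpoint 1 */
		urb->transfer_flags = USB_ISO_ASAP;	/* next possible frame */
		urb->transfer_buffer = buf;
		urb->transfer_buffer_length = FRAMES * PSIZE;
		urb->number_of_packets = FRAMES;
		for (i = 0; i < FRAMES; i++) {
			urb->iso_frame_desc[i].offset = i * PSIZE;
			urb->iso_frame_desc[i].length = PSIZE;
		}
		urb->complete = my_complete;	/* handler from the 1.7 sketch */
		urb->context = context;
		return usb_submit_urb(urb);
	}
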
1.9. How to start interrupt (INT) transfers?
-INT transfers are currently implemented with 8 different queues for intervals
-for 1, 2, 4,... 128ms. Only one TD is allocated for each interrupt. After
-calling the completion handler, the TD is recycled.
+INT transfers are currently implemented with different queues for intervals
+of 1, 2, 4, ... 128 ms. Only one URB is allocated for each interrupt. After
+calling the completion handler, that URB is recycled by the host controller
+driver (HCD).
With the submission of one URB, the interrupt is scheduled until it is
canceled by unlink_urb.
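For example (FILL_INT_URB() is the 2.4-era helper; the interval is in ms and
gets rounded to one of the queues above):

	#include <linux/usb.h>

	static int example_start_int(urb_t *urb, struct usb_device *dev,
				     void *buf, int len, void *context)
	{
		FILL_INT_URB(urb, dev,
			     usb_rcvintpipe(dev, 1),	/* INT IN, endpoint 1 */
			     buf, len,
			     my_complete, context,
			     8);			/* poll every 8 ms */
		return usb_submit_urb(urb);	/* recurs until unlink_urb() */
	}
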
@@ -46,6 +46,9 @@ USB_ST_BANDWIDTH_ERROR -ENOSPC The host controller's bandwidth is already consumed and
this request would push it past its allowed limit.
+-ESHUTDOWN The host controller has been disabled due to some
+ problem that could not be worked around.
+
**************************************************************************
* Error codes returned by in urb->status *
@@ -93,6 +96,8 @@ USB_ST_PARTIAL_ERROR
USB_ST_URB_INVALID_ERROR
-EINVAL ISO madness, if this happens: Log off and go home
+-ECONNRESET the URB is being unlinked asynchronously
+
**************************************************************************
* Error codes returned by usbcore-functions *
* (expect also other submit and transfer status codes) *
@@ -176,7+176,7 @@ DRIVERS-$(CONFIG_IRDA) += drivers/net/irda/irda.o DRIVERS-$(CONFIG_I2C) += drivers/i2c/i2c.o
DRIVERS-$(CONFIG_PHONE) += drivers/telephony/telephony.o
DRIVERS-$(CONFIG_ACPI_INTERPRETER) += drivers/acpi/acpi.o
-DRIVERS-$(CONFIG_BLK_DEV_MD) += drivers/md/mddev.o
+DRIVERS-$(CONFIG_MD) += drivers/md/mddev.o
DRIVERS += $(DRIVERS-y)
@@ -26,8+26,8 @@ CONFIG_KMOD=y # CONFIG_M586 is not set
# CONFIG_M586TSC is not set
# CONFIG_M586MMX is not set
-CONFIG_M686=y
-# CONFIG_M686FXSR is not set
+# CONFIG_M686 is not set
+CONFIG_M686FXSR=y
# CONFIG_MK6 is not set
# CONFIG_MK7 is not set
# CONFIG_MCRUSOE is not set
@@ -44,6+44,8 @@ CONFIG_X86_TSC=y CONFIG_X86_GOOD_APIC=y
CONFIG_X86_PGE=y
CONFIG_X86_USE_PPRO_CHECKSUM=y
+CONFIG_X86_FXSR=y
+CONFIG_X86_XMM=y
# CONFIG_TOSHIBA is not set
# CONFIG_MICROCODE is not set
# CONFIG_X86_MSR is not set
@@ -51,7+53,6 @@ CONFIG_X86_USE_PPRO_CHECKSUM=y CONFIG_NOHIGHMEM=y
# CONFIG_HIGHMEM4G is not set
# CONFIG_HIGHMEM64G is not set
-# CONFIG_MATH_EMULATION is not set
# CONFIG_MTRR is not set
CONFIG_SMP=y
CONFIG_HAVE_DEC_LOCK=y
@@ -123,6+124,7 @@ CONFIG_BLK_DEV_FD=y #
# Multi-device support (RAID and LVM)
#
+# CONFIG_MD is not set
# CONFIG_BLK_DEV_MD is not set
# CONFIG_MD_LINEAR is not set
# CONFIG_MD_RAID0 is not set
@@ -648,6+648,7 @@ ENTRY(sys_call_table) .long SYMBOL_NAME(sys_madvise)
.long SYMBOL_NAME(sys_getdents64) /* 220 */
.long SYMBOL_NAME(sys_fcntl64)
+ .long SYMBOL_NAME(sys_ni_syscall) /* reserved for TUX */
/*
* NOTE!! This doesn't have to be exact - we just have
@@ -416,8+416,8 @@ inline void nmi_watchdog_tick(struct pt_regs * regs) * here too!]
*/
- static unsigned int last_irq_sums [NR_CPUS] = { 0, },
- alert_counter [NR_CPUS] = { 0, };
+ static unsigned int last_irq_sums [NR_CPUS],
+ alert_counter [NR_CPUS];
/*
* Since current-> is always on the stack, and we always switch
@@ -36,7+36,7 @@ piggy.o: $(SYSTEM) $(OBJCOPY) -R .empty_zero_page $(SYSTEM) $$tmppiggy; \
gzip -f -9 < $$tmppiggy > $$tmppiggy.gz; \
echo "SECTIONS { .data : { input_len = .; LONG(input_data_end - input_data) input_data = .; *(.data) input_data_end = .; }}" > $$tmppiggy.lnk; \
- $(LD) -r -o piggy.o -b binary $$tmppiggy.gz -b elf32-shl -T $$tmppiggy.lnk; \
+ $(LD) -r -o piggy.o -b binary $$tmppiggy.gz -b elf32-sh-linux -T $$tmppiggy.lnk; \
rm -f $$tmppiggy $$tmppiggy.gz $$tmppiggy.lnk
clean:
#include <linux/linkage.h>
#include <linux/config.h>
-#define COMPAT_OLD_SYSCALL_ABI 1
+
+/*
+ * Define this to turn on compatibility with the previous
+ * system call ABI. This feature is not properly maintained.
+ */
+#undef COMPAT_OLD_SYSCALL_ABI
! NOTE:
! GNU as (as of 2.9.1) changes bf/s into bt/s and bra, when the address
* NOTE: This code handles signal-recognition, which happens every time
* after a timer-interrupt and after each system call.
*
+ * NOTE: This code uses a convention that instructions in the delay slot
+ * of a transfer-control instruction are indented by an extra space, thus:
+ *
+ * jmp @$k0 ! control-transfer instruction
+ * ldc $k1, $ssr ! delay slot
+ *
* Stack layout in 'ret_from_syscall':
* ptrace needs to have all regs on the stack.
* if the order here is changed, it needs to be
@@ -58,6+69,7 @@ PT_TRACESYS = 0x00000002 PF_USEDFPU = 0x00100000
ENOSYS = 38
+EINVAL = 22
#if defined(__sh3__)
TRA = 0xffffffd0
@@ -76,7+88,14 @@ MMU_TEA = 0xff00000c ! TLB Exception Address Register #endif
/* Offsets to the stack */
-R0 = 0 /* Return value */
+R0 = 0 /* Return value. New ABI also arg4 */
+R1 = 4 /* New ABI: arg5 */
+R2 = 8 /* New ABI: arg6 */
+R3 = 12 /* New ABI: syscall_nr */
+R4 = 16 /* New ABI: arg0 */
+R5 = 20 /* New ABI: arg1 */
+R6 = 24 /* New ABI: arg2 */
+R7 = 28 /* New ABI: arg3 */
SP = (15*4)
SR = (16*4+8)
SYSCALL_NR = (16*4+6*4)
@@ -132,7+151,6 @@ SYSCALL_NR = (16*4+6*4) tlb_miss_load:
mov.l 2f, $r0
mov.l @$r0, $r6
- STI()
mov $r15, $r4
mov.l 1f, $r0
jmp @$r0
@@ -142,7+160,6 @@ tlb_miss_load: tlb_miss_store:
mov.l 2f, $r0
mov.l @$r0, $r6
- STI()
mov $r15, $r4
mov.l 1f, $r0
jmp @$r0
@@ -152,7+169,6 @@ tlb_miss_store: initial_page_write:
mov.l 2f, $r0
mov.l @$r0, $r6
- STI()
mov $r15, $r4
mov.l 1f, $r0
jmp @$r0
@@ -162,7+178,6 @@ initial_page_write: tlb_protection_violation_load:
mov.l 2f, $r0
mov.l @$r0, $r6
- STI()
mov $r15, $r4
mov.l 1f, $r0
jmp @$r0
@@ -172,14+187,13 @@ tlb_protection_violation_load: tlb_protection_violation_store:
mov.l 2f, $r0
mov.l @$r0, $r6
- STI()
mov $r15, $r4
mov.l 1f, $r0
jmp @$r0
mov #1, $r5
.align 2
-1: .long SYMBOL_NAME(do_page_fault)
+1: .long SYMBOL_NAME(__do_page_fault)
2: .long MMU_TEA
#if defined(CONFIG_DEBUG_KERNEL_WITH_GDB_STUB) || defined(CONFIG_SH_STANDARD_BIOS)
@@ -249,9+263,6 @@ error: .align 2
1: .long SYMBOL_NAME(do_exception_error)
-badsys: mov #-ENOSYS, $r0
- rts ! go to ret_from_syscall..
- mov.l $r0, @(R0,$r15)
!
!
@@ -291,7+302,7 @@ ENTRY(ret_from_fork) */
system_call:
- mov.l 1f, $r9
+ mov.l __TRA, $r9
mov.l @$r9, $r8
!
! Is the trap argument >= 0x20? (TRA will be >= 0x80)
@@ -304,122+315,160 @@ system_call: mov #SYSCALL_NR, $r14
add $r15, $r14
!
- mov #0x40, $r9
#ifdef COMPAT_OLD_SYSCALL_ABI
+ mov #0x40, $r9
cmp/hs $r9, $r8
- mov $r0, $r10
- bf/s 0f
- mov $r0, $r9
+ bf/s old_abi_system_call
+ nop
#endif
! New Syscall ABI
add #-0x40, $r8
shlr2 $r8
shll8 $r8
- shll8 $r8
+ shll8 $r8 ! $r8 = num_args<<16
mov $r3, $r10
or $r8, $r10 ! Encode syscall # and # of arguments
- !
- mov $r3, $r9
- mov #0, $r8
-0:
mov.l $r10, @$r14 ! set syscall_nr
STI()
- mov.l __n_sys, $r10
- cmp/hs $r10, $r9
- bt badsys
!
-#ifdef COMPAT_OLD_SYSCALL_ABI
- ! Build the stack frame if TRA > 0
- mov $r8, $r10
- cmp/pl $r10
- bf 0f
- mov.l @(SP,$r15), $r0 ! get original stack
-7: add #-4, $r10
-4: mov.l @($r0,$r10), $r1 ! May cause address error exception..
- mov.l $r1, @-$r15
- cmp/pl $r10
- bt 7b
-#endif
-0: stc $k_current, $r11
- mov.l @(tsk_ptrace,$r11), $r10 ! Is it trace?
+ stc $k_current, $r11
+ mov.l @(tsk_ptrace,$r11), $r10 ! Is current PTRACE_SYSCALL'd?
mov #PT_TRACESYS, $r11
tst $r11, $r10
bt 5f
- ! Trace system call
- mov #-ENOSYS, $r11
- mov.l $r11, @(R0,$r15)
- ! Push up $R0--$R2, and $R4--$R7
- mov.l $r0, @-$r15
- mov.l $r1, @-$r15
- mov.l $r2, @-$r15
- mov.l $r4, @-$r15
- mov.l $r5, @-$r15
- mov.l $r6, @-$r15
- mov.l $r7, @-$r15
- !
- mov.l 2f, $r11
- jsr @$r11
+ ! Yes it is traced.
+ mov.l __syscall_trace, $r11 ! Call syscall_trace() which notifies
+ jsr @$r11 ! superior (will chomp $R[0-7])
nop
- ! Pop down $R0--$R2, and $R4--$R7
- mov.l @$r15+, $r7
- mov.l @$r15+, $r6
- mov.l @$r15+, $r5
- mov.l @$r15+, $r4
- mov.l @$r15+, $r2
- mov.l @$r15+, $r1
- mov.l @$r15+, $r0
- !
+ ! Reload $R0-$R4 from kernel stack, where the
+ ! parent may have modified them using
+ ! ptrace(POKEUSR). (Note that $R0-$R2 are
+ ! used by the system call handler directly
+ ! from the kernel stack anyway, so don't need
+ ! to be reloaded here.) This allows the parent
+ ! to rewrite system calls and args on the fly.
+ mov.l @(R4,$r15), $r4 ! arg0
+ mov.l @(R5,$r15), $r5
+ mov.l @(R6,$r15), $r6
+ mov.l @(R7,$r15), $r7 ! arg3
+ mov.l @(R3,$r15), $r3 ! syscall_nr
+ ! Arrange for syscall_trace() to be called
+ ! again as the system call returns.
mov.l __syscall_ret_trace, $r10
bra 6f
lds $r10, $pr
- !
+ ! No it isn't traced.
+ ! Arrange for normal system call return.
5: mov.l __syscall_ret, $r10
lds $r10, $pr
- !
-6: mov $r9, $r10
- shll2 $r10 ! x4
+ ! Call the system call handler through the table.
+ ! (both normal and ptrace'd)
+ ! First check for bad syscall number
+6: mov $r3, $r9
+ mov.l __n_sys, $r10
+ cmp/hs $r10, $r9
+ bf 2f
+ ! Bad syscall number
+ rts ! go to syscall_ret or syscall_ret_trace
+ mov #-ENOSYS, $r0
+ ! Good syscall number
+2: shll2 $r9 ! x4
mov.l __sct, $r11
- add $r11, $r10
- mov.l @$r10, $r11
- jmp @$r11
+ add $r11, $r9
+ mov.l @$r9, $r11
+ jmp @$r11 ! jump to specific syscall handler
nop
! In case of trace
- .align 2
-3:
-#ifdef COMPAT_OLD_SYSCALL_ABI
- add $r8, $r15 ! pop off the arguments
-#endif
+syscall_ret_trace:
mov.l $r0, @(R0,$r15) ! save the return value
- mov.l 2f, $r1
+ mov.l __syscall_trace, $r1
mova SYMBOL_NAME(ret_from_syscall), $r0
- jmp @$r1
- lds $r0, $pr
- .align 2
-1: .long TRA
-2: .long SYMBOL_NAME(syscall_trace)
-__n_sys: .long NR_syscalls
-__sct: .long SYMBOL_NAME(sys_call_table)
-__syscall_ret_trace:
- .long 3b
-__syscall_ret:
- .long SYMBOL_NAME(syscall_ret)
+ jmp @$r1 ! Call syscall_trace() which notifies superior
+ lds $r0, $pr ! Then return to ret_from_syscall()
+
+
#ifdef COMPAT_OLD_SYSCALL_ABI
+! Handle old ABI system call.
+! Note that ptrace(SYSCALL) is not supported for the old ABI.
+! At this point:
+! $r0, $r4-7 as per ABI
+! $r8 = value of TRA register (= num_args<<2)
+! $r14 = points to SYSCALL_NR in stack frame
+old_abi_system_call:
+ mov $r0, $r9 ! Save system call number in $r9
+ ! ! arrange for return which pops stack
+ mov.l __old_abi_syscall_ret, $r10
+ lds $r10, $pr
+ ! Build the stack frame if TRA > 0
+ mov $r8, $r10
+ cmp/pl $r10
+ bf 0f
+ mov.l @(SP,$r15), $r0 ! get original user stack
+7: add #-4, $r10
+4: mov.l @($r0,$r10), $r1 ! May cause address error exception..
+ mov.l $r1, @-$r15
+ cmp/pl $r10
+ bt 7b
+0:
+ mov.l $r9, @$r14 ! set syscall_nr
+ STI()
+ ! Call the system call handler through the table.
+ ! First check for bad syscall number
+ mov.l __n_sys, $r10
+ cmp/hs $r10, $r9
+ bf 2f
+ ! Bad syscall number
+ rts ! return to old_abi_syscall_ret
+ mov #-ENOSYS, $r0
+ ! Good syscall number
+2: shll2 $r9 ! x4
+ mov.l __sct, $r11
+ add $r11, $r9
+ mov.l @$r9, $r11
+ jmp @$r11 ! call specific syscall handler,
+ nop
+
+ .align 2
+__old_abi_syscall_ret:
+ .long old_abi_syscall_ret
+
+ ! This code gets called on address error exception when copying
+ ! syscall arguments from user stack to kernel stack. It is
+ ! supposed to return -EINVAL through old_abi_syscall_ret, but it
+ ! appears to have been broken for a long time in that the $r0
+ ! return value will be saved into the kernel stack relative to $r15
+ ! but the value of $r15 is not correct partway through the loop.
+ ! So the user prog is returned its old $r0 value, not -EINVAL.
+ ! Greg Banks 28 Aug 2000.
.section .fixup,"ax"
fixup_syscall_argerr:
+ ! First get $r15 back to
rts
- mov.l 1f, $r0
-1: .long -22 ! -EINVAL
-.previous
+ mov #-EINVAL, $r0
+ .previous
.section __ex_table, "a"
.align 2
.long 4b,fixup_syscall_argerr
-.previous
+ .previous
#endif
.align 2
+__TRA: .long TRA
+__syscall_trace:
+ .long SYMBOL_NAME(syscall_trace)
+__n_sys:.long NR_syscalls
+__sct: .long SYMBOL_NAME(sys_call_table)
+__syscall_ret_trace:
+ .long syscall_ret_trace
+__syscall_ret:
+ .long SYMBOL_NAME(syscall_ret)
+
+
+
+ .align 2
reschedule:
mova SYMBOL_NAME(ret_from_syscall), $r0
mov.l 1f, $r1
@@ -454,10+503,12 @@ __INV_IMASK: .long 0xffffff0f ! ~(IMASK)
.align 2
-syscall_ret:
#ifdef COMPAT_OLD_SYSCALL_ABI
+old_abi_syscall_ret:
add $r8, $r15 ! pop off the arguments
+ /* fall through */
#endif
+syscall_ret:
mov.l $r0, @(R0,$r15) ! save the return value
/* fall through */
@@ -707,7+758,7 @@ handle_exception: #endif
8: /* User space to kernel */
mov #0x20, $k1
- shll8 $k1 ! $k1 <= 8192
+ shll8 $k1 ! $k1 <= 8192 == THREAD_SIZE
add $current, $k1
mov $k1, $r15 ! change to kernel stack
!
@@ -1107,6+1158,7 @@ ENTRY(sys_call_table) .long SYMBOL_NAME(sys_mincore)
.long SYMBOL_NAME(sys_madvise)
.long SYMBOL_NAME(sys_getdents64) /* 220 */
+ .long SYMBOL_NAME(sys_fcntl64)
/*
* NOTE!! This doesn't have to be exact - we just have
@@ -1114,7+1166,7 @@ ENTRY(sys_call_table) * entries. Don't panic if you notice that this hasn't
* been shrunk every time we add a new system call.
*/
- .rept NR_syscalls-220
+ .rept NR_syscalls-221
.long SYMBOL_NAME(sys_ni_syscall)
.endr
@@ -21,9+21,9 @@ ENTRY(empty_zero_page) .long 0x00360000 /* INITRD_START */
.long 0x000a0000 /* INITRD_SIZE */
.long 0
+ .balign 4096,0,4096
.text
- .balign 4096,0,4096
/*
* Condition at the entry of _stext:
*
/*
- * linux/arch/sh/kernel/io_generic.c
+ * linux/arch/sh/kernel/io.c
*
* Copyright (C) 2000 Stuart Menefy
*
@@ -41,7+41,7 @@ static void end_imask_irq(unsigned int irq);
static unsigned int startup_imask_irq(unsigned int irq)
{
- enable_imask_irq(irq);
+ /* Nothing to do */
return 0; /* never anything pending */
}
@@ -71,7+71,8 @@ void static inline set_interrupt_registers(int ip) "ldc %0, $sr\n"
"1:"
: "=&z" (__dummy)
- : "r" (~0xf0), "r" (ip << 4));
+ : "r" (~0xf0), "r" (ip << 4)
+ : "t");
}
static void disable_imask_irq(unsigned int irq)
@@ -103,7+104,7 @@ static void end_imask_irq(unsigned int irq)
static void shutdown_imask_irq(unsigned int irq)
{
- disable_imask_irq(irq);
+ /* Nothing to do */
}
void make_imask_irq(unsigned int irq)
@@ -128,12+128,14 @@ void __init init_IRQ(void) #ifdef SCIF_ERI_IRQ
make_ipr_irq(SCIF_ERI_IRQ, SCIF_IPR_ADDR, SCIF_IPR_POS, SCIF_PRIORITY);
make_ipr_irq(SCIF_RXI_IRQ, SCIF_IPR_ADDR, SCIF_IPR_POS, SCIF_PRIORITY);
+ make_ipr_irq(SCIF_BRI_IRQ, SCIF_IPR_ADDR, SCIF_IPR_POS, SCIF_PRIORITY);
make_ipr_irq(SCIF_TXI_IRQ, SCIF_IPR_ADDR, SCIF_IPR_POS, SCIF_PRIORITY);
#endif
#ifdef IRDA_ERI_IRQ
make_ipr_irq(IRDA_ERI_IRQ, IRDA_IPR_ADDR, IRDA_IPR_POS, IRDA_PRIORITY);
make_ipr_irq(IRDA_RXI_IRQ, IRDA_IPR_ADDR, IRDA_IPR_POS, IRDA_PRIORITY);
+ make_ipr_irq(IRDA_BRI_IRQ, IRDA_IPR_ADDR, IRDA_IPR_POS, IRDA_PRIORITY);
make_ipr_irq(IRDA_TXI_IRQ, IRDA_IPR_ADDR, IRDA_IPR_POS, IRDA_PRIORITY);
#endif
@@ -136,11+136,12 @@ void free_task_struct(struct task_struct *p) */
int kernel_thread(int (*fn)(void *), void * arg, unsigned long flags)
{ /* Don't use this in BL=1(cli). Or else, CPU resets! */
- register unsigned long __sc0 __asm__ ("$r3") = __NR_clone;
- register unsigned long __sc4 __asm__ ("$r4") = (long) flags | CLONE_VM;
- register unsigned long __sc5 __asm__ ("$r5") = 0;
- register unsigned long __sc8 __asm__ ("$r8") = (long) arg;
- register unsigned long __sc9 __asm__ ("$r9") = (long) fn;
+ register unsigned long __sc0 __asm__ ("r0");
+ register unsigned long __sc3 __asm__ ("r3") = __NR_clone;
+ register unsigned long __sc4 __asm__ ("r4") = (long) flags | CLONE_VM;
+ register unsigned long __sc5 __asm__ ("r5") = 0;
+ register unsigned long __sc8 __asm__ ("r8") = (long) arg;
+ register unsigned long __sc9 __asm__ ("r9") = (long) fn;
__asm__("trapa #0x12\n\t" /* Linux/SH system call */
"tst #0xff, $r0\n\t" /* child or parent? */
@@ -148,13+149,13 @@ int kernel_thread(int (*fn)(void *), void * arg, unsigned long flags) "jsr @$r9\n\t" /* call fn */
" mov $r8, $r4\n\t" /* push argument */
"mov $r0, $r4\n\t" /* return value to arg of exit */
- "mov %2, $r3\n\t" /* exit */
+ "mov %1, $r3\n\t" /* exit */
"trapa #0x11\n"
"1:"
: "=z" (__sc0)
- : "0" (__sc0), "i" (__NR_exit),
- "r" (__sc4), "r" (__sc5), "r" (__sc8), "r" (__sc9)
- : "memory");
+ : "i" (__NR_exit), "r" (__sc3), "r" (__sc4), "r" (__sc5),
+ "r" (__sc8), "r" (__sc9)
+ : "memory", "t");
return __sc0;
}
-/* $Id: setup_cqreek.c,v 1.1 2000/08/05 06:25:23 gniibe Exp $
+/* $Id: setup_cqreek.c,v 1.5 2000/09/18 05:51:24 gniibe Exp $
*
* arch/sh/kernel/setup_cqreek.c
*
@@ -44,15+44,24 @@ static unsigned long cqreek_port2addr(unsigned long port) return ISA_OFFSET + port;
}
+struct cqreek_irq_data {
+ unsigned short mask_port; /* Port of Interrupt Mask Register */
+ unsigned short stat_port; /* Port of Interrupt Status Register */
+ unsigned short bit; /* Value of the bit */
+};
+static struct cqreek_irq_data cqreek_irq_data[NR_IRQS];
+
static void disable_cqreek_irq(unsigned int irq)
{
unsigned long flags;
unsigned short mask;
+ unsigned short mask_port = cqreek_irq_data[irq].mask_port;
+ unsigned short bit = cqreek_irq_data[irq].bit;
save_and_cli(flags);
/* Disable IRQ */
- mask = inw(BRIDGE_ISA_INTR_MASK) & ~(1 << irq);
- outw_p(mask, BRIDGE_ISA_INTR_MASK);
+ mask = inw(mask_port) & ~bit;
+ outw_p(mask, mask_port);
restore_flags(flags);
}
@@ -60,32+69,29 @@ static void enable_cqreek_irq(unsigned int irq) {
unsigned long flags;
unsigned short mask;
+ unsigned short mask_port = cqreek_irq_data[irq].mask_port;
+ unsigned short bit = cqreek_irq_data[irq].bit;
save_and_cli(flags);
/* Enable IRQ */
- mask = inw(BRIDGE_ISA_INTR_MASK) | (1 << irq);
- outw_p(mask, BRIDGE_ISA_INTR_MASK);
+ mask = inw(mask_port) | bit;
+ outw_p(mask, mask_port);
restore_flags(flags);
}
-#define CLEAR_AT_ACCEPT
-
static void mask_and_ack_cqreek(unsigned int irq)
{
- inw(BRIDGE_ISA_INTR_STAT);
+ unsigned short stat_port = cqreek_irq_data[irq].stat_port;
+ unsigned short bit = cqreek_irq_data[irq].bit;
+
+ inw(stat_port);
disable_cqreek_irq(irq);
-#ifdef CLEAR_AT_ACCEPT
/* Clear IRQ (it might be edge IRQ) */
- outw_p((1<<irq), BRIDGE_ISA_INTR_STAT);
-#endif
+ outw_p(bit, stat_port);
}
static void end_cqreek_irq(unsigned int irq)
{
-#ifndef CLEAR_AT_ACCEPT
- /* Clear IRQ (it might be edge IRQ) */
- outw_p((1<<irq), BRIDGE_ISA_INTR_STAT);
-#endif
enable_cqreek_irq(irq);
}
@@ -101,7+107,7 @@ static void shutdown_cqreek_irq(unsigned int irq) }
static struct hw_interrupt_type cqreek_irq_type = {
- "CQREEK-IRQ",
+ "CqREEK-IRQ",
startup_cqreek_irq,
shutdown_cqreek_irq,
enable_cqreek_irq,
@@ -116,10+122,24 @@ static int has_ide, has_isa; What we really need is virtualized IRQ and demultiplexer like HP600 port */
void __init init_cqreek_IRQ(void)
{
- if (has_ide)
- make_ipr_irq(14, IDE_OFFSET+BRIDGE_IDE_INTR_LVL, 0, 0x0f-14);
+ if (has_ide) {
+ cqreek_irq_data[14].mask_port = BRIDGE_IDE_INTR_MASK;
+ cqreek_irq_data[14].stat_port = BRIDGE_IDE_INTR_STAT;
+ cqreek_irq_data[14].bit = 1;
+
+ irq_desc[14].handler = &cqreek_irq_type;
+ irq_desc[14].status = IRQ_DISABLED;
+ irq_desc[14].action = 0;
+ irq_desc[14].depth = 1;
+
+ disable_cqreek_irq(14);
+ }
if (has_isa) {
+ cqreek_irq_data[10].mask_port = BRIDGE_ISA_INTR_MASK;
+ cqreek_irq_data[10].stat_port = BRIDGE_ISA_INTR_STAT;
+ cqreek_irq_data[10].bit = (1 << 10);
+
/* XXX: Err... we may need demultiplexer for ISA irq... */
irq_desc[10].handler = &cqreek_irq_type;
irq_desc[10].status = IRQ_DISABLED;
@@ -135,10+155,17 @@ void __init init_cqreek_IRQ(void) */
void __init setup_cqreek(void)
{
+ extern void disable_hlt(void);
int i;
/* udelay is not available at setup time yet... */
#define DELAY() do {for (i=0; i<10000; i++) ctrl_inw(0xa0000000);} while(0)
+ /*
+ * XXX: I don't know the reason, but it becomes so fragile with
+ * "sleep", so we need to stop sleeping.
+ */
+ disable_hlt();
+
if ((inw (BRIDGE_FEATURE) & 1)) { /* We have IDE interface */
outw_p(0, BRIDGE_IDE_INTR_LVL);
outw_p(0, BRIDGE_IDE_INTR_MASK);
@@ -219,7+246,6 @@ struct sh_machine_vector mv_cqreek __initmv = { mv_init_arch: setup_cqreek,
mv_init_irq: init_cqreek_IRQ,
- mv_port2addr: cqreek_port2addr,
mv_isa_port2addr: cqreek_port2addr,
};
ALIAS_MV(cqreek)
-/* $Id: sh_bios.c,v 1.2 2000/07/26 04:37:32 gniibe Exp $
+/* $Id: sh_bios.c,v 1.3 2000/09/30 03:43:30 gniibe Exp $
*
* linux/arch/sh/kernel/sh_bios.c
* C interface for trapping into the standard LinuxSH BIOS.
static __inline__ long sh_bios_call(long func, long arg0, long arg1, long arg2, long arg3)
{
- register long r0 __asm__("$r0") = func;
- register long r4 __asm__("$r4") = arg0;
- register long r5 __asm__("$r5") = arg1;
- register long r6 __asm__("$r6") = arg2;
- register long r7 __asm__("$r7") = arg3;
+ register long r0 __asm__("r0") = func;
+ register long r4 __asm__("r4") = arg0;
+ register long r5 __asm__("r5") = arg1;
+ register long r6 __asm__("r6") = arg2;
+ register long r7 __asm__("r7") = arg3;
__asm__ __volatile__("trapa #0x3f"
: "=z" (r0)
: "0" (r0), "r" (r4), "r" (r5), "r" (r6), "r" (r7)
#include <asm/hardirq.h>
#include <asm/delay.h>
#include <asm/irq.h>
+#include <asm/pgtable.h>
extern void dump_thread(struct pt_regs *, struct user *);
extern int dump_fpu(elf_fpregset_t *);
@@ -35,7+36,35 @@ EXPORT_SYMBOL(csum_partial_copy); EXPORT_SYMBOL(strtok);
EXPORT_SYMBOL(strpbrk);
EXPORT_SYMBOL(strstr);
+EXPORT_SYMBOL(strlen);
+
+/* mem exports */
+EXPORT_SYMBOL(memcpy);
+EXPORT_SYMBOL(memset);
+EXPORT_SYMBOL(memmove);
+
+/* this is not provided by arch/sh/lib/*.S but is
+ potentially needed by modules (af_packet.o/unix.o
+ use memcmp, for instance) */
+EXPORT_SYMBOL(memcmp);
#ifdef CONFIG_VT
EXPORT_SYMBOL(screen_info);
#endif
+
+
+#define DECLARE_EXPORT(name) extern void name(void);EXPORT_SYMBOL_NOVERS(name)
+
+/* These symbols are generated by the compiler itself */
+#ifdef __SH4__
+
+DECLARE_EXPORT(__udivsi3_i4);
+DECLARE_EXPORT(__sdivsi3_i4);
+DECLARE_EXPORT(__movstr_i4_even);
+DECLARE_EXPORT(__movstr_i4_odd);
+DECLARE_EXPORT(__ashrdi3);
+DECLARE_EXPORT(__ashldi3);
+
+/* needed by some modules */
+EXPORT_SYMBOL(flush_dcache_page);
+#endif
@@ -342,7+342,8 @@ static __init unsigned int get_cpu_mhz(void) "bt/s 1b\n\t"
" add #1,%0"
: "=r"(count), "=z" (__dummy)
- : "0" (0), "1" (0));
+ : "0" (0), "1" (0)
+ : "t");
cli();
/*
* SH-3:
@@ -131,9+131,16 @@ void dump_stack(void)
asm("mov $r15, %0" : "=r" (start));
asm("stc $r7_bank, %0" : "=r" (end));
- end += 8192;
+ end += 8192/4;
printk("%08lx:%08lx\n", (unsigned long)start, (unsigned long)end);
- for (p=start; p < end; p++)
- printk("%08lx\n", *p);
+ for (p=start; p < end; p++) {
+ extern long _text, _etext;
+ unsigned long v=*p;
+
+ if ((v >= (unsigned long )&_text)
+ && (v <= (unsigned long )&_etext)) {
+ printk("%08lx\n", v);
+ }
+ }
}
@@ -159,14+159,14 @@ unsigned int csum_partial_copy_generic (const char *src, char *dst, int len, * them all but there's no guarantee.
*/
-#define SRC(y...) \
- 9999: y; \
+#define SRC(x,y) \
+ 9999: x,y; \
.section __ex_table, "a"; \
.long 9999b, 6001f ; \
.previous
-#define DST(y...) \
- 9999: y; \
+#define DST(x,y) \
+ 9999: x,y; \
.section __ex_table, "a"; \
.long 9999b, 6002f ; \
.previous
@@ -276,7+276,7 @@ DST( mov.l r0,@r5 ) DST( mov.l r1,@r5 )
add #4,r5
-SRC( mov.l @r4+,r0 )
+SRC( mov.l @r4+,r0 )
SRC( mov.l @r4+,r1 )
addc r0,r7
DST( mov.l r0,@r5 )
@@ -64,9+64,9 @@ static struct _cache_system_info cache_system_info = {0,}; #define CACHE_IC_WAY_SHIFT 13
#define CACHE_OC_ENTRY_SHIFT 5
#define CACHE_IC_ENTRY_SHIFT 5
-#define CACHE_OC_ENTRY_MASK 0x3fe0
-#define CACHE_OC_ENTRY_PHYS_MASK 0x0fe0
-#define CACHE_IC_ENTRY_MASK 0x1fe0
+#define CACHE_OC_ENTRY_MASK 0x3fe0
+#define CACHE_OC_ENTRY_PHYS_MASK 0x0fe0
+#define CACHE_IC_ENTRY_MASK 0x1fe0
#define CACHE_IC_NUM_ENTRIES 256
#define CACHE_OC_NUM_ENTRIES 512
#define CACHE_OC_NUM_WAYS 1
@@ -92,7+92,8 @@ static inline void cache_wback_all(void) addr = CACHE_OC_ADDRESS_ARRAY|(j<<CACHE_OC_WAY_SHIFT)|
(i<<CACHE_OC_ENTRY_SHIFT);
data = ctrl_inl(addr);
- if (data & CACHE_UPDATED) {
+ if ((data & (CACHE_UPDATED|CACHE_VALID))
+ == (CACHE_UPDATED|CACHE_VALID)) {
data &= ~CACHE_UPDATED;
ctrl_outl(data, addr);
}
@@ -114,17+115,25 @@ detect_cpu_and_cache_system(void) */
addr0 = CACHE_OC_ADDRESS_ARRAY + (3 << 12);
addr1 = CACHE_OC_ADDRESS_ARRAY + (1 << 12);
+
+ /* First, write back & invalidate */
data0 = ctrl_inl(addr0);
- data0 ^= 0x00000001;
- ctrl_outl(data0,addr0);
+ ctrl_outl(data0&~(CACHE_VALID|CACHE_UPDATED), addr0);
+ data1 = ctrl_inl(addr1);
+ ctrl_outl(data1&~(CACHE_VALID|CACHE_UPDATED), addr1);
+
+ /* Next, check if there's shadow or not */
+ data0 = ctrl_inl(addr0);
+ data0 ^= CACHE_VALID;
+ ctrl_outl(data0, addr0);
data1 = ctrl_inl(addr1);
- data2 = data1 ^ 0x00000001;
- ctrl_outl(data2,addr1);
+ data2 = data1 ^ CACHE_VALID;
+ ctrl_outl(data2, addr1);
data3 = ctrl_inl(addr0);
- /* Invaliate them, in case the cache has been enabled already. */
- ctrl_outl(data0&~0x00000001, addr0);
- ctrl_outl(data2&~0x00000001, addr1);
+ /* Lastly, invalidate them. */
+ ctrl_outl(data0&~CACHE_VALID, addr0);
+ ctrl_outl(data2&~CACHE_VALID, addr1);
back_to_P1();
if (data0 == data1 && data2 == data3) { /* Shadow */
@@ -150,8+159,6 @@ void __init cache_init(void) detect_cpu_and_cache_system();
ccr = ctrl_inl(CCR);
- if (ccr == CCR_CACHE_VAL)
- return;
jump_to_P2();
if (ccr & CCR_CACHE_ENABLE)
/*
@@ -380,29+387,114 @@ void flush_cache_page(struct vm_area_struct *vma, unsigned long addr) }
/*
+ * Write-back & invalidate the cache.
+ *
* After accessing the memory from kernel space (P1-area), we need to
- * write back the cache line to maintain DMA coherency.
+ * write back the cache line.
*
* We search the D-cache to see if we have the entries corresponding to
* the page, and if found, write them back.
*/
+void __flush_page_to_ram(void *kaddr)
+{
+ unsigned long phys, addr, data, i;
+
+ /* Physical address of this page */
+ phys = PHYSADDR(kaddr);
+
+ jump_to_P2();
+ /* Loop all the D-cache */
+ for (i=0; i<CACHE_OC_NUM_ENTRIES; i++) {
+ addr = CACHE_OC_ADDRESS_ARRAY| (i<<CACHE_OC_ENTRY_SHIFT);
+ data = ctrl_inl(addr);
+ if ((data & CACHE_VALID) && (data&PAGE_MASK) == phys) {
+ data &= ~(CACHE_UPDATED|CACHE_VALID);
+ ctrl_outl(data, addr);
+ }
+ }
+ back_to_P1();
+}
+
void flush_page_to_ram(struct page *pg)
{
+ unsigned long phys;
+
+ /* Physical address of this page */
+ phys = (pg - mem_map)*PAGE_SIZE + __MEMORY_START;
+ __flush_page_to_ram(phys_to_virt(phys));
+}
+
+/*
+ * Check entries of the I-cache & D-cache of the page.
+ * (To see "alias" issues)
+ */
+void check_cache_page(struct page *pg)
+{
unsigned long phys, addr, data, i;
+ unsigned long kaddr;
+ unsigned long cache_line_index;
+ int bingo = 0;
/* Physical address of this page */
phys = (pg - mem_map)*PAGE_SIZE + __MEMORY_START;
+ kaddr = phys + PAGE_OFFSET;
+ cache_line_index = (kaddr&CACHE_OC_ENTRY_MASK)>>CACHE_OC_ENTRY_SHIFT;
jump_to_P2();
/* Loop all the D-cache */
for (i=0; i<CACHE_OC_NUM_ENTRIES; i++) {
addr = CACHE_OC_ADDRESS_ARRAY| (i<<CACHE_OC_ENTRY_SHIFT);
data = ctrl_inl(addr);
- if ((data & CACHE_UPDATED) && (data&PAGE_MASK) == phys) {
- data &= ~CACHE_UPDATED;
+ if ((data & (CACHE_UPDATED|CACHE_VALID))
+ == (CACHE_UPDATED|CACHE_VALID)
+ && (data&PAGE_MASK) == phys) {
+ data &= ~(CACHE_VALID|CACHE_UPDATED);
ctrl_outl(data, addr);
+ if ((i^cache_line_index)&0x180)
+ bingo = 1;
+ }
+ }
+
+ cache_line_index &= 0xff;
+ /* Loop all the I-cache */
+ for (i=0; i<CACHE_IC_NUM_ENTRIES; i++) {
+ addr = CACHE_IC_ADDRESS_ARRAY| (i<<CACHE_IC_ENTRY_SHIFT);
+ data = ctrl_inl(addr);
+ if ((data & CACHE_VALID) && (data&PAGE_MASK) == phys) {
+ data &= ~CACHE_VALID;
+ ctrl_outl(data, addr);
+ if (((i^cache_line_index)&0x80))
+ bingo = 2;
}
}
back_to_P1();
+
+ if (bingo) {
+ extern void dump_stack(void);
+
+ if (bingo == 1)
+ printk("BINGO!\n");
+ else
+ printk("Bingo!\n");
+ dump_stack();
+ printk("--------------------\n");
+ }
+}
+
+/* Page is 4K, OC size is 16K, there are four lines. */
+#define CACHE_ALIAS 0x00003000
+
+void clear_user_page(void *to, unsigned long address)
+{
+ clear_page(to);
+ if (((address ^ (unsigned long)to) & CACHE_ALIAS))
+ __flush_page_to_ram(to);
+}
+
+void copy_user_page(void *to, void *from, unsigned long address)
+{
+ copy_page(to, from);
+ if (((address ^ (unsigned long)to) & CACHE_ALIAS))
+ __flush_page_to_ram(to);
}
#endif
#include <asm/mmu_context.h>
extern void die(const char *,struct pt_regs *,long);
-static void __flush_tlb_page(struct mm_struct *mm, unsigned long page);
+static void __flush_tlb_page(unsigned long asid, unsigned long page);
#if defined(__SH4__)
-static void __flush_tlb_phys(struct mm_struct *mm, unsigned long phys);
+static void __flush_tlb_phys(unsigned long phys);
#endif
/*
@@ -85,42+85,6 @@ bad_area: return 0;
}
-static void handle_vmalloc_fault(struct mm_struct *mm, unsigned long address)
-{
- pgd_t *dir;
- pmd_t *pmd;
- pte_t *pte;
- pte_t entry;
-
- dir = pgd_offset_k(address);
- pmd = pmd_offset(dir, address);
- if (pmd_none(*pmd)) {
- printk(KERN_ERR "vmalloced area %08lx bad\n", address);
- return;
- }
- if (pmd_bad(*pmd)) {
- pmd_ERROR(*pmd);
- pmd_clear(pmd);
- return;
- }
- pte = pte_offset(pmd, address);
- entry = *pte;
- if (pte_none(entry) || !pte_present(entry) || !pte_write(entry)) {
- printk(KERN_ERR "vmalloced area %08lx bad\n", address);
- return;
- }
-
-#if defined(__SH4__)
- /*
- * ITLB is not affected by "ldtlb" instruction.
- * So, we need to flush the entry by ourselves.
- */
- if (mm)
- __flush_tlb_page(mm, address&PAGE_MASK);
-#endif
- update_mmu_cache(NULL, address, entry);
-}
-
/*
* This routine handles page faults. It determines the address,
* and the problem, and then passes it off to one of the appropriate
@@ -138,11+102,6 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long writeaccess, tsk = current;
mm = tsk->mm;
- if (address >= VMALLOC_START && address < VMALLOC_END) {
- handle_vmalloc_fault(mm, address);
- return;
- }
-
/*
* If we're in an interrupt or have no user
* context, we must not take the fault..
@@ -272,6+231,67 @@ do_sigbus: goto no_context;
}
+static int __do_page_fault1(struct pt_regs *regs, unsigned long writeaccess,
+ unsigned long address)
+{
+ pgd_t *dir;
+ pmd_t *pmd;
+ pte_t *pte;
+ pte_t entry;
+
+ if (address >= VMALLOC_START && address < VMALLOC_END)
+ /* We can change the implementation of P3 area pte entries.
+ set_pgdir and such. */
+ dir = pgd_offset_k(address);
+ else
+ dir = pgd_offset(current->mm, address);
+
+ pmd = pmd_offset(dir, address);
+ if (pmd_none(*pmd))
+ return 1;
+ if (pmd_bad(*pmd)) {
+ pmd_ERROR(*pmd);
+ pmd_clear(pmd);
+ return 1;
+ }
+ pte = pte_offset(pmd, address);
+ entry = *pte;
+ if (pte_none(entry) || !pte_present(entry)
+ || (writeaccess && !pte_write(entry)))
+ return 1;
+
+ if (writeaccess)
+ entry = pte_mkdirty(entry);
+ entry = pte_mkyoung(entry);
+#if defined(__SH4__)
+ /*
+ * ITLB is not affected by "ldtlb" instruction.
+ * So, we need to flush the entry by ourselves.
+ */
+ __flush_tlb_page(get_asid(), address&PAGE_MASK);
+#endif
+ set_pte(pte, entry);
+ update_mmu_cache(NULL, address, entry);
+ return 0;
+}
+
+/*
+ * Called with interrupt disabled.
+ */
+asmlinkage void __do_page_fault(struct pt_regs *regs, unsigned long writeaccess,
+ unsigned long address)
+{
+ /*
+ * XXX: Could you please implement this (calling __do_page_fault1)
+ * in assembler language in entry.S?
+ */
+ if (__do_page_fault1(regs, writeaccess, address) == 0)
+ /* Done. */
+ return;
+ sti();
+ do_page_fault(regs, writeaccess, address);
+}
+
void update_mmu_cache(struct vm_area_struct * vma,
unsigned long address, pte_t pte)
{
@@ -282,28+302,30 @@ void update_mmu_cache(struct vm_area_struct * vma, save_and_cli(flags);
#if defined(__SH4__)
- if (vma && (vma->vm_flags & VM_SHARED)) {
+ if (pte_shared(pte)) {
struct page *pg;
pteval = pte_val(pte);
pteval &= PAGE_MASK; /* Physical page address */
- __flush_tlb_phys(vma->vm_mm, pteval);
+ __flush_tlb_phys(pteval);
pg = virt_to_page(__va(pteval));
flush_dcache_page(pg);
}
#endif
- /* Set PTEH register */
- if (vma) {
- pteaddr = (address & MMU_VPN_MASK) |
- (vma->vm_mm->context & MMU_CONTEXT_ASID_MASK);
- ctrl_outl(pteaddr, MMU_PTEH);
+ /* Ptrace may call this routine. */
+ if (vma && current->active_mm != vma->vm_mm) {
+ restore_flags(flags);
+ return;
}
+ /* Set PTEH register */
+ pteaddr = (address & MMU_VPN_MASK) | get_asid();
+ ctrl_outl(pteaddr, MMU_PTEH);
+
/* Set PTEL register */
pteval = pte_val(pte);
pteval &= _PAGE_FLAGS_HARDWARE_MASK; /* drop software flags */
- pteval |= _PAGE_FLAGS_HARDWARE_DEFAULT; /* add default flags */
ctrl_outl(pteval, MMU_PTEL);
/* Load the TLB */
@@ -311,24+333,16 @@ void update_mmu_cache(struct vm_area_struct * vma, restore_flags(flags);
}
-static void __flush_tlb_page(struct mm_struct *mm, unsigned long page)
+static void __flush_tlb_page(unsigned long asid, unsigned long page)
{
- unsigned long addr, data, asid;
- unsigned long saved_asid = MMU_NO_ASID;
-
- if (mm->context == NO_CONTEXT)
- return;
-
- asid = mm->context & MMU_CONTEXT_ASID_MASK;
- if (mm != current->mm) {
- saved_asid = get_asid();
- /*
- * We need to set ASID of the target entry to flush,
- * because TLB is indexed by (ASID and PAGE).
- */
- set_asid(asid);
- }
+ unsigned long addr, data;
+ /*
+ * NOTE: PTEH.ASID should be set to this MM
+ * _AND_ we need to write ASID to the array.
+ *
+ * It would be simple if we didn't need to set PTEH.ASID...
+ */
#if defined(__sh3__)
addr = MMU_TLB_ADDRESS_ARRAY |(page & 0x1F000)| MMU_PAGE_ASSOC_BIT;
data = (page & 0xfffe0000) | asid; /* VALID bit is off */
@@ -340,12+354,10 @@ static void __flush_tlb_page(struct mm_struct *mm, unsigned long page) ctrl_outl(data, addr);
back_to_P1();
#endif
- if (saved_asid != MMU_NO_ASID)
- set_asid(saved_asid);
}
#if defined(__SH4__)
-static void __flush_tlb_phys(struct mm_struct *mm, unsigned long phys)
+static void __flush_tlb_phys(unsigned long phys)
{
int i;
unsigned long addr, data;
@@ -373,12+385,22 @@ static void __flush_tlb_phys(struct mm_struct *mm, unsigned long phys)
void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
{
- unsigned long flags;
+ if (vma->vm_mm && vma->vm_mm->context != NO_CONTEXT) {
+ unsigned long flags;
+ unsigned long asid;
+ unsigned long saved_asid = MMU_NO_ASID;
- if (vma->vm_mm) {
+ asid = vma->vm_mm->context & MMU_CONTEXT_ASID_MASK;
page &= PAGE_MASK;
+
save_and_cli(flags);
- __flush_tlb_page(vma->vm_mm, page);
+ if (vma->vm_mm != current->mm) {
+ saved_asid = get_asid();
+ set_asid(asid);
+ }
+ __flush_tlb_page(asid, page);
+ if (saved_asid != MMU_NO_ASID)
+ set_asid(saved_asid);
restore_flags(flags);
}
}
@@ -397,13+419,22 @@ void flush_tlb_range(struct mm_struct *mm, unsigned long start, if (mm == current->mm)
activate_context(mm);
} else {
+ unsigned long asid = mm->context&MMU_CONTEXT_ASID_MASK;
+ unsigned long saved_asid = MMU_NO_ASID;
+
start &= PAGE_MASK;
end += (PAGE_SIZE - 1);
end &= PAGE_MASK;
+ if (mm != current->mm) {
+ saved_asid = get_asid();
+ set_asid(asid);
+ }
while (start < end) {
- __flush_tlb_page(mm, start);
+ __flush_tlb_page(asid, start);
start += PAGE_SIZE;
}
+ if (saved_asid != MMU_NO_ASID)
+ set_asid(saved_asid);
}
restore_flags(flags);
}
@@ -241,6+241,7 @@ void __init mem_init(void)
/* clear the zero-page */
memset(empty_zero_page, 0, PAGE_SIZE);
+ flush_page_to_ram(virt_to_page(empty_zero_page));
/* this will put all low memory onto the freelists */
totalram_pages += free_all_bootmem();
*/
#include <linux/config.h>
#ifdef CONFIG_CPU_LITTLE_ENDIAN
-OUTPUT_FORMAT("elf32-shl", "elf32-shl", "elf32-shl")
+OUTPUT_FORMAT("elf32-sh-linux", "elf32-sh-linux", "elf32-sh-linux")
#else
-OUTPUT_FORMAT("elf32-sh", "elf32-sh", "elf32-sh")
+OUTPUT_FORMAT("elf32-shbig-linux", "elf32-shbig-linux", "elf32-shbig-linux")
#endif
OUTPUT_ARCH(sh)
ENTRY(_start)
@@ -89,6+89,7 @@ SECTIONS /DISCARD/ : {
*(.text.exit)
*(.data.exit)
+ *(.exitcall.exit)
}
/* Stabs debugging sections. */
@@ -67,7+67,7 @@ static const card_ids __init etherh_cids[] = { MODULE_AUTHOR("Russell King");
MODULE_DESCRIPTION("i3 EtherH driver");
-static char *version __initdata =
+static char version[] __initdata =
"etherh [500/600/600A] ethernet driver (c) 2000 R.M.King v1.07\n";
#define ETHERH500_DATAPORT 0x200 /* MEMC */
@@ -116,7+116,7 @@ static unsigned int xd_bases[] __initdata = };
static struct hd_struct xd_struct[XD_MAXDRIVES << 6];
-static int xd_sizes[XD_MAXDRIVES << 6], xd_access[XD_MAXDRIVES] = { 0, 0 };
+static int xd_sizes[XD_MAXDRIVES << 6], xd_access[XD_MAXDRIVES];
static int xd_blocksizes[XD_MAXDRIVES << 6];
extern struct block_device_operations xd_fops;
@@ -141,12+141,12 @@ static struct block_device_operations xd_fops = { static DECLARE_WAIT_QUEUE_HEAD(xd_wait_int);
static DECLARE_WAIT_QUEUE_HEAD(xd_wait_open);
static u_char xd_valid[XD_MAXDRIVES] = { 0,0 };
-static u_char xd_drives = 0, xd_irq = 5, xd_dma = 3, xd_maxsectors;
-static u_char xd_override __initdata = 0, xd_type = 0;
+static u_char xd_drives, xd_irq = 5, xd_dma = 3, xd_maxsectors;
+static u_char xd_override __initdata, xd_type __initdata;
static u_short xd_iobase = 0x320;
-static int xd_geo[XD_MAXDRIVES*3] __initdata = { 0,0,0,0,0,0 };
+static int xd_geo[XD_MAXDRIVES*3] __initdata;
-static volatile int xdc_busy = 0;
+static volatile int xdc_busy;
static DECLARE_WAIT_QUEUE_HEAD(xdc_wait);
static struct timer_list xd_timer, xd_watchdog_int;
* The timer and the character device may be used simultaneously,
if desired.
- * FIXME: Currently only one open() of the character device is allowed.
- If another user tries to open() the device, they will get an
- -EBUSY error. Instead, this really should either support
- multiple simultaneous users of the character device (not hard),
- or simply block open() until the current user of the chrdev
- calls close().
-
* FIXME: support poll()
* FIXME: should we be crazy and support mmap()?
This will slow things down but guarantee that bad data is
never passed upstream.
+ * Since the RNG is accessed from a timer as well as normal
+ kernel code, but not from interrupts, we use spin_lock_bh
+ in regular code, and spin_lock in the timer function, to
+ serialize access to the RNG hardware area.
+
----------------------------------------------------------
Change history:
- 0.6.2:
+ Version 0.6.2:
* Clean up spinlocks. Since we don't have any interrupts
to worry about, but we do have a timer to worry about,
we use spin_lock_bh everywhere except the timer function
* New timer interval sysctl
* Clean up sysctl names
+ Version 0.9.0:
+ * Don't register a pci_driver, because we are really
+ using PCI bridge vendor/device ids, and someone
+ may want to register a driver for the bridge. (bug fix)
+ * Don't let the usage count go negative (bug fix)
+ * Clean up spinlocks (bug fix)
+ * Enable PCI device, if necessary (bug fix)
+ * iounmap on module unload (bug fix)
+ * If RNG chrdev is already in use when open(2) is called,
+ sleep until it is available.
+ * Remove redundant globals rng_allocated, rng_use_count
+ * Convert numeric globals to unsigned
+ * Module unload cleanup
+
*/
#include <linux/sysctl.h>
#include <linux/miscdevice.h>
#include <linux/smp_lock.h>
+#include <linux/mm.h>
#include <asm/io.h>
#include <asm/uaccess.h>
/*
* core module and version information
*/
-#define RNG_VERSION "0.6.2"
+#define RNG_VERSION "0.9.0"
#define RNG_MODULE_NAME "i810_rng"
#define RNG_DRIVER_NAME RNG_MODULE_NAME " hardware driver " RNG_VERSION
#define PFX RNG_MODULE_NAME ": "
@@ -253,22+266,24 @@ static void rng_run_fips_test (void); * various RNG status variables. they are globals
* as we only support a single RNG device
*/
-static int rng_allocated; /* is someone using the RNG region? */
static int rng_hw_enabled; /* is the RNG h/w enabled? */
static int rng_timer_enabled; /* is the RNG timer enabled? */
-static int rng_use_count; /* number of times RNG has been enabled */
static int rng_trusted; /* does FIPS trust out data? */
static int rng_enabled_sysctl; /* sysctl for enabling/disabling RNG */
-static int rng_entropy = 8; /* number of entropy bits we submit to /dev/random */
-static int rng_entropy_sysctl; /* sysctl for changing entropy bits */
-static int rng_interval_sysctl; /* sysctl for changing timer interval */
+static unsigned int rng_entropy = 8; /* number of entropy bits we submit to /dev/random */
+static unsigned int rng_entropy_sysctl; /* sysctl for changing entropy bits */
+static unsigned int rng_interval_sysctl; /* sysctl for changing timer interval */
static int rng_have_mem_region; /* did we grab RNG region via request_mem_region? */
-static int rng_fips_counter; /* size of internal FIPS test data pool */
-static int rng_timer_len = RNG_DEF_TIMER_LEN; /* timer interval, in jiffies */
+static unsigned int rng_fips_counter; /* size of internal FIPS test data pool */
+static unsigned int rng_timer_len = RNG_DEF_TIMER_LEN; /* timer interval, in jiffies */
static void *rng_mem; /* token to our ioremap'd RNG register area */
static spinlock_t rng_lock = SPIN_LOCK_UNLOCKED; /* hardware lock */
static struct timer_list rng_timer; /* kernel timer for RNG hardware reads and tests */
-static int rng_open; /* boolean, 0 (false) if chrdev is closed, 1 (true) if open */
+static struct pci_dev *rng_pdev; /* Firmware Hub PCI device found during PCI probe */
+static struct semaphore rng_open_sem; /* Semaphore for serializing rng_open/release */
+static wait_queue_head_t rng_open_wait; /* Wait queue for serializing open/release */
+static int rng_open_mode; /* Open mode (we only allow reads) */
+
/*
* inlined helper functions for accessing RNG registers
@@ -320,6+335,8 @@ static void rng_timer_tick (unsigned long data) /* gimme some thermal noise, baby */
rng_data = rng_data_read ();
+ spin_unlock (&rng_lock);
+
/*
* if RNG has been verified in the past, add
* data just read to the /dev/random pool,
@@ -333,6+350,8 @@ static void rng_timer_tick (unsigned long data) rng_fips_test_store (rng_data);
if (rng_fips_counter > RNG_FIPS_TEST_THRESHOLD)
rng_run_fips_test ();
+ } else {
+ spin_unlock (&rng_lock);
}
/* run the timer again, if enabled */
@@ -340,9+359,6 @@ static void rng_timer_tick (unsigned long data) rng_timer.expires = jiffies + rng_timer_len;
add_timer (&rng_timer);
}
-
- spin_unlock (&rng_lock);
-
}
@@ -351,8+367,8 @@ static void rng_timer_tick (unsigned long data) */
static int rng_enable (int enable)
{
- int rc = 0;
- u8 hw_status;
+ int rc = 0, action = 0;
+ u8 hw_status, new_status;
DPRINTK ("ENTER\n");
@@ -362,28+378,36 @@ static int rng_enable (int enable)
if (enable) {
rng_hw_enabled = 1;
- rng_use_count++;
MOD_INC_USE_COUNT;
} else {
- rng_use_count--;
- if (rng_use_count == 0)
+#ifndef __alpha__
+ if (GET_USE_COUNT (THIS_MODULE) > 0)
+ MOD_DEC_USE_COUNT;
+ if (GET_USE_COUNT (THIS_MODULE) == 0)
rng_hw_enabled = 0;
- MOD_DEC_USE_COUNT;
+#endif
}
if (rng_hw_enabled && ((hw_status & RNG_ENABLED) == 0)) {
rng_hwstatus_set (hw_status | RNG_ENABLED);
- printk (KERN_INFO PFX "RNG h/w enabled\n");
+ action = 1;
}
else if (!rng_hw_enabled && (hw_status & RNG_ENABLED)) {
rng_hwstatus_set (hw_status & ~RNG_ENABLED);
- printk (KERN_INFO PFX "RNG h/w disabled\n");
+ action = 2;
}
+ new_status = rng_hwstatus ();
+
spin_unlock_bh (&rng_lock);
- if ((!!enable) != (!!(rng_hwstatus () & RNG_ENABLED))) {
+ if (action == 1)
+ printk (KERN_INFO PFX "RNG h/w enabled\n");
+ else if (action == 2)
+ printk (KERN_INFO PFX "RNG h/w disabled\n");
+
+ if ((!!enable) != (!!(new_status & RNG_ENABLED))) {
printk (KERN_ERR PFX "Unable to %sable the RNG\n",
enable ? "en" : "dis");
rc = -EIO;
@@ -406,15+430,14 @@ static int rng_handle_sysctl_enable (ctl_table * table, int write, struct file * DPRINTK ("ENTER\n");
spin_lock_bh (&rng_lock);
-
rng_enabled_sysctl = enabled_save = rng_timer_enabled;
+ spin_unlock_bh (&rng_lock);
rc = proc_dointvec (table, write, filp, buffer, lenp);
- if (rc) {
- spin_unlock_bh (&rng_lock);
+ if (rc)
return rc;
- }
+ spin_lock_bh (&rng_lock);
if (enabled_save != rng_enabled_sysctl) {
rng_timer_enabled = rng_enabled_sysctl;
spin_unlock_bh (&rng_lock);
@@ -591,53+614,49 @@ static int rng_dev_open (struct inode *inode, struct file *filp) int rc = -EINVAL;
if ((filp->f_mode & FMODE_READ) == 0)
- goto err_out;
+ goto err_out_ret;
if (filp->f_mode & FMODE_WRITE)
- goto err_out;
+ goto err_out_ret;
- spin_lock_bh (&rng_lock);
-
- /* only allow one open of this device, exit with -EBUSY if already open */
- /* FIXME: we should sleep on a semaphore here, unless O_NONBLOCK */
- if (rng_open) {
- spin_unlock_bh (&rng_lock);
- rc = -EBUSY;
- goto err_out;
+ /* wait for device to become free */
+ down (&rng_open_sem);
+ while (rng_open_mode & filp->f_mode) {
+ if (filp->f_flags & O_NONBLOCK) {
+ up (&rng_open_sem);
+ return -EWOULDBLOCK;
+ }
+ up (&rng_open_sem);
+ interruptible_sleep_on (&rng_open_wait);
+ if (signal_pending (current))
+ return -ERESTARTSYS;
+ down (&rng_open_sem);
}
- rng_open = 1;
-
- spin_unlock_bh (&rng_lock);
-
- if (rng_enable(1) != 0) {
- spin_lock_bh (&rng_lock);
- rng_open = 0;
- spin_unlock_bh (&rng_lock);
+ if (rng_enable (1)) {
rc = -EIO;
goto err_out;
}
+ rng_open_mode |= filp->f_mode & (FMODE_READ | FMODE_WRITE);
+ up (&rng_open_sem);
return 0;
err_out:
+ up (&rng_open_sem);
+err_out_ret:
return rc;
}
static int rng_dev_release (struct inode *inode, struct file *filp)
{
+ down(&rng_open_sem);
- lock_kernel();
- if (rng_enable(0) != 0) {
- unlock_kernel();
- return -EIO;
- }
-
- spin_lock_bh (&rng_lock);
- rng_open = 0;
- spin_unlock_bh (&rng_lock);
- unlock_kernel();
+ rng_enable(0);
+ rng_open_mode &= (~filp->f_mode) & (FMODE_READ|FMODE_WRITE);
+ up (&rng_open_sem);
+ wake_up (&rng_open_wait);
return 0;
}
@@ -705,19+724,15 @@ read_loop: /*
* rng_init_one - look for and attempt to init a single RNG
*/
-static int __init rng_init_one (struct pci_dev *dev,
- const struct pci_device_id *id)
+static int __init rng_init_one (struct pci_dev *dev)
{
int rc;
u8 hw_status;
DPRINTK ("ENTER\n");
- if (rng_allocated) {
- printk (KERN_ERR PFX "this driver only supports one RNG\n");
- DPRINTK ("EXIT, returning -EBUSY\n");
- return -EBUSY;
- }
+ if (pci_enable_device (dev))
+ return -EIO;
/* XXX currently fails, investigate who has our mem region */
if (request_mem_region (RNG_ADDR, RNG_ADDR_LEN, RNG_MODULE_NAME))
@@ -728,7+743,7 @@ static int __init rng_init_one (struct pci_dev *dev, printk (KERN_ERR PFX "cannot ioremap RNG Memory\n");
DPRINTK ("EXIT, returning -EBUSY\n");
rc = -EBUSY;
- goto err_out;
+ goto err_out_free_res;
}
/* Check for Intel 82802 */
@@ -737,11+752,9 @@ static int __init rng_init_one (struct pci_dev *dev, printk (KERN_ERR PFX "RNG not detected\n");
DPRINTK ("EXIT, returning -ENODEV\n");
rc = -ENODEV;
- goto err_out;
+ goto err_out_free_map;
}
- rng_allocated = 1;
-
if (rng_entropy < 0 || rng_entropy > RNG_MAX_ENTROPY)
rng_entropy = RNG_MAX_ENTROPY;
@@ -749,10+762,11 @@ static int __init rng_init_one (struct pci_dev *dev, init_timer (&rng_timer);
rng_timer.function = rng_timer_tick;
+ /* turn RNG h/w off, if it's on */
rc = rng_enable (0);
if (rc) {
printk (KERN_ERR PFX "cannot disable RNG, aborting\n");
- goto err_out;
+ goto err_out_free_map;
}
/* add sysctls */
@@ -761,9+775,9 @@ static int __init rng_init_one (struct pci_dev *dev, DPRINTK ("EXIT, returning 0\n");
return 0;
-err_out:
- if (rng_mem)
- iounmap (rng_mem);
+err_out_free_map:
+ iounmap (rng_mem);
+err_out_free_res:
if (rng_have_mem_region)
release_mem_region (RNG_ADDR, RNG_ADDR_LEN);
return rc;
@@ -772,6+786,11 @@ err_out:
/*
* Data for PCI driver interface
+ *
+ * This data only exists for exporting the supported
+ * PCI ids via MODULE_DEVICE_TABLE. We do not actually
+ * register a pci_driver, because someone else might one day
+ * want to register another driver on the same PCI id.
*/
const static struct pci_device_id rng_pci_tbl[] __initdata = {
{ 0x8086, 0x2418, PCI_ANY_ID, PCI_ANY_ID, },
@@ -780,11+799,6 @@ const static struct pci_device_id rng_pci_tbl[] __initdata = { };
MODULE_DEVICE_TABLE (pci, rng_pci_tbl);
-static struct pci_driver rng_driver = {
- name: RNG_MODULE_NAME,
- id_table: rng_pci_tbl,
- probe: rng_init_one,
-};
MODULE_AUTHOR("Jeff Garzik, Matt Sottek");
MODULE_DESCRIPTION("Intel i8xx chipset Random Number Generator (RNG) driver");
@@ -813,23+827,36 @@ static struct miscdevice rng_miscdev = { static int __init rng_init (void)
{
int rc;
+ struct pci_dev *pdev;
DPRINTK ("ENTER\n");
- if (pci_register_driver (&rng_driver) < 1) {
- DPRINTK ("EXIT, returning -ENODEV\n");
+ init_MUTEX (&rng_open_sem);
+ init_waitqueue_head (&rng_open_wait);
+
+ pdev = pci_find_device (0x8086, 0x2418, NULL);
+ if (!pdev)
+ pdev = pci_find_device (0x8086, 0x2428, NULL);
+ if (!pdev)
return -ENODEV;
- }
+
+ rc = rng_init_one (pdev);
+ if (rc)
+ return rc;
rc = misc_register (&rng_miscdev);
if (rc) {
- pci_unregister_driver (&rng_driver);
+ iounmap (rng_mem);
+ if (rng_have_mem_region)
+ release_mem_region (RNG_ADDR, RNG_ADDR_LEN);
DPRINTK ("EXIT, returning %d\n", rc);
return rc;
}
printk (KERN_INFO RNG_DRIVER_NAME " loaded\n");
+ rng_pdev = pdev;
+
DPRINTK ("EXIT, returning 0\n");
return 0;
}
@@ -842,18+869,17 @@ static void __exit rng_cleanup (void) {
DPRINTK ("ENTER\n");
- del_timer_sync (&rng_timer);
+ assert (rng_timer_enabled == 0);
+ assert (rng_hw_enabled == 0);
+
+ misc_deregister (&rng_miscdev);
rng_sysctl (0);
- pci_unregister_driver (&rng_driver);
+ iounmap (rng_mem);
if (rng_have_mem_region)
release_mem_region (RNG_ADDR, RNG_ADDR_LEN);
- rng_hwstatus_set (rng_hwstatus() & ~RNG_ENABLED);
-
- misc_deregister (&rng_miscdev);
-
DPRINTK ("EXIT\n");
}
@@ -90,6+90,8 @@ int sci_debug = 0; MODULE_PARM(sci_debug, "i");
#endif
+#define dprintk(x...) do { if (sci_debug) printk(x); } while(0)
+
static void put_char(struct sci_port *port, char c)
{
unsigned long flags;
@@ -329,6+331,9 @@ static void sci_set_baud(struct sci_port *port, int baud) case 38400:
t = BPS_38400;
break;
+ case 57600:
+ t = BPS_57600;
+ break;
default:
printk(KERN_INFO "sci: unsupported baud rate: %d, using 115200 instead.\n", baud);
case 115200:
@@ -341,6+346,8 @@ static void sci_set_baud(struct sci_port *port, int baud) if(t >= 256) {
sci_out(port, SCSMR, (sci_in(port, SCSMR) & ~3) | 1);
t >>= 2;
+ } else {
+ sci_out(port, SCSMR, sci_in(port, SCSMR) & ~3);
}
sci_out(port, SCBRR, t);
udelay((1000000+(baud-1)) / baud); /* Wait one bit interval */
@@ -374,10+381,9 @@ static void sci_set_termios_cflag(struct sci_port *port, int cflag, int baud) if (cflag & CSTOPB)
smr_val |= 0x08;
sci_out(port, SCSMR, smr_val);
+ sci_set_baud(port, baud);
port->init_pins(port, cflag);
-
- sci_set_baud(port, baud);
sci_out(port, SCSCR, SCSCR_INIT(port));
}
@@ -528,13+534,28 @@ static inline void sci_receive_chars(struct sci_port *port) if (count == 0)
break;
- for (i=0; i<count; i++)
- tty->flip.char_buf_ptr[i] = sci_in(port, SCxRDR);
+ if (port->type == PORT_SCI) {
+ tty->flip.char_buf_ptr[0] = sci_in(port, SCxRDR);
+ tty->flip.flag_buf_ptr[0] = TTY_NORMAL;
+ } else {
+ for (i=0; i<count; i++) {
+ tty->flip.char_buf_ptr[i] = sci_in(port, SCxRDR);
+ status = sci_in(port, SCxSR);
+ if (status&SCxSR_FER(port)) {
+ tty->flip.flag_buf_ptr[i] = TTY_FRAME;
+ dprintk("sci: frame error\n");
+ } else if (status&SCxSR_PER(port)) {
+ tty->flip.flag_buf_ptr[i] = TTY_PARITY;
+ dprintk("sci: parity error\n");
+ } else {
+ tty->flip.flag_buf_ptr[i] = TTY_NORMAL;
+ }
+ }
+ }
+
sci_in(port, SCxSR); /* dummy read */
sci_out(port, SCxSR, SCxSR_RDxF_CLEAR(port));
- memset(tty->flip.flag_buf_ptr, TTY_NORMAL, count);
-
/* Update the kernel buffer end */
tty->flip.count += count;
tty->flip.char_buf_ptr += count;
@@ -549,6+570,82 @@ static inline void sci_receive_chars(struct sci_port *port) tty_flip_buffer_push(tty);
}
+static inline int sci_handle_errors(struct sci_port *port)
+{
+ int copied = 0;
+ unsigned short status = sci_in(port, SCxSR);
+ struct tty_struct *tty = port->gs.tty;
+
+ if (status&SCxSR_ORER(port) && tty->flip.count<TTY_FLIPBUF_SIZE) {
+ /* overrun error */
+ copied++;
+ *tty->flip.flag_buf_ptr++ = TTY_OVERRUN;
+ dprintk("sci: overrun error\n");
+ }
+
+ if (status&SCxSR_FER(port) && tty->flip.count<TTY_FLIPBUF_SIZE) {
+ if (sci_rxd_in(port) == 0) {
+ /* Notify of BREAK */
+ copied++;
+ *tty->flip.flag_buf_ptr++ = TTY_BREAK;
+ dprintk("sci: BREAK detected\n");
+ }
+ else {
+ /* frame error */
+ copied++;
+ *tty->flip.flag_buf_ptr++ = TTY_FRAME;
+ dprintk("sci: frame error\n");
+ }
+ }
+
+ if (status&SCxSR_PER(port) && tty->flip.count<TTY_FLIPBUF_SIZE) {
+ /* parity error */
+ copied++;
+ *tty->flip.flag_buf_ptr++ = TTY_PARITY;
+ dprintk("sci: parity error\n");
+ }
+
+ if (copied) {
+ tty->flip.count += copied;
+ tty_flip_buffer_push(tty);
+ }
+
+ return copied;
+}
+
+static inline int sci_handle_breaks(struct sci_port *port)
+{
+ int copied = 0;
+ unsigned short status = sci_in(port, SCxSR);
+ struct tty_struct *tty = port->gs.tty;
+
+ if (status&SCxSR_BRK(port) && tty->flip.count<TTY_FLIPBUF_SIZE) {
+ /* Notify of BREAK */
+ copied++;
+ *tty->flip.flag_buf_ptr++ = TTY_BREAK;
+ dprintk("sci: BREAK detected\n");
+ }
+
+#if defined(CONFIG_CPU_SUBTYPE_SH7750)
+ /* XXX: Handle SCIF overrun error */
+ if (port->type == PORT_SCIF && (ctrl_inw(SCLSR2) & SCIF_ORER) != 0) {
+ ctrl_outw(0, SCLSR2);
+ if(tty->flip.count<TTY_FLIPBUF_SIZE) {
+ copied++;
+ *tty->flip.flag_buf_ptr++ = TTY_OVERRUN;
+ dprintk("sci: overrun error\n");
+ }
+ }
+#endif
+
+ if (copied) {
+ tty->flip.count += copied;
+ tty_flip_buffer_push(tty);
+ }
+
+ return copied;
+}
+
static void sci_rx_interrupt(int irq, void *ptr, struct pt_regs *regs)
{
struct sci_port *port = ptr;
@@ -577,13+674,31 @@ static void sci_er_interrupt(int irq, void *ptr, struct pt_regs *regs) struct sci_port *port = ptr;
/* Handle errors */
- if (sci_in(port, SCxSR) & SCxSR_ERRORS(port))
- sci_out(port, SCxSR, SCxSR_ERROR_CLEAR(port));
+ if (port->type == PORT_SCI) {
+ if(sci_handle_errors(port)) {
+ /* discard character in rx buffer */
+ sci_in(port, SCxSR);
+ sci_out(port, SCxSR, SCxSR_RDxF_CLEAR(port));
+ }
+ }
+ else
+ sci_rx_interrupt(irq, ptr, regs);
+
+ sci_out(port, SCxSR, SCxSR_ERROR_CLEAR(port));
/* Kick the transmission */
sci_tx_interrupt(irq, ptr, regs);
}
+static void sci_br_interrupt(int irq, void *ptr, struct pt_regs *regs)
+{
+ struct sci_port *port = ptr;
+
+ /* Handle BREAKs */
+ sci_handle_breaks(port);
+ sci_out(port, SCxSR, SCxSR_BREAK_CLEAR(port));
+}
+
static void do_softint(void *private_)
{
struct sci_port *port = (struct sci_port *) private_;
@@ -983,8 +1098,9 @@ int __init sci_init(void)
{
struct sci_port *port;
int i, j;
- void (*handlers[3])(int irq, void *ptr, struct pt_regs *regs) = {
- sci_er_interrupt, sci_rx_interrupt, sci_tx_interrupt
+ void (*handlers[4])(int irq, void *ptr, struct pt_regs *regs) = {
+ sci_er_interrupt, sci_rx_interrupt, sci_tx_interrupt,
+ sci_br_interrupt,
};
printk("SuperH SCI(F) driver initialized\n");
@@ -993,7 +1109,8 @@ int __init sci_init(void)
port = &sci_ports[j];
printk("ttySC%d at 0x%08x is a %s\n", j, port->base,
(port->type == PORT_SCI) ? "SCI" : "SCIF");
- for (i=0; i<3; i++) {
+ for (i=0; i<4; i++) {
+ if (!port->irqs[i]) continue;
if (request_irq(port->irqs[i], handlers[i], SA_INTERRUPT,
"sci", port)) {
printk(KERN_ERR "sci: Cannot allocate irq.\n");
@@ -1001,7 +1118,6 @@ int __init sci_init(void)
}
}
}
- /* XXX: How about BRI interrupt?? */
sci_init_drivers();
#define SCIx_RXI_IRQ 1
#define SCIx_TXI_IRQ 2
-/* ERI, RXI, TXI, */
-#define SCI_IRQS { 23, 24, 25 }
-#define SH3_SCIF_IRQS { 56, 57, 59 }
-#define SH3_IRDA_IRQS { 52, 53, 55 }
-#define SH4_SCIF_IRQS { 40, 41, 43 }
+/* ERI, RXI, TXI, BRI */
+#define SCI_IRQS { 23, 24, 25, 0 }
+#define SH3_SCIF_IRQS { 56, 57, 59, 58 }
+#define SH3_IRDA_IRQS { 52, 53, 55, 54 }
+#define SH4_SCIF_IRQS { 40, 41, 43, 42 }
#if defined(CONFIG_CPU_SUBTYPE_SH7708)
# define SCI_NPORTS 1
# define SCSPTR1 0xffe0001c /* 8 bit SCI */
# define SCSPTR2 0xFFE80020 /* 16 bit SCIF */
# define SCLSR2 0xFFE80024 /* 16 bit SCIF */
+# define SCIF_ORER 0x0001 /* overrun error bit */
# define SCSCR_INIT(port) (((port)->type == PORT_SCI) ? \
0x30 /* TIE=0,RIE=0,TE=1,RE=1 */ : \
0x38 /* TIE=0,RIE=0,TE=1,RE=1,REIE=1 */ )
# define SCxSR_ERRORS(port) SCI_ERRORS
# define SCxSR_RDxF(port) SCI_RDRF
# define SCxSR_TDxE(port) SCI_TDRE
+# define SCxSR_ORER(port) SCI_ORER
+# define SCxSR_FER(port) SCI_FER
+# define SCxSR_PER(port) SCI_PER
+# define SCxSR_BRK(port) 0x00
# define SCxSR_RDxF_CLEAR(port) 0xbc
# define SCxSR_ERROR_CLEAR(port) 0xc4
# define SCxSR_TDxE_CLEAR(port) 0x78
+# define SCxSR_BREAK_CLEAR(port) 0xc4
#elif defined(SCIF_ONLY)
# define SCxSR_TEND(port) SCIF_TEND
# define SCxSR_ERRORS(port) SCIF_ERRORS
# define SCxSR_RDxF(port) SCIF_RDF
# define SCxSR_TDxE(port) SCIF_TDFE
+# define SCxSR_ORER(port) 0x0000
+# define SCxSR_FER(port) SCIF_FER
+# define SCxSR_PER(port) SCIF_PER
+# define SCxSR_BRK(port) SCIF_BRK
# define SCxSR_RDxF_CLEAR(port) 0x00fc
-# define SCxSR_ERROR_CLEAR(port) 0x0063
+# define SCxSR_ERROR_CLEAR(port) 0x0073
# define SCxSR_TDxE_CLEAR(port) 0x00df
+# define SCxSR_BREAK_CLEAR(port) 0x00e3
#else
# define SCxSR_TEND(port) (((port)->type == PORT_SCI) ? SCI_TEND : SCIF_TEND)
# define SCxSR_ERRORS(port) (((port)->type == PORT_SCI) ? SCI_ERRORS : SCIF_ERRORS)
# define SCxSR_RDxF(port) (((port)->type == PORT_SCI) ? SCI_RDRF : SCIF_RDF)
# define SCxSR_TDxE(port) (((port)->type == PORT_SCI) ? SCI_TDRE : SCIF_TDFE)
+# define SCxSR_ORER(port) (((port)->type == PORT_SCI) ? SCI_ORER : 0x0000)
+# define SCxSR_FER(port) (((port)->type == PORT_SCI) ? SCI_FER : SCIF_FER)
+# define SCxSR_PER(port) (((port)->type == PORT_SCI) ? SCI_PER : SCIF_PER)
+# define SCxSR_BRK(port) (((port)->type == PORT_SCI) ? 0x00 : SCIF_BRK)
# define SCxSR_RDxF_CLEAR(port) (((port)->type == PORT_SCI) ? 0xbc : 0x00fc)
-# define SCxSR_ERROR_CLEAR(port) (((port)->type == PORT_SCI) ? 0xc4 : 0x0063)
+# define SCxSR_ERROR_CLEAR(port) (((port)->type == PORT_SCI) ? 0xc4 : 0x0073)
# define SCxSR_TDxE_CLEAR(port) (((port)->type == PORT_SCI) ? 0x78 : 0x00df)
+# define SCxSR_BREAK_CLEAR(port) (((port)->type == PORT_SCI) ? 0xc4 : 0x00e3)
#endif
/* SCFCR */
@@ -169,7 +185,7 @@ struct sci_port {
struct gs_port gs;
int type;
unsigned int base;
- unsigned char irqs[3]; /* ERI, RXI, TXI */
+ unsigned char irqs[4]; /* ERI, RXI, TXI, BRI */
void (*init_pins)(struct sci_port* port, unsigned int cflag);
unsigned int old_cflag;
struct async_icount icount;
@@ -248,6 +264,34 @@ SCIF_FNS(SCFDR, 0x0e, 16, 0x1C, 16)
#define sci_in(port, reg) sci_##reg##_in(port)
#define sci_out(port, reg, value) sci_##reg##_out(port, value)
+#if defined(CONFIG_CPU_SUBTYPE_SH7708)
+static inline int sci_rxd_in(struct sci_port *port)
+{
+ if (port->base == 0xfffffe80)
+ return ctrl_inb(SCSPTR)&0x01 ? 1 : 0; /* SCI */
+ return 1;
+}
+#elif defined(CONFIG_CPU_SUBTYPE_SH7707) || defined(CONFIG_CPU_SUBTYPE_SH7709)
+static inline int sci_rxd_in(struct sci_port *port)
+{
+ if (port->base == 0xfffffe80)
+ return ctrl_inb(SCPDR)&0x01 ? 1 : 0; /* SCI */
+ if (port->base == 0xa4000150)
+ return ctrl_inb(SCPDR)&0x10 ? 1 : 0; /* SCIF */
+ if (port->base == 0xa4000140)
+ return ctrl_inb(SCPDR)&0x04 ? 1 : 0; /* IRDA */
+ return 1;
+}
+#elif defined(CONFIG_CPU_SUBTYPE_SH7750)
+static inline int sci_rxd_in(struct sci_port *port)
+{
+ if (port->base == 0xffe00000)
+ return ctrl_inb(SCSPTR1)&0x01 ? 1 : 0; /* SCI */
+ if (port->base == 0xffe80000)
+ return ctrl_inw(SCSPTR2)&0x0001 ? 1 : 0; /* SCIF */
+ return 1;
+}
+#endif
/*
* Values for the BitRate Register (SCBRR)
@@ -289,5 +333,6 @@ SCIF_FNS(SCFDR, 0x0e, 16, 0x1C, 16)
#define BPS_9600 SCBRR_VALUE(9600)
#define BPS_19200 SCBRR_VALUE(19200)
#define BPS_38400 SCBRR_VALUE(38400)
+#define BPS_57600 SCBRR_VALUE(57600)
#define BPS_115200 SCBRR_VALUE(115200)
@@ -493,6 +493,7 @@ static void __init ide_setup_pci_device (struct pci_dev *dev, ide_pci_device_t *d)
byte tmp = 0;
ide_hwif_t *hwif, *mate = NULL;
unsigned int class_rev;
+ int pci_class_ide;
#ifdef CONFIG_IDEDMA_AUTO
autodma = 1;
@@ -538,7 +539,8 @@ check_if_enabled:
* Can we trust the reported IRQ?
*/
pciirq = dev->irq;
- if ((dev->class & ~(0xff)) != (PCI_CLASS_STORAGE_IDE << 8)) {
+ pci_class_ide = ((dev->class >> 8) == PCI_CLASS_STORAGE_IDE);
+ if (!pci_class_ide) {
printk("%s: not 100%% native mode: will probe irqs later\n", d->name);
/*
* This allows offboard ide-pci cards the enable a BIOS,
@@ -548,11 +550,17 @@ check_if_enabled:
*/
pciirq = (d->init_chipset) ? d->init_chipset(dev, d->name) : ide_special_settings(dev, d->name);
} else if (tried_config) {
- printk("%s: will probe irqs later\n", d->name);
+ printk(KERN_INFO "%s: will probe irqs later\n", d->name);
pciirq = 0;
} else if (!pciirq) {
- printk("%s: bad irq (%d): will probe later\n", d->name, pciirq);
- pciirq = 0;
+ if (pci_class_ide) {
+ /* this is the normal path for most IDE devices */
+ if (d->init_chipset)
+ pciirq = d->init_chipset(dev, d->name);
+ else
+ printk(KERN_INFO "%s standard IDE storage device detected\n", d->name);
+ } else
+ printk(KERN_WARNING "%s: bad irq (0): will probe later\n", d->name);
} else {
if (d->init_chipset)
(void) d->init_chipset(dev, d->name);
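
For reference, the class test above works because dev->class packs one byte each of base class, sub-class and programming interface; shifting right by 8 drops the prog-if byte, so the comparison matches IDE controllers regardless of native/compatibility mode. A quick sketch (illustrative only, not part of the patch):

    /* dev->class layout: 0x00BBSSPP -- base, sub-class, prog-if */
    unsigned char prog_if   =  dev->class        & 0xff;
    unsigned char sub_class = (dev->class >>  8) & 0xff;
    unsigned char base      = (dev->class >> 16) & 0xff;
    /* PCI_CLASS_STORAGE_IDE is 0x0101 (base 0x01, sub 0x01), so
       (dev->class >> 8) == PCI_CLASS_STORAGE_IDE holds for any prog-if */
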
/*
- * $Id: via82cxxx.c,v 2.1b 2000/09/20 23:19:60 vojtech Exp $
+ * $Id: via82cxxx.c,v 2.1d 2000/10/01 10:01:00 vojtech Exp $
*
* Copyright (c) 2000 Vojtech Pavlik
*
@@ -192,7 +192,7 @@ static int via_get_info(char *buffer, char **addr, off_t offset, int count)
via_print("----------VIA BusMastering IDE Configuration----------------");
- via_print("Driver Version: 2.1b");
+ via_print("Driver Version: 2.1d");
pci_read_config_byte(isa_dev, PCI_REVISION_ID, &t);
via_print("South Bridge: VIA %s rev %#x", via_isa_bridges[via_config].name, t);
@@ -334,7 +334,7 @@ static int via_set_speed(ide_drive_t *drive, byte speed)
*/
switch(via_isa_bridges[via_config].speed) {
- case XFER_UDMA_2: t = via_timing[i].udma ? (0x60 | (FIT(via_timing[i].udma, 2, 5) - 2)) : 0x03; break;
+ case XFER_UDMA_2: t = via_timing[i].udma ? (0xe0 | (FIT(via_timing[i].udma, 2, 5) - 2)) : 0x03; break;
case XFER_UDMA_4: t = via_timing[i].udma ? (0xe8 | (FIT(via_timing[i].udma, 2, 9) - 2)) : 0x0f; break;
}
@@ -588,7 +588,7 @@ unsigned int __init pci_init_via82cxxx(struct pci_dev *dev, const char *name)
unsigned int __init ata66_via82cxxx(ide_hwif_t *hwif)
{
- return ((via_enabled && via_ata66) >> hwif->channel) & 1;
+ return ((via_enabled & via_ata66) >> hwif->channel) & 1;
}
void __init ide_init_via82cxxx(ide_hwif_t *hwif)
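
The ata66 change just above is the classic logical-vs-bitwise mix-up: via_enabled and via_ata66 are per-channel bitmasks, and && collapses them to 0 or 1 before the shift, so the secondary channel could never report an 80-wire cable. A minimal illustration with hypothetical values:

    int via_enabled = 0x3;  /* both channels enabled */
    int via_ata66   = 0x2;  /* only channel 1 cabled for UDMA66 */

    ((via_enabled && via_ata66) >> 1) & 1;  /* (1 >> 1) & 1 == 0: wrong */
    ((via_enabled &  via_ata66) >> 1) & 1;  /* (2 >> 1) & 1 == 1: right */
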
mainmenu_option next_comment
comment 'Multi-device support (RAID and LVM)'
-tristate 'Multiple devices driver support (RAID and LVM)' CONFIG_BLK_DEV_MD
+bool 'Multiple devices driver support (RAID and LVM)' CONFIG_MD
+
+dep_tristate ' RAID support' CONFIG_BLK_DEV_MD $CONFIG_MD
dep_tristate ' Linear (append) mode' CONFIG_MD_LINEAR $CONFIG_BLK_DEV_MD
dep_tristate ' RAID-0 (striping) mode' CONFIG_MD_RAID0 $CONFIG_BLK_DEV_MD
dep_tristate ' RAID-1 (mirroring) mode' CONFIG_MD_RAID1 $CONFIG_BLK_DEV_MD
@@ -14,7 +16,7 @@ if [ "$CONFIG_MD_LINEAR" = "y" -o "$CONFIG_MD_RAID0" = "y" -o "$CONFIG_MD_RAID1"
bool ' Auto Detect support' CONFIG_AUTODETECT_RAID
fi
-dep_tristate 'Logical volume manager (LVM) support' CONFIG_BLK_DEV_LVM $CONFIG_BLK_DEV_MD
+dep_tristate ' Logical volume manager (LVM) support' CONFIG_BLK_DEV_LVM $CONFIG_MD
dep_mbool ' LVM information in proc filesystem' CONFIG_LVM_PROC_FS $CONFIG_BLK_DEV_LVM
endmenu
@@ -130,15 +130,15 @@ static const char *invalid_pcb_msg =
#define INVALID_PCB_MSG(len) \
printk(invalid_pcb_msg, (len),filename,__FUNCTION__,__LINE__)
-static char *search_msg __initdata = "%s: Looking for 3c505 adapter at address %#x...";
+static char search_msg[] __initdata = "%s: Looking for 3c505 adapter at address %#x...";
-static char *stilllooking_msg __initdata = "still looking...";
+static char stilllooking_msg[] __initdata = "still looking...";
-static char *found_msg __initdata = "found.\n";
+static char found_msg[] __initdata = "found.\n";
-static char *notfound_msg __initdata = "not found (reason = %d)\n";
+static char notfound_msg[] __initdata = "not found (reason = %d)\n";
-static char *couldnot_msg __initdata = "%s: 3c505 not found\n";
+static char couldnot_msg[] __initdata = "%s: 3c505 not found\n";
/*********************************************************
*
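
The added brackets are the whole point of these hunks (and of the matching acenic and rrunner ones below): with a plain pointer only the pointer variable lands in the init section while the string bytes stay resident, and in the old 'const char __initdata *version' form the attribute even ends up qualifying the pointed-to type rather than the variable. An array declaration puts the string itself into init data, so it is discarded after boot. A sketch of the difference (illustrative):

    static char *p_msg  __initdata = "probing...";  /* pointer discarded, string kept */
    static char a_msg[] __initdata = "probing...";  /* string bytes discarded after init */
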
@@ -393,22 +393,22 @@ static inline void netif_start_queue(struct net_device *dev)
#define DEF_TRACE 0
#define DEF_STAT (2 * TICKS_PER_SEC)
-static int link[ACE_MAX_MOD_PARMS] = {0, };
-static int trace[ACE_MAX_MOD_PARMS] = {0, };
-static int tx_coal_tick[ACE_MAX_MOD_PARMS] = {0, };
-static int rx_coal_tick[ACE_MAX_MOD_PARMS] = {0, };
-static int max_tx_desc[ACE_MAX_MOD_PARMS] = {0, };
-static int max_rx_desc[ACE_MAX_MOD_PARMS] = {0, };
-static int tx_ratio[ACE_MAX_MOD_PARMS] = {0, };
+static int link[ACE_MAX_MOD_PARMS];
+static int trace[ACE_MAX_MOD_PARMS];
+static int tx_coal_tick[ACE_MAX_MOD_PARMS];
+static int rx_coal_tick[ACE_MAX_MOD_PARMS];
+static int max_tx_desc[ACE_MAX_MOD_PARMS];
+static int max_rx_desc[ACE_MAX_MOD_PARMS];
+static int tx_ratio[ACE_MAX_MOD_PARMS];
static int dis_pci_mem_inval[ACE_MAX_MOD_PARMS] = {1, 1, 1, 1, 1, 1, 1, 1};
-static const char __initdata *version =
+static char version[] __initdata =
"acenic.c: v0.47 09/18/2000 Jes Sorensen, linux-acenic@SunSITE.auc.dk\n"
" http://home.cern.ch/~jes/gige/acenic.html\n";
-static struct net_device *root_dev = NULL;
+static struct net_device *root_dev;
-static int probed __initdata = 0;
+static int probed __initdata;
#ifdef NEW_NETINIT
@@ -1742,9 +1742,6 @@ static int hamachi_close(struct net_device *dev)
hmp->rx_ring[i].status_n_length = 0;
hmp->rx_ring[i].addr = 0xBADF00D0; /* An invalid address. */
if (hmp->rx_skbuff[i]) {
-#if LINUX_VERSION_CODE < 0x20100
- hmp->rx_skbuff[i]->free = 1;
-#endif
dev_kfree_skb(hmp->rx_skbuff[i]);
}
hmp->rx_skbuff[i] = 0;
@@ -1777,10 +1774,8 @@ static struct net_device_stats *hamachi_get_stats(struct net_device *dev)
*/
/* hmp->stats.tx_packets = readl(ioaddr + 0x000); */
-#if LINUX_VERSION_CODE >= 0x20119
hmp->stats.rx_bytes = readl(ioaddr + 0x330); /* Total Uni+Brd+Multi */
hmp->stats.tx_bytes = readl(ioaddr + 0x3B0); /* Total Uni+Brd+Multi */
-#endif
hmp->stats.multicast = readl(ioaddr + 0x320); /* Multicast Rx */
hmp->stats.rx_length_errors = readl(ioaddr + 0x368); /* Over+Undersized */
@@ -102,7 +102,7 @@ static inline void netif_start_queue(struct net_device *dev)
* stack will need to know about I/O vectors or something similar.
*/
-static const char __initdata *version = "rrunner.c: v0.22 03/01/2000 Jes Sorensen (Jes.Sorensen@cern.ch)\n";
+static char version[] __initdata = "rrunner.c: v0.22 03/01/2000 Jes Sorensen (Jes.Sorensen@cern.ch)\n";
static struct net_device *root_dev = NULL;
#define VERSION "0.86"
-#include <asm/uaccess.h>
-#include <asm/io.h>
-#include <asm/delay.h>
#include <linux/module.h>
#include <linux/version.h>
#include <linux/types.h>
#include <linux/proc_fs.h>
#include <linux/ioport.h>
#include <linux/init.h>
+#include <linux/delay.h>
+#include <asm/uaccess.h>
+#include <asm/io.h>
#include "comx.h"
#include "comxhw.h"
@@ -303,8 +303,7 @@ static void chardev_channel_init(struct channel_data *chan);
static char *chrdev_setup_rx(struct channel_data *channel, int size);
static int chrdev_rx_done(struct channel_data *channel);
static int chrdev_tx_done(struct channel_data *channel, int size);
-static long long cosa_lseek(struct file *file,
- long long offset, int origin);
+static loff_t cosa_lseek(struct file *file, loff_t offset, int origin);
static ssize_t cosa_read(struct file *file,
char *buf, size_t count, loff_t *ppos);
static ssize_t cosa_write(struct file *file,
@@ -783,8 +782,7 @@ static void chardev_channel_init(struct channel_data *chan)
init_MUTEX(&chan->wsem);
}
-static long long cosa_lseek(struct file * file,
- long long offset, int origin)
+static loff_t cosa_lseek(struct file * file, loff_t offset, int origin)
{
return -ESPIPE;
}
@@ -1212,7 +1210,7 @@ static int cosa_sppp_ioctl(struct net_device *dev, struct ifreq *ifr,
{
int rv;
struct channel_data *chan = (struct channel_data *)dev->priv;
- rv = cosa_ioctl_common(chan->cosa, chan, cmd, (int)ifr->ifr_data);
+ rv = cosa_ioctl_common(chan->cosa, chan, cmd, (unsigned long)ifr->ifr_data);
if (rv == -ENOIOCTLCMD) {
return sppp_do_ioctl(dev, ifr, cmd);
}
@@ -131,4 +131,5 @@ pci_class_name(u32 class)
return NULL;
}
-#endif
+#endif /* CONFIG_PCI_NAMES */
+
@@ -1231,6 +1231,7 @@ static int hid_submit_out(struct hid_device *hid)
hid->urbout.transfer_buffer_length = hid->out[hid->outtail].dr.length;
hid->urbout.transfer_buffer = hid->out[hid->outtail].buffer;
hid->urbout.setup_packet = (void *) &(hid->out[hid->outtail].dr);
+ hid->urbout.dev = hid->dev;
if (usb_submit_urb(&hid->urbout)) {
err("usb_submit_urb(out) failed");
@@ -1288,7 +1289,9 @@ static int hid_open(struct input_dev *dev)
if (hid->open++)
return 0;
- if (usb_submit_urb(&hid->urb))
+ hid->urb.dev = hid->dev;
+
+ if (usb_submit_urb(&hid->urb))
return -EIO;
return 0;
@@ -617,6 +617,7 @@ static void read_bulk_callback( struct urb *urb )
pegasus->stats.rx_bytes += pkt_len;
goon:
+ pegasus->rx_urb.dev = pegasus->usb;
if ( (res = usb_submit_urb(&pegasus->rx_urb)) )
warn( __FUNCTION__ " failed submint rx_urb %d", res);
}
@@ -698,6 +699,7 @@ static int pegasus_start_xmit( struct sk_buff *skb, struct net_device *net )
pegasus->tx_urb.transfer_buffer_length = count;
pegasus->tx_urb.transfer_flags |= USB_ASYNC_UNLINK;
+ pegasus->tx_urb.dev = pegasus->usb;
if ((res = usb_submit_urb(&pegasus->tx_urb))) {
warn("failed tx_urb %d", res);
pegasus->stats.tx_errors++;
@@ -737,9 +739,11 @@ static int pegasus_open(struct net_device *net)
err("can't start_net() - %d", res);
return -EIO;
}
+ pegasus->rx_urb.dev = pegasus->usb;
if ( (res = usb_submit_urb(&pegasus->rx_urb)) )
warn( __FUNCTION__ " failed rx_urb %d", res );
#ifdef PEGASUS_USE_INTR
+ pegasus->intr_urb.dev = pegasus->usb;
if ( (res = usb_submit_urb(&pegasus->intr_urb)) )
warn( __FUNCTION__ " failed intr_urb %d", res);
#endif
@@ -894,6 +898,7 @@ static void * pegasus_probe( struct usb_device *dev, unsigned int ifnum )
init_MUTEX( &pegasus-> ctrl_sem );
init_waitqueue_head( &pegasus->ctrl_wait );
+ usb_inc_dev_use (dev);
pegasus->usb = dev;
pegasus->net = net;
@@ -951,11 +956,7 @@ static void pegasus_disconnect( struct usb_device *dev, void *ptr )
netif_stop_queue( pegasus->net );
unregister_netdev( pegasus->net );
- usb_unlink_urb( &pegasus->rx_urb );
- usb_unlink_urb( &pegasus->tx_urb );
- usb_unlink_urb( &pegasus->ctrl_urb );
- usb_unlink_urb( &pegasus->intr_urb );
-
+ usb_dec_dev_use (pegasus->usb);
kfree( pegasus );
pegasus = NULL;
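
All the pegasus hunks above enforce a single rule: urb->dev must point at the target device each time the URB is handed to usb_submit_urb(), including resubmissions from completion callbacks, so every submit site now refreshes it first. Schematically (a sketch, not the driver code itself):

    urb->dev = usb_dev;                /* refresh before every submit */
    if ((res = usb_submit_urb(urb)))
        warn("submit failed, error %d", res);
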
* (http://www.freecom.de/)
*/
-#include <linux/config.h>
#include "transport.h"
#include "protocol.h"
#include "usb.h"
@@ -221,7 +221,6 @@ static int device_reset( Scsi_Cmnd *srb )
static int bus_reset( Scsi_Cmnd *srb )
{
struct us_data *us = (struct us_data *)srb->host->hostdata[0];
- int result;
int i;
/* we use the usb_reset_device() function to handle this for us */
@@ -705,7 +705,7 @@ static void usb_find_drivers(struct usb_device *dev)
dbg("unhandled interfaces on device");
if (!claimed) {
- warn("USB device %d (prod/vend 0x%x/0x%x) is not claimed by any active driver.",
+ warn("USB device %d (vend/prod 0x%x/0x%x) is not claimed by any active driver.",
dev->devnum,
dev->descriptor.idVendor,
dev->descriptor.idProduct);
@@ -1205,7 +1205,7 @@ int usb_parse_configuration(struct usb_device *dev, struct usb_config_descriptor *config)
config->interface = (struct usb_interface *)
kmalloc(config->bNumInterfaces *
sizeof(struct usb_interface), GFP_KERNEL);
- dbg("kmalloc IF %p, numif %i",config->interface,config->bNumInterfaces);
+ dbg("kmalloc IF %p, numif %i", config->interface, config->bNumInterfaces);
if (!config->interface) {
err("out of memory");
return -1;
@@ -1467,7 +1467,7 @@ void usb_connect(struct usb_device *dev)
int devnum;
// FIXME needs locking for SMP!!
/* why? this is called only from the hub thread,
- * which hopefully doesn't run on multiple CPU's simulatenously 8-)
+ * which hopefully doesn't run on multiple CPU's simultaneously 8-)
*/
dev->descriptor.bMaxPacketSize0 = 8; /* Start off at 8 bytes */
devnum = find_next_zero_bit(dev->bus->devmap.devicemap, 128, 1);
@@ -1876,7 +1876,8 @@ int usb_new_device(struct usb_device *dev)
err = usb_set_address(dev);
if (err < 0) {
- err("USB device not accepting new address (error=%d)", err);
+ err("USB device not accepting new address=%d (error=%d)",
+ dev->devnum, err);
clear_bit(dev->devnum, &dev->bus->devmap.devicemap);
dev->devnum = -1;
return 1;
@@ -1889,7 +1890,7 @@ int usb_new_device(struct usb_device *dev)
if (err < 0)
err("USB device not responding, giving up (error=%d)", err);
else
- err("USB device descriptor short read (expected %i, got %i)",8,err);
+ err("USB device descriptor short read (expected %i, got %i)", 8, err);
clear_bit(dev->devnum, &dev->bus->devmap.devicemap);
dev->devnum = -1;
return 1;
@@ -1902,7 +1903,8 @@ int usb_new_device(struct usb_device *dev)
if (err < 0)
err("unable to get device descriptor (error=%d)", err);
else
- err("USB device descriptor short read (expected %i, got %i)", sizeof(dev->descriptor), err);
+ err("USB device descriptor short read (expected %i, got %i)",
+ sizeof(dev->descriptor), err);
clear_bit(dev->devnum, &dev->bus->devmap.devicemap);
dev->devnum = -1;
@@ -1911,7 +1913,8 @@ int usb_new_device(struct usb_device *dev)
err = usb_get_configuration(dev);
if (err < 0) {
- err("unable to get configuration (error=%d)", err);
+ err("unable to get device %d configuration (error=%d)",
+ dev->devnum, err);
usb_destroy_configuration(dev);
clear_bit(dev->devnum, &dev->bus->devmap.devicemap);
dev->devnum = -1;
@@ -1921,7 +1924,8 @@ int usb_new_device(struct usb_device *dev)
/* we set the default configuration here */
err = usb_set_configuration(dev, dev->config[0].bConfigurationValue);
if (err) {
- err("failed to set default configuration (error=%d)", err);
+ err("failed to set device %d default configuration (error=%d)",
+ dev->devnum, err);
clear_bit(dev->devnum, &dev->bus->devmap.devicemap);
dev->devnum = -1;
return 1;
@@ -17,7 +17,7 @@ obj-y := open.o read_write.o devices.o file_table.o buffer.o \
filesystems.o
ifeq ($(CONFIG_QUOTA),y)
-obj=ADy += dquot.o
+obj-y += dquot.o
else
obj-y += noquot.o
endif
@@ -706,7+706,9 @@ void set_blocksize(kdev_t dev, int size) static void refill_freelist(int size)
{
if (!grow_buffers(size)) {
- try_to_free_pages(GFP_BUFFER);
+ wakeup_bdflush(1);
+ current->policy |= SCHED_YIELD;
+ schedule();
}
}
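
Instead of recursing into page reclaim from inside the buffer allocator, refill_freelist() now pokes bdflush and steps out of the scheduler's way. The yield idiom used here (and again in __alloc_pages() further down) is simply:

    current->policy |= SCHED_YIELD;  /* sort behind every other runnable task */
    schedule();                      /* goodness() now returns -1 for us */
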
@@ -859,6 +861,7 @@ repeat:
int balance_dirty_state(kdev_t dev)
{
unsigned long dirty, tot, hard_dirty_limit, soft_dirty_limit;
+ int shortage;
dirty = size_buffers_type[BUF_DIRTY] >> PAGE_SHIFT;
tot = nr_free_buffer_pages();
@@ -869,21 +872,21 @@ int balance_dirty_state(kdev_t dev)
/* First, check for the "real" dirty limit. */
if (dirty > soft_dirty_limit) {
- if (dirty > hard_dirty_limit || inactive_shortage())
+ if (dirty > hard_dirty_limit)
return 1;
return 0;
}
/*
- * Then, make sure the number of inactive pages won't overwhelm
- * page replacement ... this should avoid stalls.
+ * If we are about to get low on free pages and
+ * cleaning the inactive_dirty pages would help
+ * fix this, wake up bdflush.
*/
- if (nr_inactive_dirty_pages >
- nr_free_pages() + nr_inactive_clean_pages()) {
- if (free_shortage() > freepages.min)
- return 1;
+ shortage = free_shortage();
+ if (shortage && nr_inactive_dirty_pages > shortage &&
+ nr_inactive_dirty_pages > freepages.high)
return 0;
- }
+
return -1;
}
@@ -1807,9 +1810,9 @@ int block_truncate_page(struct address_space *mapping, loff_t from, get_block_t *get_block)
if (Page_Uptodate(page))
set_bit(BH_Uptodate, &bh->b_state);
+ bh->b_end_io = end_buffer_io_sync;
if (!buffer_uptodate(bh)) {
err = -EIO;
- bh->b_end_io = end_buffer_io_sync;
ll_rw_block(READ, 1, &bh);
wait_on_buffer(bh);
/* Uhhuh. Read error. Complain and punt. */
@@ -2234,6 +2237,7 @@ static int grow_buffers(int size)
return 1;
no_buffer_head:
+ UnlockPage(page);
page_cache_release(page);
out:
return 0;
@@ -2663,9 +2667,8 @@ int bdflush(void *sem)
CHECK_EMERGENCY_SYNC
flushed = flush_dirty_buffers(0);
- if (nr_inactive_dirty_pages > nr_free_pages() +
- nr_inactive_clean_pages())
- flushed += page_launder(GFP_KSWAPD, 0);
+ if (free_shortage())
+ flushed += page_launder(GFP_BUFFER, 0);
/* If wakeup_bdflush will wakeup us
after our bdflush_done wakeup, then
#ifndef _ASM_PPC_ATOMIC_H_
#define _ASM_PPC_ATOMIC_H_
-#include <linux/config.h>
-
typedef struct { volatile int counter; } atomic_t;
#define ATOMIC_INIT(i) { (i) }
#include <linux/config.h>
-#ifdef CONFIG_SMP
typedef struct { volatile int counter; } atomic_t;
-#else
-typedef struct { int counter; } atomic_t;
-#endif
#define ATOMIC_INIT(i) ( (atomic_t) { (i) } )
@@ -23,19 +19,12 @@ typedef struct { int counter; } atomic_t;
#include <asm/system.h>
/*
- * Make sure gcc doesn't try to be clever and move things around
- * on us. We need to use _exactly_ the address the user gave us,
- * not some alias that contains the same information.
- */
-#define __atomic_fool_gcc(x) (*(volatile struct { int a[100]; } *)x)
-
-/*
* To get proper branch prediction for the main line, we must branch
* forward to code at the end of this object's .text section, then
* branch back to restart the operation.
*/
-extern __inline__ void atomic_add(int i, atomic_t * v)
+static __inline__ void atomic_add(int i, atomic_t * v)
{
unsigned long flags;
@@ -44,7 +33,7 @@ extern __inline__ void atomic_add(int i, atomic_t * v)
restore_flags(flags);
}
-extern __inline__ void atomic_sub(int i, atomic_t *v)
+static __inline__ void atomic_sub(int i, atomic_t *v)
{
unsigned long flags;
@@ -53,7 +42,7 @@ extern __inline__ void atomic_sub(int i, atomic_t *v)
restore_flags(flags);
}
-extern __inline__ int atomic_add_return(int i, atomic_t * v)
+static __inline__ int atomic_add_return(int i, atomic_t * v)
{
unsigned long temp, flags;
@@ -66,7 +55,7 @@ extern __inline__ int atomic_add_return(int i, atomic_t * v)
return temp;
}
-extern __inline__ int atomic_sub_return(int i, atomic_t * v)
+static __inline__ int atomic_sub_return(int i, atomic_t * v)
{
unsigned long temp, flags;
@@ -88,7 +77,7 @@ extern __inline__ int atomic_sub_return(int i, atomic_t * v)
#define atomic_inc(v) atomic_add(1,(v))
#define atomic_dec(v) atomic_sub(1,(v))
-extern __inline__ void atomic_clear_mask(unsigned int mask, atomic_t *v)
+static __inline__ void atomic_clear_mask(unsigned int mask, atomic_t *v)
{
unsigned long flags;
@@ -97,7 +86,7 @@ extern __inline__ void atomic_clear_mask(unsigned int mask, atomic_t *v)
restore_flags(flags);
}
-extern __inline__ void atomic_set_mask(unsigned int mask, atomic_t *v)
+static __inline__ void atomic_set_mask(unsigned int mask, atomic_t *v)
{
unsigned long flags;
/* For __swab32 */
#include <asm/byteorder.h>
-extern __inline__ void set_bit(int nr, volatile void * addr)
+static __inline__ void set_bit(int nr, volatile void * addr)
{
int mask;
volatile unsigned int *a = addr;
@@ -19,7 +19,12 @@ extern __inline__ void set_bit(int nr, volatile void * addr)
restore_flags(flags);
}
-extern __inline__ void clear_bit(int nr, volatile void * addr)
+/*
+ * clear_bit() doesn't provide any barrier for the compiler.
+ */
+#define smp_mb__before_clear_bit() barrier()
+#define smp_mb__after_clear_bit() barrier()
+static __inline__ void clear_bit(int nr, volatile void * addr)
{
int mask;
volatile unsigned int *a = addr;
@@ -32,7 +37,7 @@ extern __inline__ void clear_bit(int nr, volatile void * addr)
restore_flags(flags);
}
-extern __inline__ void change_bit(int nr, volatile void * addr)
+static __inline__ void change_bit(int nr, volatile void * addr)
{
int mask;
volatile unsigned int *a = addr;
@@ -45,7 +50,7 @@ extern __inline__ void change_bit(int nr, volatile void * addr)
restore_flags(flags);
}
-extern __inline__ int test_and_set_bit(int nr, volatile void * addr)
+static __inline__ int test_and_set_bit(int nr, volatile void * addr)
{
int mask, retval;
volatile unsigned int *a = addr;
@@ -61,7 +66,7 @@ extern __inline__ int test_and_set_bit(int nr, volatile void * addr)
return retval;
}
-extern __inline__ int test_and_clear_bit(int nr, volatile void * addr)
+static __inline__ int test_and_clear_bit(int nr, volatile void * addr)
{
int mask, retval;
volatile unsigned int *a = addr;
@@ -77,7 +82,7 @@ extern __inline__ int test_and_clear_bit(int nr, volatile void * addr)
return retval;
}
-extern __inline__ int test_and_change_bit(int nr, volatile void * addr)
+static __inline__ int test_and_change_bit(int nr, volatile void * addr)
{
int mask, retval;
volatile unsigned int *a = addr;
@@ -94,12 +99,12 @@ extern __inline__ int test_and_change_bit(int nr, volatile void * addr)
}
-extern __inline__ int test_bit(int nr, const volatile void *addr)
+static __inline__ int test_bit(int nr, const volatile void *addr)
{
return 1UL & (((const volatile unsigned int *) addr)[nr >> 5] >> (nr & 31));
}
-extern __inline__ unsigned long ffz(unsigned long word)
+static __inline__ unsigned long ffz(unsigned long word)
{
unsigned long result;
@@ -108,11 +113,12 @@ extern __inline__ unsigned long ffz(unsigned long word)
"bt/s 1b\n\t"
" add #1, %0"
: "=r" (result), "=r" (word)
- : "0" (~0L), "1" (word));
+ : "0" (~0L), "1" (word)
+ : "t");
return result;
}
-extern __inline__ int find_next_zero_bit(void *addr, int size, int offset)
+static __inline__ int find_next_zero_bit(void *addr, int size, int offset)
{
unsigned long *p = ((unsigned long *) addr) + (offset >> 5);
unsigned long result = offset & ~31UL;
@@ -159,7 +165,7 @@ found_middle:
#define ext2_find_next_zero_bit(addr, size, offset) \
find_next_zero_bit((addr), (size), (offset))
#else
-extern __inline__ int ext2_set_bit(int nr, volatile void * addr)
+static __inline__ int ext2_set_bit(int nr, volatile void * addr)
{
int mask, retval;
unsigned long flags;
@@ -174,7 +180,7 @@ extern __inline__ int ext2_set_bit(int nr, volatile void * addr)
return retval;
}
-extern __inline__ int ext2_clear_bit(int nr, volatile void * addr)
+static __inline__ int ext2_clear_bit(int nr, volatile void * addr)
{
int mask, retval;
unsigned long flags;
@@ -189,7 +195,7 @@ extern __inline__ int ext2_clear_bit(int nr, volatile void * addr)
return retval;
}
-extern __inline__ int ext2_test_bit(int nr, const volatile void * addr)
+static __inline__ int ext2_test_bit(int nr, const volatile void * addr)
{
int mask;
const volatile unsigned char *ADDR = (const unsigned char *) addr;
@@ -202,7 +208,7 @@ extern __inline__ int ext2_test_bit(int nr, const volatile void * addr)
#define ext2_find_first_zero_bit(addr, size) \
ext2_find_next_zero_bit((addr), (size), 0)
-extern __inline__ unsigned long ext2_find_next_zero_bit(void *addr, unsigned long size, unsigned long offset)
+static __inline__ unsigned long ext2_find_next_zero_bit(void *addr, unsigned long size, unsigned long offset)
{
unsigned long *p = ((unsigned long *) addr) + (offset >> 5);
unsigned long result = offset & ~31UL;
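
The smp_mb__before/after_clear_bit() hooks introduced above exist because clear_bit() on its own promises nothing about the ordering of surrounding memory accesses; on SH they are plain compiler barriers. The intended pattern is to bracket a clear_bit() that releases a flag another CPU may be testing, for example (hypothetical flag and structure):

    shared->result = val;            /* publish the data ...              */
    smp_mb__before_clear_bit();      /* ... strictly before dropping flag */
    clear_bit(BUSY_BIT, &shared->flags);
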
@@ -82,7 +82,8 @@ static __inline__ unsigned int csum_fold(unsigned int sum)
"add %1, %0\n\t"
"not %0, %0\n\t"
: "=r" (sum), "=&r" (__dummy)
- : "0" (sum));
+ : "0" (sum)
+ : "t");
return sum;
}
@@ -115,7 +116,8 @@ static __inline__ unsigned short ip_fast_csum(unsigned char * iph, unsigned int ihl)
are modified, we must also specify them as outputs, or gcc
will assume they contain their original values. */
: "=r" (sum), "=r" (iph), "=r" (ihl), "=&r" (__dummy0), "=&z" (__dummy1)
- : "1" (iph), "2" (ihl));
+ : "1" (iph), "2" (ihl)
+ : "t");
return csum_fold(sum);
}
@@ -138,7 +140,8 @@ static __inline__ unsigned long csum_tcpudp_nofold(unsigned long saddr,
"movt %0\n\t"
"add %1, %0"
: "=r" (sum), "=r" (len_proto)
- : "r" (daddr), "r" (saddr), "1" (len_proto), "0" (sum));
+ : "r" (daddr), "r" (saddr), "1" (len_proto), "0" (sum)
+ : "t");
return sum;
}
@@ -197,7 +200,8 @@ static __inline__ unsigned short int csum_ipv6_magic(struct in6_addr *saddr,
"add %1, %0\n"
: "=r" (sum), "=&r" (__dummy)
: "r" (saddr), "r" (daddr),
- "r" (htonl(len)), "r" (htonl(proto)), "0" (sum));
+ "r" (htonl(len)), "r" (htonl(proto)), "0" (sum)
+ : "t");
return csum_fold(sum);
}
@@ -15,7 +15,8 @@ extern __inline__ void __delay(unsigned long loops)
"bf/s 1b\n\t"
" dt %0"
: "=r" (loops)
- : "0" (loops));
+ : "0" (loops)
+ : "t");
}
extern __inline__ void __udelay(unsigned long usecs, unsigned long lps)
#define F_SETSIG 10 /* for sockets. */
#define F_GETSIG 11 /* for sockets. */
+#define F_GETLK64 12 /* using 'struct flock64' */
+#define F_SETLK64 13
+#define F_SETLKW64 14
+
/* for F_[GET|SET]FL */
#define FD_CLOEXEC 1 /* actually anything with low bit set goes */
@@ -70,6 +74,14 @@ struct flock {
pid_t l_pid;
};
+struct flock64 {
+ short l_type;
+ short l_whence;
+ loff_t l_start;
+ loff_t l_len;
+ pid_t l_pid;
+};
+
#define F_LINUX_SPECIFIC_BASE 1024
#endif /* __ASM_SH_FCNTL_H */
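
The flock64 variant mirrors the classic struct but carries loff_t offsets, so a lock can start or extend beyond 2GB. Usage follows the familiar fcntl pattern with the new commands; roughly (a sketch, assuming fd is an open descriptor and loff_t is 64-bit here):

    struct flock64 fl;

    fl.l_type   = F_WRLCK;
    fl.l_whence = 0;                 /* SEEK_SET */
    fl.l_start  = 0x100000000LL;     /* region starting past 4GB */
    fl.l_len    = 4096;
    fl.l_pid    = 0;
    fcntl(fd, F_SETLK64, &fl);
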
#include <asm/machvec.h>
#ifndef MAX_HWIFS
-#define MAX_HWIFS 1
+/* Should never have less than 2, ide-pci.c(ide_match_hwif) requires it */
+#define MAX_HWIFS 2
#endif
#define ide__sti() __sti()
@@ -62,6 +62,10 @@ extern void hd64461_outsl(unsigned int port, const void *addr, unsigned long count)
# define __writew generic_writew
# define __writel generic_writel
+# define __isa_port2addr generic_isa_port2addr
+# define __ioremap generic_ioremap
+# define __iounmap generic_iounmap
+
#endif
#endif /* _ASM_SH_IO_HD64461_H */
#if defined(CONFIG_CPU_SUBTYPE_SH7707) || defined(CONFIG_CPU_SUBTYPE_SH7709)
#define SCIF_ERI_IRQ 56
#define SCIF_RXI_IRQ 57
+#define SCIF_BRI_IRQ 58
#define SCIF_TXI_IRQ 59
#define SCIF_IPR_ADDR INTC_IPRE
#define SCIF_IPR_POS 1
#define IRDA_ERI_IRQ 52
#define IRDA_RXI_IRQ 53
+#define IRDA_BRI_IRQ 54
#define IRDA_TXI_IRQ 55
#define IRDA_IPR_ADDR INTC_IPRE
#define IRDA_IPR_POS 2
#elif defined(CONFIG_CPU_SUBTYPE_SH7750)
#define SCIF_ERI_IRQ 40
#define SCIF_RXI_IRQ 41
+#define SCIF_BRI_IRQ 42
#define SCIF_TXI_IRQ 43
#define SCIF_IPR_ADDR INTC_IPRC
#define SCIF_IPR_POS 1
[ P0/U0 (virtual) ] 0x00000000 <------ User space
[ P1 (fixed) cached ] 0x80000000 <------ Kernel space
[ P2 (fixed) non-cachable] 0xA0000000 <------ Physical access
- [ P3 (virtual) cached] 0xC0000000 <------ not used
+ [ P3 (virtual) cached] 0xC0000000 <------ vmalloced area
[ P4 control ] 0xE0000000
*/
#define clear_page(page) memset((void *)(page), 0, PAGE_SIZE)
#define copy_page(to,from) memcpy((void *)(to), (void *)(from), PAGE_SIZE)
+
+#if defined(__sh3__)
#define clear_user_page(page, vaddr) clear_page(page)
#define copy_user_page(to, from, vaddr) copy_page(to, from)
+#elif defined(__SH4__)
+extern void clear_user_page(void *to, unsigned long address);
+extern void copy_user_page(void *to, void *from, unsigned long address);
+#endif
/*
* These are used to make use of C type-checking..
@@ -62,7 +68,7 @@ typedef struct { unsigned long pgprot; } pgprot_t;
#define __MEMORY_START CONFIG_MEMORY_START
-#define PAGE_OFFSET (0x80000000)
+#define PAGE_OFFSET (0x80000000UL)
#define __pa(x) ((unsigned long)(x)-PAGE_OFFSET)
#define __va(x) ((void *)((unsigned long)(x)+PAGE_OFFSET))
#define virt_to_page(kaddr) (mem_map + ((__pa(kaddr)-__MEMORY_START) >> PAGE_SHIFT))
#define PCIBIOS_MIN_MEM 0x10000000
#endif
-extern inline void pcibios_set_master(struct pci_dev *dev)
+static inline void pcibios_set_master(struct pci_dev *dev)
{
/* No special bus mastering setup handling */
}
-extern inline void pcibios_penalize_isa_irq(int irq)
+static inline void pcibios_penalize_isa_irq(int irq)
{
/* We don't do dynamic PCI IRQ allocation */
}
@@ -67,7 +67,7 @@ extern void pci_free_consistent(struct pci_dev *hwdev, size_t size,
* Once the device is given the dma address, the device owns this memory
* until either pci_unmap_single or pci_dma_sync_single is performed.
*/
-extern inline dma_addr_t pci_map_single(struct pci_dev *hwdev, void *ptr,
+static inline dma_addr_t pci_map_single(struct pci_dev *hwdev, void *ptr,
size_t size,int direction)
{
return virt_to_bus(ptr);
@@ -80,7 +80,7 @@ extern inline dma_addr_t pci_map_single(struct pci_dev *hwdev, void *ptr,
* After this call, reads by the cpu to the buffer are guaranteed to see
* whatever the device wrote there.
*/
-extern inline void pci_unmap_single(struct pci_dev *hwdev, dma_addr_t dma_addr,
+static inline void pci_unmap_single(struct pci_dev *hwdev, dma_addr_t dma_addr,
size_t size,int direction)
{
/* Nothing to do */
@@ -101,7 +101,7 @@ extern inline void pci_unmap_single(struct pci_dev *hwdev, dma_addr_t dma_addr,
* Device ownership issues as mentioned above for pci_map_single are
* the same here.
*/
-extern inline int pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg,
+static inline int pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg,
int nents,int direction)
{
return nents;
@@ -111,7 +111,7 @@ extern inline int pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg,
* Again, cpu read rules concerning calls here are the same as for
* pci_unmap_single() above.
*/
-extern inline void pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg,
+static inline void pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg,
int nents,int direction)
{
/* Nothing to do */
@@ -126,7 +126,7 @@ extern inline void pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg,
* next point you give the PCI dma address back to the card, the
* device again owns the buffer.
*/
-extern inline void pci_dma_sync_single(struct pci_dev *hwdev,
+static inline void pci_dma_sync_single(struct pci_dev *hwdev,
dma_addr_t dma_handle,
size_t size,int direction)
{
@@ -139,7 +139,7 @@ extern inline void pci_dma_sync_single(struct pci_dev *hwdev,
* The same as pci_dma_sync_single but for a scatter-gather list,
* same rules and usage.
*/
-extern inline void pci_dma_sync_sg(struct pci_dev *hwdev,
+static inline void pci_dma_sync_sg(struct pci_dev *hwdev,
struct scatterlist *sg,
int nelems,int direction)
{
@@ -92,45 +92,44 @@ extern unsigned long empty_zero_page[1024];
#define VMALLOC_VMADDR(x) ((unsigned long)(x))
#define VMALLOC_END P4SEG
-#define _PAGE_PRESENT 0x001 /* software: page is present */
-#define _PAGE_ACCESSED 0x002 /* software: page referenced */
+/* 0x001 WT-bit on SH-4, 0 on SH-3 */
+#define _PAGE_HW_SHARED 0x002 /* SH-bit : page is shared among processes */
#define _PAGE_DIRTY 0x004 /* D-bit : page changed */
#define _PAGE_CACHABLE 0x008 /* C-bit : cachable */
-/* 0x010 SZ-bit : size of page */
+/* 0x010 SZ0-bit : Size of page */
#define _PAGE_RW 0x020 /* PR0-bit : write access allowed */
#define _PAGE_USER 0x040 /* PR1-bit : user space access allowed */
-#define _PAGE_PROTNONE 0x080 /* software: if not present */
-/* 0x100 V-bit : page is valid */
-/* 0x200 can be used as software flag */
-/* 0x400 can be used as software flag */
-/* 0x800 can be used as software flag */
+/* 0x080 SZ1-bit : Size of page (on SH-4) */
+#define _PAGE_PRESENT 0x100 /* V-bit : page is valid */
+#define _PAGE_PROTNONE 0x200 /* software: if not present */
+#define _PAGE_ACCESSED 0x400 /* software: page referenced */
+#define _PAGE_U0_SHARED 0x800 /* software: page is shared in user space */
-#if defined(__sh3__)
/* Mask which drop software flags */
-#define _PAGE_FLAGS_HARDWARE_MASK 0x1ffff06c
-/* Flags defalult: SZ=1 (4k-byte), C=0 (non-cachable), SH=0 (not shared) */
-#define _PAGE_FLAGS_HARDWARE_DEFAULT 0x00000110
+#define _PAGE_FLAGS_HARDWARE_MASK 0x1ffff1ff
+/* Hardware flags: SZ=1 (4k-byte) */
+#define _PAGE_FLAGS_HARD 0x00000010
+
+#if defined(__sh3__)
+#define _PAGE_SHARED _PAGE_HW_SHARED
#elif defined(__SH4__)
-/* Mask which drops software flags */
-#define _PAGE_FLAGS_HARDWARE_MASK 0x1ffff06c
-/* Flags defalult: SZ=01 (4k-byte), C=0 (non-cachable), SH=0 (not shared), WT=0 */
-#define _PAGE_FLAGS_HARDWARE_DEFAULT 0x00000110
+#define _PAGE_SHARED _PAGE_U0_SHARED
#endif
#define _PAGE_TABLE (_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | _PAGE_ACCESSED | _PAGE_DIRTY)
#define _KERNPG_TABLE (_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED | _PAGE_DIRTY)
-#define _PAGE_CHG_MASK (PTE_MASK | _PAGE_ACCESSED | _PAGE_CACHABLE | _PAGE_DIRTY)
+#define _PAGE_CHG_MASK (PTE_MASK | _PAGE_ACCESSED | _PAGE_CACHABLE | _PAGE_DIRTY | _PAGE_SHARED)
-#define PAGE_NONE __pgprot(_PAGE_PROTNONE | _PAGE_CACHABLE |_PAGE_ACCESSED)
-#define PAGE_SHARED __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | _PAGE_CACHABLE |_PAGE_ACCESSED)
-#define PAGE_COPY __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_CACHABLE | _PAGE_ACCESSED)
-#define PAGE_READONLY __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_CACHABLE | _PAGE_ACCESSED)
-#define PAGE_KERNEL __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_CACHABLE | _PAGE_DIRTY | _PAGE_ACCESSED)
-#define PAGE_KERNEL_RO __pgprot(_PAGE_PRESENT | _PAGE_CACHABLE | _PAGE_DIRTY | _PAGE_ACCESSED)
+#define PAGE_NONE __pgprot(_PAGE_PROTNONE | _PAGE_CACHABLE |_PAGE_ACCESSED | _PAGE_FLAGS_HARD)
+#define PAGE_SHARED __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | _PAGE_CACHABLE |_PAGE_ACCESSED | _PAGE_SHARED | _PAGE_FLAGS_HARD)
+#define PAGE_COPY __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_CACHABLE | _PAGE_ACCESSED | _PAGE_FLAGS_HARD)
+#define PAGE_READONLY __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_CACHABLE | _PAGE_ACCESSED | _PAGE_FLAGS_HARD)
+#define PAGE_KERNEL __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_CACHABLE | _PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_HW_SHARED | _PAGE_FLAGS_HARD)
+#define PAGE_KERNEL_RO __pgprot(_PAGE_PRESENT | _PAGE_CACHABLE | _PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_HW_SHARED | _PAGE_FLAGS_HARD)
/*
* As i386 and MIPS, SuperH can't do page protection for execute, and
- * considers that the same are read. Also, write permissions imply
+ * considers that the same as a read. Also, write permissions imply
* read permissions. This is the closest we can get..
*/
@@ -184,6 +183,7 @@ extern inline int pte_exec(pte_t pte) { return pte_val(pte) & _PAGE_USER; }
extern inline int pte_dirty(pte_t pte){ return pte_val(pte) & _PAGE_DIRTY; }
extern inline int pte_young(pte_t pte){ return pte_val(pte) & _PAGE_ACCESSED; }
extern inline int pte_write(pte_t pte){ return pte_val(pte) & _PAGE_RW; }
+extern inline int pte_shared(pte_t pte){ return pte_val(pte) & _PAGE_SHARED; }
extern inline pte_t pte_rdprotect(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) & ~_PAGE_USER)); return pte; }
extern inline pte_t pte_exprotect(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) & ~_PAGE_USER)); return pte; }
@@ -244,11 +244,15 @@ extern void update_mmu_cache(struct vm_area_struct * vma, unsigned long address,
pte_t pte);
/* Encode and de-code a swap entry */
-#define SWP_TYPE(x) (((x).val >> 1) & 0x3f)
-#define SWP_OFFSET(x) ((x).val >> 8)
-#define SWP_ENTRY(type, offset) ((swp_entry_t) { ((type) << 1) | ((offset) << 8) })
-#define pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
-#define swp_entry_to_pte(x) ((pte_t) { (x).val })
+/*
+ * NOTE: We should set ZEROs at the position of _PAGE_PRESENT
+ * and _PAGE_PROTNONE bits
+ */
+#define SWP_TYPE(x) ((x).val & 0xff)
+#define SWP_OFFSET(x) ((x).val >> 10)
+#define SWP_ENTRY(type, offset) ((swp_entry_t) { (type) | ((offset) << 10) })
+#define pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
+#define swp_entry_to_pte(x) ((pte_t) { (x).val })
#define module_map vmalloc
#define module_unmap vfree
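
The new layout keeps the swap type in bits 0-7 and starts the offset at bit 10, so bits 8 and 9 -- exactly _PAGE_PRESENT (0x100) and _PAGE_PROTNONE (0x200) -- can never be set in a swapped-out PTE, which is what the NOTE above demands. A worked round trip (illustrative values):

    swp_entry_t e = SWP_ENTRY(3, 0x1234);
    /* e.val == 3 | (0x1234 << 10) == 0x0048d003 */
    /* SWP_TYPE(e)   == e.val & 0xff == 3        */
    /* SWP_OFFSET(e) == e.val >> 10  == 0x1234   */
    /* bits 0x100 and 0x200 are zero by construction */
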
@@ -20,7 +20,7 @@ extern __inline__ char *strcpy(char *__dest, const char *__src)
" add #1, %0\n\t"
: "=r" (__dest), "=r" (__src), "=&z" (__dummy)
: "0" (__dest), "1" (__src)
- : "memory");
+ : "memory", "t");
return __xdest;
}
@@ -46,7 +46,7 @@ extern __inline__ char *strncpy(char *__dest, const char *__src, size_t __n)
"2:"
: "=r" (__dest), "=r" (__src), "=&z" (__dummy)
: "0" (__dest), "1" (__src), "r" (__src+__n)
- : "memory");
+ : "memory", "t");
return __xdest;
}
@@ -71,7 +71,8 @@ extern __inline__ int strcmp(const char *__cs, const char *__ct)
"sub %3, %2\n"
"2:"
: "=r" (__cs), "=r" (__ct), "=&r" (__res), "=&z" (__dummy)
- : "0" (__cs), "1" (__ct));
+ : "0" (__cs), "1" (__ct)
+ : "t");
return __res;
}
@@ -82,6 +83,9 @@ extern __inline__ int strncmp(const char *__cs, const char *__ct, size_t __n)
register int __res;
unsigned long __dummy;
+ if (__n == 0)
+ return 0;
+
__asm__ __volatile__(
"mov.b @%1+, %3\n"
"1:\n\t"
@@ -99,7 +103,8 @@ extern __inline__ int strncmp(const char *__cs, const char *__ct, size_t __n)
"sub %3, %2\n"
"3:"
:"=r" (__cs), "=r" (__ct), "=&r" (__res), "=&z" (__dummy)
- : "0" (__cs), "1" (__ct), "r" (__cs+__n));
+ : "0" (__cs), "1" (__ct), "r" (__cs+__n)
+ : "t");
return __res;
}
@@ -21,12 +21,12 @@ typedef struct {
#define prepare_to_switch() do { } while(0)
#define switch_to(prev,next,last) do { \
register struct task_struct *__last; \
- register unsigned long *__ts1 __asm__ ("$r1") = &prev->thread.sp; \
- register unsigned long *__ts2 __asm__ ("$r2") = &prev->thread.pc; \
- register unsigned long *__ts4 __asm__ ("$r4") = (unsigned long *)prev; \
- register unsigned long *__ts5 __asm__ ("$r5") = (unsigned long *)next; \
- register unsigned long *__ts6 __asm__ ("$r6") = &next->thread.sp; \
- register unsigned long __ts7 __asm__ ("$r7") = next->thread.pc; \
+ register unsigned long *__ts1 __asm__ ("r1") = &prev->thread.sp; \
+ register unsigned long *__ts2 __asm__ ("r2") = &prev->thread.pc; \
+ register unsigned long *__ts4 __asm__ ("r4") = (unsigned long *)prev; \
+ register unsigned long *__ts5 __asm__ ("r5") = (unsigned long *)next; \
+ register unsigned long *__ts6 __asm__ ("r6") = &next->thread.sp; \
+ register unsigned long __ts7 __asm__ ("r7") = next->thread.pc; \
__asm__ __volatile__ (".balign 4\n\t" \
"stc.l $gbr, @-$r15\n\t" \
"sts.l $pr, @-$r15\n\t" \
@@ -63,7 +63,7 @@ typedef struct {
:"0" (prev), \
"r" (__ts1), "r" (__ts2), \
"r" (__ts4), "r" (__ts5), "r" (__ts6), "r" (__ts7) \
- :"r3"); \
+ :"r3", "t"); \
last = __last; \
} while (0)
#endif
@@ -88,11 +88,22 @@ extern void __xchg_called_with_bad_pointer(void);
#define mb() __asm__ __volatile__ ("": : :"memory")
#define rmb() mb()
#define wmb() __asm__ __volatile__ ("": : :"memory")
+
+#ifdef CONFIG_SMP
+#define smp_mb() mb()
+#define smp_rmb() rmb()
+#define smp_wmb() wmb()
+#else
+#define smp_mb() barrier()
+#define smp_rmb() barrier()
+#define smp_wmb() barrier()
+#endif
+
#define set_mb(var, value) do { xchg(&var, value); } while (0)
#define set_wmb(var, value) do { var = value; wmb(); } while (0)
/* Interrupt Control */
-extern __inline__ void __sti(void)
+static __inline__ void __sti(void)
{
unsigned long __dummy0, __dummy1;
@@ -106,7 +117,7 @@ extern __inline__ void __sti(void)
: "memory");
}
-extern __inline__ void __cli(void)
+static __inline__ void __cli(void)
{
unsigned long __dummy;
__asm__ __volatile__("stc $sr, %0\n\t"
@@ -205,7 +216,7 @@ extern void __global_restore_flags(unsigned long);
#endif
-extern __inline__ unsigned long xchg_u32(volatile int * m, unsigned long val)
+static __inline__ unsigned long xchg_u32(volatile int * m, unsigned long val)
{
unsigned long flags, retval;
@@ -216,7 +227,7 @@ extern __inline__ unsigned long xchg_u32(volatile int * m, unsigned long val)
return retval;
}
-extern __inline__ unsigned long xchg_u8(volatile unsigned char * m, unsigned long val)
+static __inline__ unsigned long xchg_u8(volatile unsigned char * m, unsigned long val)
{
unsigned long flags, retval;
* sum := addr + size; carry? --> flag = true;
* if (sum >= addr_limit) flag = true;
*/
-#define __range_ok(addr,size) ({ \
- unsigned long flag,sum; \
- __asm__("clrt; addc %3, %1; movt %0; cmp/hi %4, %1; rotcl %0" \
- :"=&r" (flag), "=r" (sum) \
- :"1" (addr), "r" ((int)(size)), "r" (current->addr_limit.seg)); \
+#define __range_ok(addr,size) ({ \
+ unsigned long flag,sum; \
+ __asm__("clrt; addc %3, %1; movt %0; cmp/hi %4, %1; rotcl %0" \
+ :"=&r" (flag), "=r" (sum) \
+ :"1" (addr), "r" ((int)(size)), "r" (current->addr_limit.seg) \
+ :"t"); \
flag; })
#define access_ok(type,addr,size) (__range_ok(addr,size) == 0)
@@ -186,7 +187,8 @@ __asm__ __volatile__( \
".long 1b, 3b\n\t" \
".previous" \
:"=&r" (__pu_err) \
- :"r" (__pu_val), "m" (__m(__pu_addr)), "i" (-EFAULT)); })
+ :"r" (__pu_val), "m" (__m(__pu_addr)), "i" (-EFAULT) \
+ :"memory"); })
extern void __put_user_unknown(void);
\f
@@ -224,7 +226,7 @@ __copy_user(void *__to, const void *__from, __kernel_size_t __n)
".previous"
: "=r" (res), "=&z" (__dummy), "=r" (_f), "=r" (_t)
: "2" (__from), "3" (__to), "0" (res)
- : "memory");
+ : "memory", "t");
return res;
}
@@ -284,7 +286,8 @@ __clear_user(void *addr, __kernel_size_t size)
" .long 1b,3b\n"
".previous"
: "=r" (size), "=r" (__a)
- : "0" (size), "1" (addr), "r" (0));
+ : "0" (size), "1" (addr), "r" (0)
+ : "memory", "t");
return size;
}
@@ -330,7 +333,7 @@ __strncpy_from_user(unsigned long __dest, unsigned long __src, int __count)
: "=r" (res), "=&z" (__dummy), "=r" (_s), "=r" (_d)
: "0" (__count), "2" (__src), "3" (__dest), "r" (__count),
"i" (-EFAULT)
- : "memory");
+ : "memory", "t");
return res;
}
@@ -376,7 +379,8 @@ extern __inline__ long __strnlen_user(const char *__s, long __n)
" .long 1b,3b\n"
".previous"
: "=z" (res), "=&r" (__dummy)
- : "0" (0), "r" (__s), "r" (__n), "i" (-EFAULT));
+ : "0" (0), "r" (__s), "r" (__n), "i" (-EFAULT)
+ : "t");
return res;
}
#define __NR_mincore 218
#define __NR_madvise 219
#define __NR_getdents64 220
+#define __NR_fcntl64 221
/* user-visible error numbers are in the range -1 - -125: see <asm-sh/errno.h> */
@@ -249,7 +250,7 @@ do { \
#define _syscall0(type,name) \
type name(void) \
{ \
-register long __sc0 __asm__ ("$r3") = __NR_##name; \
+register long __sc0 __asm__ ("r3") = __NR_##name; \
__asm__ __volatile__ ("trapa #0x10" \
: "=z" (__sc0) \
: "0" (__sc0) \
@@ -260,8 +261,8 @@ __syscall_return(type,__sc0); \
#define _syscall1(type,name,type1,arg1) \
type name(type1 arg1) \
{ \
-register long __sc0 __asm__ ("$r3") = __NR_##name; \
-register long __sc4 __asm__ ("$r4") = (long) arg1; \
+register long __sc0 __asm__ ("r3") = __NR_##name; \
+register long __sc4 __asm__ ("r4") = (long) arg1; \
__asm__ __volatile__ ("trapa #0x11" \
: "=z" (__sc0) \
: "0" (__sc0), "r" (__sc4) \
@@ -272,9 +273,9 @@ __syscall_return(type,__sc0); \
#define _syscall2(type,name,type1,arg1,type2,arg2) \
type name(type1 arg1,type2 arg2) \
{ \
-register long __sc0 __asm__ ("$r3") = __NR_##name; \
-register long __sc4 __asm__ ("$r4") = (long) arg1; \
-register long __sc5 __asm__ ("$r5") = (long) arg2; \
+register long __sc0 __asm__ ("r3") = __NR_##name; \
+register long __sc4 __asm__ ("r4") = (long) arg1; \
+register long __sc5 __asm__ ("r5") = (long) arg2; \
__asm__ __volatile__ ("trapa #0x12" \
: "=z" (__sc0) \
: "0" (__sc0), "r" (__sc4), "r" (__sc5) \
@@ -285,10 +286,10 @@ __syscall_return(type,__sc0); \
#define _syscall3(type,name,type1,arg1,type2,arg2,type3,arg3) \
type name(type1 arg1,type2 arg2,type3 arg3) \
{ \
-register long __sc0 __asm__ ("$r3") = __NR_##name; \
-register long __sc4 __asm__ ("$r4") = (long) arg1; \
-register long __sc5 __asm__ ("$r5") = (long) arg2; \
-register long __sc6 __asm__ ("$r6") = (long) arg3; \
+register long __sc0 __asm__ ("r3") = __NR_##name; \
+register long __sc4 __asm__ ("r4") = (long) arg1; \
+register long __sc5 __asm__ ("r5") = (long) arg2; \
+register long __sc6 __asm__ ("r6") = (long) arg3; \
__asm__ __volatile__ ("trapa #0x13" \
: "=z" (__sc0) \
: "0" (__sc0), "r" (__sc4), "r" (__sc5), "r" (__sc6) \
@@ -299,11 +300,11 @@ __syscall_return(type,__sc0); \
#define _syscall4(type,name,type1,arg1,type2,arg2,type3,arg3,type4,arg4) \
type name (type1 arg1, type2 arg2, type3 arg3, type4 arg4) \
{ \
-register long __sc0 __asm__ ("$r3") = __NR_##name; \
-register long __sc4 __asm__ ("$r4") = (long) arg1; \
-register long __sc5 __asm__ ("$r5") = (long) arg2; \
-register long __sc6 __asm__ ("$r6") = (long) arg3; \
-register long __sc7 __asm__ ("$r7") = (long) arg4; \
+register long __sc0 __asm__ ("r3") = __NR_##name; \
+register long __sc4 __asm__ ("r4") = (long) arg1; \
+register long __sc5 __asm__ ("r5") = (long) arg2; \
+register long __sc6 __asm__ ("r6") = (long) arg3; \
+register long __sc7 __asm__ ("r7") = (long) arg4; \
__asm__ __volatile__ ("trapa #0x14" \
: "=z" (__sc0) \
: "0" (__sc0), "r" (__sc4), "r" (__sc5), "r" (__sc6), \
@@ -315,12 +316,12 @@ __syscall_return(type,__sc0); \
#define _syscall5(type,name,type1,arg1,type2,arg2,type3,arg3,type4,arg4,type5,arg5) \
type name (type1 arg1, type2 arg2, type3 arg3, type4 arg4, type5 arg5) \
{ \
-register long __sc3 __asm__ ("$r3") = __NR_##name; \
-register long __sc4 __asm__ ("$r4") = (long) arg1; \
-register long __sc5 __asm__ ("$r5") = (long) arg2; \
-register long __sc6 __asm__ ("$r6") = (long) arg3; \
-register long __sc7 __asm__ ("$r7") = (long) arg4; \
-register long __sc0 __asm__ ("$r0") = (long) arg5; \
+register long __sc3 __asm__ ("r3") = __NR_##name; \
+register long __sc4 __asm__ ("r4") = (long) arg1; \
+register long __sc5 __asm__ ("r5") = (long) arg2; \
+register long __sc6 __asm__ ("r6") = (long) arg3; \
+register long __sc7 __asm__ ("r7") = (long) arg4; \
+register long __sc0 __asm__ ("r0") = (long) arg5; \
__asm__ __volatile__ ("trapa #0x15" \
: "=z" (__sc0) \
: "0" (__sc0), "r" (__sc4), "r" (__sc5), "r" (__sc6), "r" (__sc7), \
@@ -345,7 +346,6 @@ __syscall_return(type,__sc0); \
*/
#define __NR__exit __NR_exit
static __inline__ _syscall0(int,pause)
-static __inline__ _syscall1(int,setup,int,magic)
static __inline__ _syscall0(int,sync)
static __inline__ _syscall0(pid_t,setsid)
static __inline__ _syscall3(int,write,int,fd,const char *,buf,off_t,count)
extern void inet_proto_init(struct net_proto *pro);
extern char *in_ntoa(__u32 in);
-extern char *in_ntoa2(__u32 in, char *buf);
extern __u32 in_aton(const char *str);
#endif
typedef struct kmem_cache_s kmem_cache_t;
-#include <linux/config.h>
#include <linux/mm.h>
#include <linux/cache.h>
@@ -91,6 +91,7 @@ extern void age_page_up(struct page *);
extern void age_page_up_nolock(struct page *);
extern void age_page_down(struct page *);
extern void age_page_down_nolock(struct page *);
+extern void age_page_down_ageonly(struct page *);
extern void deactivate_page(struct page *);
extern void deactivate_page_nolock(struct page *);
extern void activate_page(struct page *);
@@ -26,9 +26,6 @@ extern void vmfree_area_pages(unsigned long address, unsigned long size);
extern int vmalloc_area_pages(unsigned long address, unsigned long size,
int gfp_mask, pgprot_t prot);
-extern struct vm_struct * vmlist;
-
-
/*
* Allocate any pages
*/
#define version(a) Version_ ## a
#define version_string(a) version(a)
-int version_string(LINUX_VERSION_CODE) = 0;
+int version_string(LINUX_VERSION_CODE);
struct new_utsname system_utsname = {
UTS_SYSNAME, UTS_NODENAME, UTS_RELEASE, UTS_VERSION,
@@ -141,65 +141,54 @@ static inline int goodness(struct task_struct * p, int this_cpu, struct mm_struct *this_mm)
int weight;
/*
- * Realtime process, select the first one on the
- * runqueue (taking priorities within processes
- * into account).
+ * select the current process after every other
+ * runnable process, but before the idle thread.
+ * Also, dont trigger a counter recalculation.
*/
- if (p->policy != SCHED_OTHER) {
- weight = 1000 + p->rt_priority;
+ weight = -1;
+ if (p->policy & SCHED_YIELD)
goto out;
- }
/*
- * Give the process a first-approximation goodness value
- * according to the number of clock-ticks it has left.
- *
- * Don't do any other calculations if the time slice is
- * over..
+ * Non-RT process - normal case first.
*/
- weight = p->counter;
- if (!weight)
- goto out;
+ if (p->policy == SCHED_OTHER) {
+ /*
+ * Give the process a first-approximation goodness value
+ * according to the number of clock-ticks it has left.
+ *
+ * Don't do any other calculations if the time slice is
+ * over..
+ */
+ weight = p->counter;
+ if (!weight)
+ goto out;
#ifdef CONFIG_SMP
- /* Give a largish advantage to the same processor... */
- /* (this is equivalent to penalizing other processors) */
- if (p->processor == this_cpu)
- weight += PROC_CHANGE_PENALTY;
+ /* Give a largish advantage to the same processor... */
+ /* (this is equivalent to penalizing other processors) */
+ if (p->processor == this_cpu)
+ weight += PROC_CHANGE_PENALTY;
#endif
- /* .. and a slight advantage to the current MM */
- if (p->mm == this_mm || !p->mm)
- weight += 1;
- weight += 20 - p->nice;
+ /* .. and a slight advantage to the current MM */
+ if (p->mm == this_mm || !p->mm)
+ weight += 1;
+ weight += 20 - p->nice;
+ goto out;
+ }
+ /*
+ * Realtime process, select the first one on the
+ * runqueue (taking priorities within processes
+ * into account).
+ */
+ weight = 1000 + p->rt_priority;
out:
return weight;
}
/*
- * subtle. We want to discard a yielded process only if it's being
- * considered for a reschedule. Wakeup-time 'queries' of the scheduling
- * state do not count. Another optimization we do: sched_yield()-ed
- * processes are runnable (and thus will be considered for scheduling)
- * right when they are calling schedule(). So the only place we need
- * to care about SCHED_YIELD is when we calculate the previous process'
- * goodness ...
- */
-static inline int prev_goodness(struct task_struct * p, int this_cpu, struct mm_struct *this_mm)
-{
- if (p->policy & SCHED_YIELD) {
- /*
- * select the current process after every other
- * runnable process, but before the idle thread.
- * Also, dont trigger a counter recalculation.
- */
- return -1;
- }
- return goodness(p, this_cpu, this_mm);
-}
-
-/*
* the 'goodness value' of replacing a process on a given CPU.
* positive value means 'replace', zero or negative means 'dont'.
*/
@@ -678,7 +667,7 @@ recalculate:
goto repeat_schedule;
still_running:
- c = prev_goodness(prev, this_cpu, prev->active_mm);
+ c = goodness(prev, this_cpu, prev->active_mm);
next = prev;
goto still_running_back;
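
The restructured goodness() now produces one total ordering for every policy, which is what lets schedule() drop the separate prev_goodness() wrapper. Summarizing the return ranges from the code above:

    /* goodness() after this change:
     *   SCHED_YIELD set           -> -1 (behind everything but idle)
     *   SCHED_OTHER, counter == 0 ->  0 (timeslice expired)
     *   SCHED_OTHER, runnable     ->  counter + (20 - nice), plus small
     *                                 CPU-affinity and same-MM bonuses
     *   realtime policies         ->  1000 + rt_priority
     */
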
atomic_t page_cache_size = ATOMIC_INIT(0);
unsigned int page_hash_bits;
struct page **page_hash_table;
-struct list_head lru_cache;
spinlock_t pagecache_lock = SPIN_LOCK_UNLOCKED;
/*
@@ -258,13 +258,13 @@ static struct page * __alloc_pages_limit(zonelist_t *zonelist,
*/
switch (limit) {
default:
- case 0:
+ case PAGES_MIN:
water_mark = z->pages_min;
break;
- case 1:
+ case PAGES_LOW:
water_mark = z->pages_low;
break;
- case 2:
+ case PAGES_HIGH:
water_mark = z->pages_high;
}
@@ -318,10 +318,19 @@ struct page * __alloc_pages(zonelist_t *zonelist, unsigned long order)
direct_reclaim = 1;
/*
- * Are we low on inactive pages?
+ * If we are about to get low on free pages and we also have
+ * an inactive page shortage, wake up kswapd.
*/
if (inactive_shortage() > inactive_target / 2 && free_shortage())
wakeup_kswapd(0);
+ /*
+ * If we are about to get low on free pages and cleaning
+ * the inactive_dirty pages would fix the situation,
+ * wake up bdflush.
+ */
+ else if (free_shortage() && nr_inactive_dirty_pages > free_shortage()
+ && nr_inactive_dirty_pages > freepages.high)
+ wakeup_bdflush(0);
try_again:
/*
@@ -378,8 +387,23 @@ try_again:
*
* We wake up kswapd, in the hope that kswapd will
* resolve this situation before memory gets tight.
+ *
+ * We also yield the CPU, because that:
+ * - gives kswapd a chance to do something
+ * - slows down allocations, in particular the
+ * allocations from the fast allocator that's
+ * causing the problems ...
+ * - ... which minimises the impact the "bad guys"
+ * have on the rest of the system
+ * - if we don't have __GFP_IO set, kswapd may be
+ * able to free some memory we can't free ourselves
*/
wakeup_kswapd(0);
+ if (gfp_mask & __GFP_WAIT) {
+ __set_current_state(TASK_RUNNING);
+ current->policy |= SCHED_YIELD;
+ schedule();
+ }
/*
* After waking up kswapd, we try to allocate a page
@@ -440,28 +464,43 @@ try_again:
* up again. After that we loop back to the start.
*
* We have to do this because something else might eat
- * the memory kswapd frees for us (interrupts, other
- * processes, etc).
+ * the memory kswapd frees for us and we need to be
+ * reliable. Note that we don't loop back for higher
+ * order allocations since it is possible that kswapd
+ * simply cannot free a large enough contiguous area
+ * of memory *ever*.
*/
- if (gfp_mask & __GFP_WAIT) {
- /*
- * Give other processes a chance to run:
- */
- if (current->need_resched) {
- __set_current_state(TASK_RUNNING);
- schedule();
- }
+ if ((gfp_mask & (__GFP_WAIT|__GFP_IO)) == (__GFP_WAIT|__GFP_IO)) {
+ wakeup_kswapd(1);
+ memory_pressure++;
+ if (!order)
+ goto try_again;
+ /*
+ * If __GFP_IO isn't set, we can't wait on kswapd because
+ * kswapd just might need some IO locks /we/ are holding ...
+ *
+ * SUBTLE: The scheduling point above makes sure that
+ * kswapd does get the chance to free memory we can't
+ * free ourselves...
+ */
+ } else if (gfp_mask & __GFP_WAIT) {
try_to_free_pages(gfp_mask);
memory_pressure++;
- goto try_again;
+ if (!order)
+ goto try_again;
}
+
}
/*
* Final phase: allocate anything we can!
*
- * This is basically reserved for PF_MEMALLOC and
- * GFP_ATOMIC allocations...
+ * Higher order allocations, GFP_ATOMIC allocations and
+ * recursive allocations (PF_MEMALLOC) end up here.
+ *
+ * Only recursive allocations can use the very last pages
+ * in the system, otherwise it would be just too easy to
+ * deadlock the system...
*/
zone = zonelist->zones;
for (;;) {
@@ -472,8 +511,21 @@ try_again:
if (!z->size)
BUG();
+ /*
+ * SUBTLE: direct_reclaim is only possible if the task
+ * becomes PF_MEMALLOC while looping above. This will
+ * happen when the OOM killer selects this task for
+ * instant execution...
+ */
if (direct_reclaim)
page = reclaim_page(z);
+ if (page)
+ return page;
+
+ /* XXX: is pages_min/4 a good amount to reserve for this? */
+ if (z->free_pages < z->pages_min / 4 &&
+ !(current->flags & PF_MEMALLOC))
+ continue;
if (!page)
page = rmqueue(z, order);
if (page)
@@ -481,8 +533,7 @@ try_again:
}
/* No luck.. */
- if (!order)
- show_free_areas();
+ printk(KERN_ERR "__alloc_pages: %lu-order allocation failed.\n", order);
return NULL;
}
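
The __GFP_WAIT/__GFP_IO distinctions driving the branches above come
from how the common allocation classes compose the flags; roughly, as
in the 2.4 headers (details may differ between test releases):

	#define GFP_ATOMIC	(__GFP_HIGH)				/* never sleeps */
	#define GFP_BUFFER	(__GFP_HIGH | __GFP_WAIT)		/* sleeps, no IO */
	#define GFP_KERNEL	(__GFP_HIGH | __GFP_WAIT | __GFP_IO)	/* sleeps, may do IO */

So a GFP_BUFFER allocation takes the try_to_free_pages() branch (it may
be holding IO locks that kswapd needs), while a GFP_KERNEL allocation
may block waiting on kswapd and, for order-0 requests, loop back.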
@@ -572,6 +623,13 @@ unsigned int nr_free_buffer_pages (void)
sum = nr_free_pages();
sum += nr_inactive_clean_pages();
sum += nr_inactive_dirty_pages;
+
+ /*
+ * Keep our write-behind queue filled, even if
+ * kswapd lags a bit right now.
+ */
+ if (sum < freepages.high + inactive_target)
+ sum = freepages.high + inactive_target;
/*
* We don't want dirty page writebehind to put too
* much pressure on the working set, but we want it
@@ -100,6 +100,15 @@ void age_page_up_nolock(struct page * page)
page->age = PAGE_AGE_MAX;
}
+/*
+ * We use this (minimal) function in the case where we
+ * know we can't deactivate the page (yet).
+ */
+void age_page_down_ageonly(struct page * page)
+{
+ page->age /= 2;
+}
+
void age_page_down_nolock(struct page * page)
{
/* The actual page aging bit */
@@ -155,30 +164,39 @@ void age_page_down(struct page * page)
*/
void deactivate_page_nolock(struct page * page)
{
+ /*
+ * One for the cache, one for the extra reference the
+ * caller has and (maybe) one for the buffers.
+ *
+ * This isn't perfect, but works for just about everything.
+ * Besides, as long as we don't move unfreeable pages to the
+ * inactive_clean list it doesn't need to be perfect...
+ */
+ int maxcount = (page->buffers ? 3 : 2);
page->age = 0;
/*
* Don't touch it if it's not on the active list.
* (some pages aren't on any list at all)
*/
- if (PageActive(page) && (page_count(page) <= 2 || page->buffers) &&
+ if (PageActive(page) && page_count(page) <= maxcount &&
!page_ramdisk(page)) {
/*
* We can move the page to the inactive_dirty list
- * if we know there is backing store available.
+ * if we have the strong suspicion that it might
+ * become freeable in the near future.
*
- * We also move pages here that we cannot free yet,
- * but may be able to free later - because most likely
- * we're holding an extra reference on the page which
- * will be dropped right after deactivate_page().
+ * That is, the page has buffer heads attached (that
+ * need to be cleared away) and/or the function calling
+ * us has an extra reference count on the page.
*/
if (page->buffers || page_count(page) == 2) {
del_page_from_active_list(page);
add_page_to_inactive_dirty_list(page);
/*
- * If the page is clean and immediately reusable,
- * we can move it to the inactive_clean list.
+ * Only if we are SURE the page is clean and immediately
+ * reusable do we move it to the inactive_clean list.
*/
} else if (page->mapping && !PageDirty(page) &&
!PageLocked(page)) {
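
To make the reference budget concrete: a page sitting in the page
cache holds one reference, the caller of deactivate_page() holds a
second, and attached buffer heads account for a third, which is
exactly the

	int maxcount = (page->buffers ? 3 : 2);

computed above; a count beyond that means some other user still has
the page pinned, so deactivating it would be futile.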
@@ -215,6 +233,10 @@ void activate_page_nolock(struct page * page)
* not to do anything.
*/
}
+
+ /* Make sure the page gets a fair chance at staying active. */
+ if (page->age < PAGE_AGE_START)
+ page->age = PAGE_AGE_START;
}
void activate_page(struct page * page)
@@ -74,7 +74,8 @@ static int try_to_swap_out(struct mm_struct * mm, struct vm_area_struct* vma, un
goto out_failed;
}
if (!onlist)
- age_page_down(page);
+ /* The page is still mapped, so it can't be freeable... */
+ age_page_down_ageonly(page);
/*
* If the page is in active use by us, or if the page
@@ -419,7 +420,7 @@ static int swap_out(unsigned int priority, int gfp_mask, unsigned long idle_time
continue;
/* Skip tasks which haven't slept long enough yet when idle-swapping. */
if (idle_time && !assign && (!(p->state & TASK_INTERRUPTIBLE) ||
- time_before(p->sleep_time + idle_time * HZ, jiffies)))
+ time_after(p->sleep_time + idle_time * HZ, jiffies)))
continue;
found_task++;
/* Refresh swap_cnt? */
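
The time_before() -> time_after() change inverts a backwards test; the
jiffies-wraparound helpers read as (paraphrasing linux/timer.h):

	#define time_after(a,b)		((long)(b) - (long)(a) < 0)
	#define time_before(a,b)	time_after(b,a)

so time_after(p->sleep_time + idle_time * HZ, jiffies) is true while
the task's minimum idle period still lies in the future, i.e. the task
went to sleep too recently and gets skipped; the old test skipped
exactly the wrong set of tasks.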
@@ -536,6 +537,7 @@ struct page * reclaim_page(zone_t * zone)
found_page:
del_page_from_inactive_clean_list(page);
UnlockPage(page);
+ page->age = PAGE_AGE_START;
if (page_count(page) != 1)
printk("VM: reclaim_page, found page with count %d!\n",
page_count(page));
* This code is heavily inspired by the FreeBSD source code. Thanks
* go out to Matthew Dillon.
*/
-#define MAX_SYNC_LAUNDER (1 << page_cluster)
-#define MAX_LAUNDER (MAX_SYNC_LAUNDER * 4)
+#define MAX_LAUNDER (4 * (1 << page_cluster))
int page_launder(int gfp_mask, int sync)
{
- int synclaunder, launder_loop, maxscan, cleaned_pages, maxlaunder;
+ int launder_loop, maxscan, cleaned_pages, maxlaunder;
+ int can_get_io_locks;
struct list_head * page_lru;
struct page * page;
+ /*
+ * We can only grab the IO locks (eg. for flushing dirty
+ * buffers to disk) if __GFP_IO is set.
+ */
+ can_get_io_locks = gfp_mask & __GFP_IO;
+
launder_loop = 0;
- synclaunder = 0;
maxlaunder = 0;
cleaned_pages = 0;
- if (!(gfp_mask & __GFP_IO))
- return 0;
-
dirty_page_rescan:
spin_lock(&pagemap_lru_lock);
maxscan = nr_inactive_dirty_pages;
@@ -638,7 +642,7 @@ dirty_page_rescan:
spin_unlock(&pagemap_lru_lock);
/* Will we do (asynchronous) IO? */
- if (launder_loop && synclaunder-- > 0)
+ if (launder_loop && maxlaunder == 0 && sync)
wait = 2; /* Synchronous IO */
else if (launder_loop && maxlaunder-- > 0)
wait = 1; /* Async IO */
@@ -725,10 +729,11 @@ dirty_page_rescan:
* loads, flush out the dirty pages before we have to wait on
* IO.
*/
- if (!launder_loop && free_shortage()) {
+ if (can_get_io_locks && !launder_loop && free_shortage()) {
launder_loop = 1;
- if (sync && !cleaned_pages)
- synclaunder = MAX_SYNC_LAUNDER;
+ /* If we cleaned pages, never do synchronous IO. */
+ if (cleaned_pages)
+ sync = 0;
/* We only do a few "out of order" flushes. */
maxlaunder = MAX_LAUNDER;
/* Kflushd takes care of the rest. */
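
Flattened out, the new page_launder() policy is (a paraphrase, not the
verbatim code):

	/*
	 * pass 1: scan the inactive_dirty list without doing any IO,
	 *	   reclaiming whatever is trivially freeable;
	 * pass 2: only entered when __GFP_IO is set and a free
	 *	   shortage remains -- start asynchronous writeback for
	 *	   up to MAX_LAUNDER pages, falling back to synchronous
	 *	   IO only when 'sync' was requested and pass 1 cleaned
	 *	   nothing at all.
	 */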
@@ -774,8 +779,23 @@ int refill_inactive_scan(unsigned int priority, int oneshot)
age_page_up_nolock(page);
page_active = 1;
} else {
- age_page_down_nolock(page);
- page_active = 0;
+ age_page_down_ageonly(page);
+ /*
+ * Since we don't hold a reference on the page
+ * ourselves, we have to do our test a bit more
+ * strictly than deactivate_page(). This is needed
+ * since otherwise the system could hang shuffling
+ * unfreeable pages from the active list to the
+ * inactive_dirty list and back again...
+ *
+ * SUBTLE: we can have buffer pages with count 1.
+ */
+ if (page_count(page) <= (page->buffers ? 2 : 1)) {
+ deactivate_page_nolock(page);
+ page_active = 0;
+ } else {
+ page_active = 1;
+ }
}
/*
* If the page is still on the active list, move it
@@ -805,14 +825,11 @@ int free_shortage(void)
pg_data_t *pgdat = pgdat_list;
int sum = 0;
int freeable = nr_free_pages() + nr_inactive_clean_pages();
+ int freetarget = freepages.high + inactive_target / 3;
- /* Are we low on truly free pages? */
- if (nr_free_pages() < freepages.min)
- return freepages.high - nr_free_pages();
-
- /* Are we low on free pages over-all? */
- if (freeable < freepages.high)
- return freepages.high - freeable;
+ /* Are we low on free pages globally? */
+ if (freeable < freetarget)
+ return freetarget - freeable;
/* If not, are we very low on any particular zone? */
do {
@@ -1043,14 +1060,7 @@ int kswapd(void *unused)
/* Do we need to do some synchronous flushing? */
if (waitqueue_active(&kswapd_done))
wait = 1;
- if (!do_try_to_free_pages(GFP_KSWAPD, wait)) {
- /*
- * if (out_of_memory()) {
- * try again a few times;
- * oom_kill();
- * }
- */
- }
+ do_try_to_free_pages(GFP_KSWAPD, wait);
}
/*
@@ -1087,6 +1097,10 @@ int kswapd(void *unused)
*/
if (!free_shortage() || !inactive_shortage())
interruptible_sleep_on_timeout(&kswapd_wait, HZ);
+ /*
+ * TODO: insert out of memory check & oom killer
+ * invocation in an else branch here.
+ */
}
}
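
One possible shape for that TODO, reusing the names from the
pseudocode comment deleted earlier (hypothetical, not in this patch):

	if (!free_shortage() || !inactive_shortage())
		interruptible_sleep_on_timeout(&kswapd_wait, HZ);
	else if (out_of_memory())
		oom_kill();	/* last resort: pick a victim task */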
@@ -1121,25 +1135,18 @@ void wakeup_kswapd(int block)
/*
* Called by non-kswapd processes when they want more
- * memory.
- *
- * In a perfect world, this should just wake up kswapd
- * and return. We don't actually want to swap stuff out
- * from user processes, because the locking issues are
- * nasty to the extreme (file write locks, and MM locking)
- *
- * One option might be to let kswapd do all the page-out
- * and VM page table scanning that needs locking, and this
- * process thread could do just the mmap shrink stage that
- * can be done by just dropping cached pages without having
- * any deadlock issues.
+ * memory but are unable to sleep on kswapd because
+ * they might be holding some IO locks ...
*/
int try_to_free_pages(unsigned int gfp_mask)
{
int ret = 1;
- if (gfp_mask & __GFP_WAIT)
+ if (gfp_mask & __GFP_WAIT) {
+ current->flags |= PF_MEMALLOC;
ret = do_try_to_free_pages(gfp_mask, 1);
+ current->flags &= ~PF_MEMALLOC;
+ }
return ret;
}
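
The PF_MEMALLOC bracket is what lets the allocator recognize recursive
allocations: while the flag is set, __alloc_pages() (see the
z->pages_min / 4 test above) will hand out the very last reserves.
The guard pattern, in brief:

	current->flags |= PF_MEMALLOC;	/* we are the one freeing memory */
	ret = do_try_to_free_pages(gfp_mask, 1);
	current->flags &= ~PF_MEMALLOC;	/* back to normal watermarks */

Clearing the flag on the way out matters; a task that kept it set
would bypass the watermarks forever.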
@@ -1099,15 +1099,18 @@ static int arp_get_info(char *buffer, char **start, off_t offset, int length)
struct net_device *dev = n->dev;
int hatype = dev ? dev->type : 0;
- size = sprintf(buffer+len,
- "%u.%u.%u.%u0x%-10x0x%-10x%s",
- NIPQUAD(*(u32*)n->key),
- hatype,
- ATF_PUBL|ATF_PERM,
- "00:00:00:00:00:00");
+ {
+ char tbuf[16];
+ sprintf(tbuf, "%u.%u.%u.%u", NIPQUAD(*(u32*)n->key));
+ size = sprintf(buffer+len, "%-16s 0x%-10x0x%-10x%s",
+ tbuf,
+ hatype,
+ ATF_PUBL|ATF_PERM,
+ "00:00:00:00:00:00");
+ }
size += sprintf(buffer+len+size,
- " %-17s %s\n",
- "*", dev ? dev->name : "*");
+ " * %-16s\n",
+ dev ? dev->name : "*");
len += size;
pos += size;
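
With the IP address rendered into a fixed 16-character field, the
columns of /proc/net/arp line up again; a proxy-ARP entry now prints
roughly like this (addresses invented for illustration, spacing
approximate):

	192.168.1.40     0x1         0xc         00:00:00:00:00:00     *        eth0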
@@ -57,12+57,6 @@ char *in_ntoa(__u32 in) return(buff);
}
-char *in_ntoa2(__u32 in, char *buff)
-{
- sprintf(buff, "%d.%d.%d.%d", NIPQUAD(in));
- return buff;
-}
-
/*
* Convert an ASCII string to binary IP.
*/
#include <net/ipv6.h>
#include <net/protocol.h>
-struct inet6_protocol *inet6_protocol_base = NULL;
-struct inet6_protocol *inet6_protos[MAX_INET_PROTOS] =
-{
- NULL
-};
+struct inet6_protocol *inet6_protocol_base;
+struct inet6_protocol *inet6_protos[MAX_INET_PROTOS];
void inet6_add_protocol(struct inet6_protocol *prot)
{
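
The dropped "= NULL" initializers are safe to remove because C
guarantees that objects with static storage duration start out zeroed;
a one-line reminder:

	static struct inet6_protocol *p;	/* implicitly NULL, lives in .bss */

With the compilers of the day, explicitly zero-initialized globals
tended to land in .data, so dropping the initializers also moves these
arrays into .bss and shrinks the image slightly, with no change in
behaviour.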