D: Cobalt Networks (x86) support
D: This-and-That
+N: Mark M. Hoffman
+E: mhoffman@lightlink.com
+D: asb100, lm93 and smsc47b397 hardware monitoring drivers
+D: hwmon subsystem core
+D: hwmon subsystem maintainer
+D: i2c-sis96x and i2c-stub SMBus drivers
+S: USA
+
N: Dirk Hohndel
E: hohndel@suse.de
D: The XFree86[tm] Project
Datasheet: Publicly available at the Maxim website
http://www.maxim-ic.com/
* Microchip (TelCom) TCN75
- Prefix: 'lm75'
+ Prefix: 'tcn75'
Addresses scanned: none
Datasheet: Publicly available at the Microchip website
http://www.microchip.com/
Documentation:
http://www.diolan.com/i2c/u2c12.html
-Author: Guenter Roeck <guenter.roeck@ericsson.com>
+Author: Guenter Roeck <linux@roeck-us.net>
Description
-----------
or other driver-specific files in the
Documentation/watchdog/ directory.
+ workqueue.disable_numa
+ By default, all work items queued to unbound
+ workqueues are affine to the NUMA nodes they're
+ issued on, which results in better behavior in
+ general. If NUMA affinity needs to be disabled for
+ whatever reason, this option can be used. Note
+ that this also can be controlled per-workqueue for
+ workqueues visible under /sys/bus/workqueue/.
+
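A minimal sketch of the per-workqueue control mentioned above, using a
hypothetical driver and queue name: WQ_SYSFS makes an unbound workqueue
visible under /sys/bus/workqueue/devices/<name>/, where its "numa"
attribute toggles NUMA affinity for that queue alone.

	#include <linux/workqueue.h>

	static struct workqueue_struct *my_wq;

	static int __init my_init(void)
	{
		/* unbound and sysfs-visible; NUMA affinity is then
		 * tunable via /sys/bus/workqueue/devices/my_wq/numa */
		my_wq = alloc_workqueue("my_wq", WQ_UNBOUND | WQ_SYSFS, 0);
		return my_wq ? 0 : -ENOMEM;
	}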
x2apic_phys [X86-64,APIC] Use x2apic physical mode instead of
default x2apic cluster mode on platforms
supporting x2apic.
enabled and the variable is automatically set to 2, otherwise
the strategy is disabled and the variable is set to 1.
+backup_only - BOOLEAN
+ 0 - disabled (default)
+ not 0 - enabled
+
+ If set, disable the director function while the server is
+ in backup mode to avoid packet loops for DR/TUN methods.
+
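(On the backup director this is typically set through the matching sysctl;
assuming the usual IPVS sysctl layout, that is net.ipv4.vs.backup_only,
i.e. /proc/sys/net/ipv4/vs/backup_only.)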
conntrack - BOOLEAN
0 - disabled (default)
not 0 - enabled
models depending on the codec chip. The list of available models
is found in HD-Audio-Models.txt
- The model name "genric" is treated as a special case. When this
+ The model name "generic" is treated as a special case. When this
model is given, the driver uses the generic codec parser without
"codec-patch". It's sometimes good for testing and debugging.
<H4>
7.2.4 Close Callback</H4>
The <TT>close</TT> callback is called when this device is closed by the
-applicaion. If any private data was allocated in open callback, it must
+application. If any private data was allocated in open callback, it must
be released in the close callback. The deletion of ALSA port should be
done here, too. This callback must not be NULL.
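A minimal sketch of such a close callback, assuming the snd_seq_oss-style
signature and a hypothetical <TT>my_data</TT> structure whose ALSA
client/port numbers were stored by the open callback:
<PRE>
static int my_close(struct snd_seq_oss_arg *arg)
{
	struct my_data *p = arg->private_data;	/* set in open */

	/* delete the ALSA port created at open time */
	snd_seq_event_port_detach(p->client, p->port);
	kfree(p);			/* release open's allocation */
	arg->private_data = NULL;
	return 0;
}
</PRE>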
<H4>
F: drivers/platform/x86/asus*.c
F: drivers/platform/x86/eeepc*.c
-ASUS ASB100 HARDWARE MONITOR DRIVER
-M: "Mark M. Hoffman" <mhoffman@lightlink.com>
-L: lm-sensors@lm-sensors.org
-S: Maintained
-F: drivers/hwmon/asb100.c
-
ASYNCHRONOUS TRANSFERS/TRANSFORMS (IOAT) API
M: Dan Williams <djbw@fb.com>
W: http://sourceforge.net/projects/xscaleiop
F: drivers/dma/at_hdmac_regs.h
F: include/linux/platform_data/dma-atmel.h
+ATMEL I2C DRIVER
+M: Ludovic Desroches <ludovic.desroches@atmel.com>
+L: linux-i2c@vger.kernel.org
+S: Supported
+F: drivers/i2c/busses/i2c-at91.c
+
ATMEL ISI DRIVER
M: Josh Wu <josh.wu@atmel.com>
L: linux-media@vger.kernel.org
INTEL DRM DRIVERS (excluding Poulsbo, Moorestown and derivative chipsets)
M: Daniel Vetter <daniel.vetter@ffwll.ch>
-L: intel-gfx@lists.freedesktop.org (subscribers-only)
+L: intel-gfx@lists.freedesktop.org
L: dri-devel@lists.freedesktop.org
T: git git://people.freedesktop.org/~danvet/drm-intel
S: Supported
F: drivers/base/firmware*.c
F: include/linux/firmware.h
+FLASHSYSTEM DRIVER (IBM FlashSystem 70/80 PCI SSD Flash Card)
+M: Joshua Morris <josh.h.morris@us.ibm.com>
+M: Philip Kelleher <pjk1939@linux.vnet.ibm.com>
+S: Maintained
+F: drivers/block/rsxx/
+
FLOPPY DRIVER
M: Jiri Kosina <jkosina@suse.cz>
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jikos/floppy.git
F: Documentation/i2c/busses/i2c-ismt
I2C/SMBUS STUB DRIVER
-M: "Mark M. Hoffman" <mhoffman@lightlink.com>
+M: Jean Delvare <khali@linux-fr.org>
L: linux-i2c@vger.kernel.org
S: Maintained
F: drivers/i2c/i2c-stub.c
F: drivers/video/riva/
F: drivers/video/nvidia/
+NVM EXPRESS DRIVER
+M: Matthew Wilcox <willy@linux.intel.com>
+L: linux-nvme@lists.infradead.org
+T: git git://git.infradead.org/users/willy/linux-nvme.git
+S: Supported
+F: drivers/block/nvme.c
+F: include/linux/nvme.h
+
OMAP SUPPORT
M: Tony Lindgren <tony@atomide.com>
L: linux-omap@vger.kernel.org
F: arch/arm/*omap*/*clock*
OMAP POWER MANAGEMENT SUPPORT
-M: Kevin Hilman <khilman@ti.com>
+M: Kevin Hilman <khilman@deeprootsystems.com>
L: linux-omap@vger.kernel.org
S: Maintained
F: arch/arm/*omap*/*pm*
OMAP GPIO DRIVER
M: Santosh Shilimkar <santosh.shilimkar@ti.com>
-M: Kevin Hilman <khilman@ti.com>
+M: Kevin Hilman <khilman@deeprootsystems.com>
L: linux-omap@vger.kernel.org
S: Maintained
F: drivers/gpio/gpio-omap.c
F: drivers/power/
PNP SUPPORT
-M: Adam Belay <abelay@mit.edu>
+M: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
M: Bjorn Helgaas <bhelgaas@google.com>
S: Maintained
F: drivers/pnp/
F: Documentation/blockdev/ramdisk.txt
F: drivers/block/brd.c
-RAMSAM DRIVER (IBM RamSan 70/80 PCI SSD Flash Card)
-M: Joshua Morris <josh.h.morris@us.ibm.com>
-M: Philip Kelleher <pjk1939@linux.vnet.ibm.com>
-S: Maintained
-F: drivers/block/rsxx/
-
RANDOM NUMBER DRIVER
M: "Theodore Ts'o" <tytso@mit.edu>
S: Maintained
TI DAVINCI MACHINE SUPPORT
M: Sekhar Nori <nsekhar@ti.com>
-M: Kevin Hilman <khilman@ti.com>
+M: Kevin Hilman <khilman@deeprootsystems.com>
L: davinci-linux-open-source@linux.davincidsp.com (moderated for non-subscribers)
T: git git://gitorious.org/linux-davinci/linux-davinci.git
Q: http://patchwork.kernel.org/project/linux-davinci/list/
S: Maintained
F: drivers/net/ethernet/sis/sis900.*
-SIS 96X I2C/SMBUS DRIVER
-M: "Mark M. Hoffman" <mhoffman@lightlink.com>
-L: linux-i2c@vger.kernel.org
-S: Maintained
-F: Documentation/i2c/busses/i2c-sis96x
-F: drivers/i2c/busses/i2c-sis96x.c
-
SIS FRAMEBUFFER DRIVER
M: Thomas Winischhofer <thomas@winischhofer.net>
W: http://www.winischhofer.net/linuxsisvga.shtml
F: drivers/hwmon/sch5627.c
SMSC47B397 HARDWARE MONITOR DRIVER
-M: "Mark M. Hoffman" <mhoffman@lightlink.com>
+M: Jean Delvare <khali@linux-fr.org>
L: lm-sensors@lm-sensors.org
S: Maintained
F: Documentation/hwmon/smsc47b397
SYNOPSYS ARC ARCHITECTURE
M: Vineet Gupta <vgupta@synopsys.com>
-L: linux-snps-arc@vger.kernel.org
S: Supported
F: arch/arc/
+F: Documentation/devicetree/bindings/arc/
+F: drivers/tty/serial/arc-uart.c
SYSV FILESYSTEM
M: Christoph Hellwig <hch@infradead.org>
VERSION = 3
PATCHLEVEL = 9
SUBLEVEL = 0
-EXTRAVERSION = -rc3
+EXTRAVERSION = -rc5
NAME = Unicycling Gorilla
# *DOCUMENTATION*
int i;
for_each_sg(sg, s, nents, i)
- sg->dma_address = dma_map_page(dev, sg_page(s), s->offset,
+ s->dma_address = dma_map_page(dev, sg_page(s), s->offset,
s->length, dir);
return nents;
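(The one-character fix matters: s is the cursor that for_each_sg() advances,
while sg stays pinned to the first entry, so the old line rewrote only the
first dma_address on every iteration.)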
*/
#define ELF_PLATFORM (NULL)
-#define SET_PERSONALITY(ex) \
- set_personality(PER_LINUX | (current->personality & (~PER_MASK)))
-
#endif
*-------------------------------------------------------------*/
.macro SAVE_ALL_EXCEPTION marker
- st \marker, [sp, 8]
+ st \marker, [sp, 8] /* orig_r8 */
st r0, [sp, 4] /* orig_r0, needed only for sys calls */
/* Restore r9 used to code the early prologue */
#ifdef CONFIG_KGDB
-#include <asm/user.h>
+#include <asm/ptrace.h>
/* to ensure compatibility with Linux 2.6.35, we don't implement the get/set
* register API yet */
};
#else
-static inline void kgdb_trap(struct pt_regs *regs, int param)
-{
-}
+#define kgdb_trap(regs, param)
#endif
#endif /* __ARC_KGDB_H__ */
#define orig_r8_IS_SCALL 0x0001
#define orig_r8_IS_SCALL_RESTARTED 0x0002
#define orig_r8_IS_BRKPT 0x0004
-#define orig_r8_IS_EXCPN 0x0004
+#define orig_r8_IS_EXCPN 0x0008
#define orig_r8_IS_IRQ1 0x0010
#define orig_r8_IS_IRQ2 0x0020
#include <linux/types.h>
int sys_clone_wrapper(int, int, int, int, int);
-int sys_fork_wrapper(void);
-int sys_vfork_wrapper(void);
int sys_cacheflush(uint32_t, uint32_t, uint32_t);
int sys_arc_settls(void *);
int sys_arc_gettls(void);
*/
struct user_regs_struct {
- struct scratch {
+ struct {
long pad;
long bta, lp_start, lp_end, lp_count;
long status32, ret, blink, fp, gp;
long r12, r11, r10, r9, r8, r7, r6, r5, r4, r3, r2, r1, r0;
long sp;
} scratch;
- struct callee {
+ struct {
long pad;
long r25, r24, r23, r22, r21, r20;
long r19, r18, r17, r16, r15, r14, r13;
; using ERET won't work since next-PC has already committed
lr r12, [efa]
GET_CURR_TASK_FIELD_PTR TASK_THREAD, r11
- st r12, [r11, THREAD_FAULT_ADDR]
+ st r12, [r11, THREAD_FAULT_ADDR] ; thread.fault_address
; PRE Sys Call Ptrace hook
mov r0, sp ; pt_regs needed
;################### Special Sys Call Wrappers ##########################
-; TBD: call do_fork directly from here
-ARC_ENTRY sys_fork_wrapper
- SAVE_CALLEE_SAVED_USER
- bl @sys_fork
- DISCARD_CALLEE_SAVED_USER
-
- GET_CURR_THR_INFO_FLAGS r10
- btst r10, TIF_SYSCALL_TRACE
- bnz tracesys_exit
-
- b ret_from_system_call
-ARC_EXIT sys_fork_wrapper
-
-ARC_ENTRY sys_vfork_wrapper
- SAVE_CALLEE_SAVED_USER
- bl @sys_vfork
- DISCARD_CALLEE_SAVED_USER
-
- GET_CURR_THR_INFO_FLAGS r10
- btst r10, TIF_SYSCALL_TRACE
- bnz tracesys_exit
-
- b ret_from_system_call
-ARC_EXIT sys_vfork_wrapper
-
ARC_ENTRY sys_clone_wrapper
SAVE_CALLEE_SAVED_USER
bl @sys_clone
*/
#include <linux/kgdb.h>
+#include <linux/sched.h>
#include <asm/disasm.h>
#include <asm/cacheflush.h>
n += scnprintf(buf + n, len - n, "\n");
-#ifdef _ASM_GENERIC_UNISTD_H
n += scnprintf(buf + n, len - n,
- "OS ABI [v2]\t: asm-generic/{unistd,stat,fcntl}\n");
-#endif
+ "OS ABI [v3]\t: no-legacy-syscalls\n");
return buf;
}
#include <asm/syscalls.h>
#define sys_clone sys_clone_wrapper
-#define sys_fork sys_fork_wrapper
-#define sys_vfork sys_vfork_wrapper
#undef __SYSCALL
#define __SYSCALL(nr, call) [nr] = (call),
select HAVE_REGS_AND_STACK_ACCESS_API
select HAVE_SYSCALL_TRACEPOINTS
select HAVE_UID16
- select VIRT_TO_BUS
select KTIME_SCALAR
select PERF_USE_VMALLOC
select RTC_LIB
select NEED_MACH_IO_H
select NEED_MACH_MEMORY_H
select NO_IOPORT
+ select VIRT_TO_BUS
help
On the Acorn Risc-PC, Linux can support the internal IDE disk and
CD-ROM interface, serial and parallel port, and the floppy drive.
select ISA_DMA
select NEED_MACH_MEMORY_H
select PCI
+ select VIRT_TO_BUS
select ZONE_DMA
help
Support for the StrongARM based Digital DNARD machine, also known
bool
config ARCH_MULTI_V6
- bool "ARMv6 based platforms (ARM11, Scorpion, ...)"
+ bool "ARMv6 based platforms (ARM11)"
select ARCH_MULTI_V6_V7
select CPU_V6
config ARCH_MULTI_V7
- bool "ARMv7 based platforms (Cortex-A, PJ4, Krait)"
+ bool "ARMv7 based platforms (Cortex-A, PJ4, Scorpion, Krait)"
default y
select ARCH_MULTI_V6_V7
select ARCH_VEXPRESS
bool
select ISA_DMA_API
-config ARCH_NO_VIRT_TO_BUS
- def_bool y
- depends on !ARCH_RPC && !ARCH_NETWINDER && !ARCH_SHARK
-
# Select ISA DMA interface
config ISA_DMA_API
bool
DEBUG_IMX53_UART || \
DEBUG_IMX6Q_UART
default 1
+ depends on ARCH_MXC
help
Choose UART port on which kernel low-level debug messages
should be output.
nand {
pinctrl_nand: nand-0 {
atmel,pins =
- <3 4 0x0 0x1 /* PD5 gpio RDY pin pull_up */
- 3 5 0x0 0x1>; /* PD4 gpio enable pin pull_up */
+ <3 0 0x1 0x0 /* PD0 periph A Read Enable */
+ 3 1 0x1 0x0 /* PD1 periph A Write Enable */
+ 3 2 0x1 0x0 /* PD2 periph A Address Latch Enable */
+ 3 3 0x1 0x0 /* PD3 periph A Command Latch Enable */
+ 3 4 0x0 0x1 /* PD4 gpio Chip Enable pin pull_up */
+ 3 5 0x0 0x1 /* PD5 gpio RDY/BUSY pin pull_up */
+ 3 6 0x1 0x0 /* PD6 periph A Data bit 0 */
+ 3 7 0x1 0x0 /* PD7 periph A Data bit 1 */
+ 3 8 0x1 0x0 /* PD8 periph A Data bit 2 */
+ 3 9 0x1 0x0 /* PD9 periph A Data bit 3 */
+ 3 10 0x1 0x0 /* PD10 periph A Data bit 4 */
+ 3 11 0x1 0x0 /* PD11 periph A Data bit 5 */
+ 3 12 0x1 0x0 /* PD12 periph A Data bit 6 */
+ 3 13 0x1 0x0>; /* PD13 periph A Data bit 7 */
+ };
+
+ pinctrl_nand_16bits: nand_16bits-0 {
+ atmel,pins =
+ <3 14 0x1 0x0 /* PD14 periph A Data bit 8 */
+ 3 15 0x1 0x0 /* PD15 periph A Data bit 9 */
+ 3 16 0x1 0x0 /* PD16 periph A Data bit 10 */
+ 3 17 0x1 0x0 /* PD17 periph A Data bit 11 */
+ 3 18 0x1 0x0 /* PD18 periph A Data bit 12 */
+ 3 19 0x1 0x0 /* PD19 periph A Data bit 13 */
+ 3 20 0x1 0x0 /* PD20 periph A Data bit 14 */
+ 3 21 0x1 0x0>; /* PD21 periph A Data bit 15 */
};
};
compatible = "arm,pl330", "arm,primecell";
reg = <0x12680000 0x1000>;
interrupts = <0 35 0>;
+ #dma-cells = <1>;
+ #dma-channels = <8>;
+ #dma-requests = <32>;
};
pdma1: pdma@12690000 {
compatible = "arm,pl330", "arm,primecell";
reg = <0x12690000 0x1000>;
interrupts = <0 36 0>;
+ #dma-cells = <1>;
+ #dma-channels = <8>;
+ #dma-requests = <32>;
};
mdma1: mdma@12850000 {
compatible = "arm,pl330", "arm,primecell";
reg = <0x12850000 0x1000>;
interrupts = <0 34 0>;
+ #dma-cells = <1>;
+ #dma-channels = <8>;
+ #dma-requests = <1>;
};
};
};
compatible = "arm,pl330", "arm,primecell";
reg = <0x120000 0x1000>;
interrupts = <0 34 0>;
+ #dma-cells = <1>;
+ #dma-channels = <8>;
+ #dma-requests = <32>;
};
pdma1: pdma@121B0000 {
compatible = "arm,pl330", "arm,primecell";
reg = <0x121000 0x1000>;
interrupts = <0 35 0>;
+ #dma-cells = <1>;
+ #dma-channels = <8>;
+ #dma-requests = <32>;
};
};
spi@7000d800 {
compatible = "nvidia,tegra20-slink";
- reg = <0x7000d480 0x200>;
+ reg = <0x7000d800 0x200>;
interrupts = <0 83 0x04>;
nvidia,dma-request-selector = <&apbdma 17>;
#address-cells = <1>;
spi@7000d800 {
compatible = "nvidia,tegra30-slink", "nvidia,tegra20-slink";
- reg = <0x7000d480 0x200>;
+ reg = <0x7000d800 0x200>;
interrupts = <0 83 0x04>;
nvidia,dma-request-selector = <&apbdma 17>;
#address-cells = <1>;
evt->features = CLOCK_EVT_FEAT_ONESHOT |
CLOCK_EVT_FEAT_PERIODIC |
CLOCK_EVT_FEAT_DUMMY;
- evt->rating = 400;
+ evt->rating = 100;
evt->mult = 1;
evt->set_mode = broadcast_timer_set_mode;
.text
.align 5
- .word 0
-
-1: subs r2, r2, #4 @ 1 do we have enough
- blt 5f @ 1 bytes to align with?
- cmp r3, #2 @ 1
- strltb r1, [ip], #1 @ 1
- strleb r1, [ip], #1 @ 1
- strb r1, [ip], #1 @ 1
- add r2, r2, r3 @ 1 (r2 = r2 - (4 - r3))
-/*
- * The pointer is now aligned and the length is adjusted. Try doing the
- * memset again.
- */
ENTRY(memset)
-/*
- * Preserve the contents of r0 for the return value.
- */
- mov ip, r0
- ands r3, ip, #3 @ 1 unaligned?
- bne 1b @ 1
+ ands r3, r0, #3 @ 1 unaligned?
+ mov ip, r0 @ preserve r0 as return value
+ bne 6f @ 1
/*
* we know that the pointer in ip is aligned to a word boundary.
*/
- orr r1, r1, r1, lsl #8
+1: orr r1, r1, r1, lsl #8
orr r1, r1, r1, lsl #16
mov r3, r1
cmp r2, #16
tst r2, #1
strneb r1, [ip], #1
mov pc, lr
+
+6: subs r2, r2, #4 @ 1 do we have enough
+ blt 5b @ 1 bytes to align with?
+ cmp r3, #2 @ 1
+ strltb r1, [ip], #1 @ 1
+ strleb r1, [ip], #1 @ 1
+ strb r1, [ip], #1 @ 1
+ add r2, r2, r3 @ 1 (r2 = r2 - (4 - r3))
+ b 1b
ENDPROC(memset)
extern void at91_gpio_suspend(void);
extern void at91_gpio_resume(void);
+#ifdef CONFIG_PINCTRL_AT91
+extern void at91_pinctrl_gpio_suspend(void);
+extern void at91_pinctrl_gpio_resume(void);
+#else
+static inline void at91_pinctrl_gpio_suspend(void) {}
+static inline void at91_pinctrl_gpio_resume(void) {}
+#endif
+
#endif /* __ASSEMBLY__ */
#endif
void at91_irq_suspend(void)
{
- int i = 0, bit;
+ int bit = -1;
if (has_aic5()) {
/* disable enabled irqs */
- while ((bit = find_next_bit(backups, n_irqs, i)) < n_irqs) {
+ while ((bit = find_next_bit(backups, n_irqs, bit + 1)) < n_irqs) {
at91_aic_write(AT91_AIC5_SSR,
bit & AT91_AIC5_INTSEL_MSK);
at91_aic_write(AT91_AIC5_IDCR, 1);
- i = bit;
}
/* enable wakeup irqs */
- i = 0;
- while ((bit = find_next_bit(wakeups, n_irqs, i)) < n_irqs) {
+ bit = -1;
+ while ((bit = find_next_bit(wakeups, n_irqs, bit + 1)) < n_irqs) {
at91_aic_write(AT91_AIC5_SSR,
bit & AT91_AIC5_INTSEL_MSK);
at91_aic_write(AT91_AIC5_IECR, 1);
- i = bit;
}
} else {
at91_aic_write(AT91_AIC_IDCR, *backups);
void at91_irq_resume(void)
{
- int i = 0, bit;
+ int bit = -1;
if (has_aic5()) {
/* disable wakeup irqs */
- while ((bit = find_next_bit(wakeups, n_irqs, i)) < n_irqs) {
+ while ((bit = find_next_bit(wakeups, n_irqs, bit + 1)) < n_irqs) {
at91_aic_write(AT91_AIC5_SSR,
bit & AT91_AIC5_INTSEL_MSK);
at91_aic_write(AT91_AIC5_IDCR, 1);
- i = bit;
}
/* enable irqs disabled for suspend */
- i = 0;
- while ((bit = find_next_bit(backups, n_irqs, i)) < n_irqs) {
+ bit = -1;
+ while ((bit = find_next_bit(backups, n_irqs, bit + 1)) < n_irqs) {
at91_aic_write(AT91_AIC5_SSR,
bit & AT91_AIC5_INTSEL_MSK);
at91_aic_write(AT91_AIC5_IECR, 1);
- i = bit;
}
} else {
at91_aic_write(AT91_AIC_IDCR, *wakeups);
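The rewritten walks in at91_irq_suspend() and at91_irq_resume() are the
open-coded form of the for_each_set_bit() iterator from <linux/bitops.h>;
a sketch of the same traversal, shown for the suspend-side disable loop:

	unsigned int bit;

	/* visits exactly the set bits, like find_next_bit(..., bit + 1) */
	for_each_set_bit(bit, backups, n_irqs) {
		at91_aic_write(AT91_AIC5_SSR, bit & AT91_AIC5_INTSEL_MSK);
		at91_aic_write(AT91_AIC5_IDCR, 1);
	}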
static int at91_pm_enter(suspend_state_t state)
{
- at91_gpio_suspend();
+ if (of_have_populated_dt())
+ at91_pinctrl_gpio_suspend();
+ else
+ at91_gpio_suspend();
at91_irq_suspend();
pr_debug("AT91: PM - wake mask %08x, pm state %d\n",
error:
target_state = PM_SUSPEND_ON;
at91_irq_resume();
- at91_gpio_resume();
+ if (of_have_populated_dt())
+ at91_pinctrl_gpio_resume();
+ else
+ at91_gpio_resume();
return 0;
}
*/
int edma_alloc_slot(unsigned ctlr, int slot)
{
+ if (!edma_cc[ctlr])
+ return -EINVAL;
+
if (slot >= 0)
slot = EDMA_CHAN_SLOT(slot);
select ISA
select ISA_DMA
select PCI
+ select VIRT_TO_BUS
help
Say Y here if you intend to run this kernel on the Rebel.COM
NetWinder. Information about this machine can be found at:
clk_prepare_enable(clk[gpio3_gate]);
clk_prepare_enable(clk[iim_gate]);
clk_prepare_enable(clk[emi_gate]);
+ clk_prepare_enable(clk[max_gate]);
/*
* SCC is needed to boot via mmc after a watchdog reset. The clock code
NULL
};
+static void __init imx25_timer_init(void)
+{
+ mx25_clocks_init_dt();
+}
+
DT_MACHINE_START(IMX25_DT, "Freescale i.MX25 (Device Tree Support)")
.map_io = mx25_map_io,
.init_early = imx25_init_early,
*/
#include <linux/init.h>
+#include <linux/platform_device.h>
#include <linux/gpio.h>
#include <asm/mach/arch.h>
.lower_margin = 4,
.hsync_len = 1,
.vsync_len = 1,
- .sync = FB_SYNC_DATA_ENABLE_HIGH_ACT |
- FB_SYNC_DOTCLK_FAILING_ACT,
},
};
.lower_margin = 10,
.hsync_len = 10,
.vsync_len = 10,
- .sync = FB_SYNC_DATA_ENABLE_HIGH_ACT |
- FB_SYNC_DOTCLK_FAILING_ACT,
},
};
.lower_margin = 45,
.hsync_len = 1,
.vsync_len = 1,
- .sync = FB_SYNC_DATA_ENABLE_HIGH_ACT,
},
};
.lower_margin = 13,
.hsync_len = 48,
.vsync_len = 3,
- .sync = FB_SYNC_HOR_HIGH_ACT | FB_SYNC_VERT_HIGH_ACT |
- FB_SYNC_DATA_ENABLE_HIGH_ACT |
- FB_SYNC_DOTCLK_FAILING_ACT,
+ .sync = FB_SYNC_HOR_HIGH_ACT | FB_SYNC_VERT_HIGH_ACT,
},
};
.lower_margin = 0x15,
.hsync_len = 64,
.vsync_len = 4,
- .sync = FB_SYNC_HOR_HIGH_ACT | FB_SYNC_VERT_HIGH_ACT |
- FB_SYNC_DATA_ENABLE_HIGH_ACT |
- FB_SYNC_DOTCLK_FAILING_ACT,
+ .sync = FB_SYNC_HOR_HIGH_ACT | FB_SYNC_VERT_HIGH_ACT,
},
};
.lower_margin = 2,
.hsync_len = 15,
.vsync_len = 15,
- .sync = FB_SYNC_DATA_ENABLE_HIGH_ACT
},
};
mxsfb_pdata.mode_count = ARRAY_SIZE(mx23evk_video_modes);
mxsfb_pdata.default_bpp = 32;
mxsfb_pdata.ld_intf_width = STMLCDIF_24BIT;
+ mxsfb_pdata.sync = MXSFB_SYNC_DATA_ENABLE_HIGH_ACT |
+ MXSFB_SYNC_DOTCLK_FAILING_ACT;
}
static inline void enable_clk_enet_out(void)
mxsfb_pdata.mode_count = ARRAY_SIZE(mx28evk_video_modes);
mxsfb_pdata.default_bpp = 32;
mxsfb_pdata.ld_intf_width = STMLCDIF_24BIT;
+ mxsfb_pdata.sync = MXSFB_SYNC_DATA_ENABLE_HIGH_ACT |
+ MXSFB_SYNC_DOTCLK_FAILING_ACT;
mxs_saif_clkmux_select(MXS_DIGCTL_SAIF_CLKMUX_EXTMSTR0);
}
mxsfb_pdata.mode_count = ARRAY_SIZE(m28evk_video_modes);
mxsfb_pdata.default_bpp = 16;
mxsfb_pdata.ld_intf_width = STMLCDIF_18BIT;
+ mxsfb_pdata.sync = MXSFB_SYNC_DATA_ENABLE_HIGH_ACT;
}
static void __init sc_sps1_init(void)
mxsfb_pdata.mode_count = ARRAY_SIZE(apx4devkit_video_modes);
mxsfb_pdata.default_bpp = 32;
mxsfb_pdata.ld_intf_width = STMLCDIF_24BIT;
+ mxsfb_pdata.sync = MXSFB_SYNC_DATA_ENABLE_HIGH_ACT |
+ MXSFB_SYNC_DOTCLK_FAILING_ACT;
}
#define ENET0_MDC__GPIO_4_0 MXS_GPIO_NR(4, 0)
mxsfb_pdata.mode_count = ARRAY_SIZE(cfa10049_video_modes);
mxsfb_pdata.default_bpp = 32;
mxsfb_pdata.ld_intf_width = STMLCDIF_18BIT;
+ mxsfb_pdata.sync = MXSFB_SYNC_DATA_ENABLE_HIGH_ACT;
}
static void __init cfa10037_init(void)
mxsfb_pdata.mode_count = ARRAY_SIZE(apf28dev_video_modes);
mxsfb_pdata.default_bpp = 16;
mxsfb_pdata.ld_intf_width = STMLCDIF_16BIT;
+ mxsfb_pdata.sync = MXSFB_SYNC_DATA_ENABLE_HIGH_ACT |
+ MXSFB_SYNC_DOTCLK_FAILING_ACT;
}
static void __init mxs_machine_init(void)
.name = "pcmcdclk",
};
-static struct clk dummy_apb_pclk = {
- .name = "apb_pclk",
- .id = -1,
-};
-
static struct clk *clkset_vpllsrc_list[] = {
[0] = &clk_fin_vpll,
[1] = &clk_sclk_hdmi27m,
static struct clk init_clocks_off[] = {
{
- .name = "dma",
- .devname = "dma-pl330.0",
- .parent = &clk_hclk_psys.clk,
- .enable = s5pv210_clk_ip0_ctrl,
- .ctrlbit = (1 << 3),
- }, {
- .name = "dma",
- .devname = "dma-pl330.1",
- .parent = &clk_hclk_psys.clk,
- .enable = s5pv210_clk_ip0_ctrl,
- .ctrlbit = (1 << 4),
- }, {
.name = "rot",
.parent = &clk_hclk_dsys.clk,
.enable = s5pv210_clk_ip0_ctrl,
.ctrlbit = (1<<19),
};
+static struct clk clk_pdma0 = {
+ .name = "pdma0",
+ .parent = &clk_hclk_psys.clk,
+ .enable = s5pv210_clk_ip0_ctrl,
+ .ctrlbit = (1 << 3),
+};
+
+static struct clk clk_pdma1 = {
+ .name = "pdma1",
+ .parent = &clk_hclk_psys.clk,
+ .enable = s5pv210_clk_ip0_ctrl,
+ .ctrlbit = (1 << 4),
+};
+
static struct clk *clkset_uart_list[] = {
[6] = &clk_mout_mpll.clk,
[7] = &clk_mout_epll.clk,
&clk_hsmmc1,
&clk_hsmmc2,
&clk_hsmmc3,
+ &clk_pdma0,
+ &clk_pdma1,
};
/* Clock initialisation code */
CLKDEV_INIT(NULL, "spi_busclk0", &clk_p),
CLKDEV_INIT("s5pv210-spi.0", "spi_busclk1", &clk_sclk_spi0.clk),
CLKDEV_INIT("s5pv210-spi.1", "spi_busclk1", &clk_sclk_spi1.clk),
+ CLKDEV_INIT("dma-pl330.0", "apb_pclk", &clk_pdma0),
+ CLKDEV_INIT("dma-pl330.1", "apb_pclk", &clk_pdma1),
};
void __init s5pv210_register_clocks(void)
for (ptr = 0; ptr < ARRAY_SIZE(clk_cdev); ptr++)
s3c_disable_clocks(clk_cdev[ptr], 1);
- s3c24xx_register_clock(&dummy_apb_pclk);
s3c_pwmclk_init();
}
.mux_id = 0,
.flags = V4L2_MBUS_PCLK_SAMPLE_FALLING |
V4L2_MBUS_VSYNC_ACTIVE_LOW,
- .bus_type = FIMC_BUS_TYPE_ITU_601,
+ .fimc_bus_type = FIMC_BUS_TYPE_ITU_601,
.board_info = &noon010pc30_board_info,
.i2c_bus_num = 0,
.clk_frequency = 16000000UL,
#include <linux/smsc911x.h>
#include <linux/spi/spi.h>
#include <linux/spi/sh_hspi.h>
+#include <linux/mmc/host.h>
#include <linux/mmc/sh_mobile_sdhi.h>
#include <linux/mfd/tmio.h>
#include <linux/usb/otg.h>
/* x = ((*(frame + k)) & 0xf) << 2; */
ctx->seen |= SEEN_X | SEEN_DATA | SEEN_CALL;
/* the interpreter should deal with the negative K */
- if (k < 0)
+ if ((int)k < 0)
return -1;
/* offset in r1: we might have to take the slow path */
emit_mov_i(r_off, k, ctx);
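(k is a u32 at this point, so the old k < 0 test was always false; the
(int) cast restores the bailout for negative K, i.e. the ancillary loads
the comment refers to, making the JIT return -1 so the filter runs in the
interpreter instead.)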
select CLONE_BACKWARDS
select COMMON_CLK
select GENERIC_CLOCKEVENTS
- select GENERIC_HARDIRQS_NO_DEPRECATED
select GENERIC_IOMAP
select GENERIC_IRQ_PROBE
select GENERIC_IRQ_SHOW
bool
default y
-config DEBUG_ERRORS
- bool "Verbose kernel error messages"
- depends on DEBUG_KERNEL
- help
- This option controls verbose debugging information which can be
- printed when the kernel detects an internal error. This debugging
- information is useful to kernel hackers when tracking down problems,
- but mostly meaningless to other people. It's safe to say Y unless
- you are concerned with the code size or don't want to see these
- messages.
-
config DEBUG_STACK_USAGE
bool "Enable stack utilization instrumentation"
depends on DEBUG_KERNEL
CONFIG_DEBUG_INFO=y
# CONFIG_FTRACE is not set
CONFIG_ATOMIC64_SELFTEST=y
-CONFIG_DEBUG_ERRORS=y
stack_t uc_stack;
sigset_t uc_sigmask;
/* glibc uses a 1024-bit sigset_t */
- __u8 __unused[(1024 - sizeof(sigset_t)) / 8];
+ __u8 __unused[1024 / 8 - sizeof(sigset_t)];
/* last for future expansion */
struct sigcontext uc_mcontext;
};
EXPORT_SYMBOL(__clear_user);
/* bitops */
+#ifdef CONFIG_SMP
EXPORT_SYMBOL(__atomic_hash);
+#endif
/* physical memory */
EXPORT_SYMBOL(memstart_addr);
sigset_t *set, struct pt_regs *regs)
{
struct compat_rt_sigframe __user *frame;
- compat_stack_t stack;
int err = 0;
frame = compat_get_sigframe(ka, regs, sizeof(*frame));
void __iomem * __init early_io_map(phys_addr_t phys, unsigned long virt)
{
unsigned long size, mask;
- bool page64k = IS_ENABLED(ARM64_64K_PAGES);
+ bool page64k = IS_ENABLED(CONFIG_ARM64_64K_PAGES);
pgd_t *pgd;
pud_t *pud;
pmd_t *pmd;
}
if (!need_resched()) {
- void (*idle)(void);
#ifdef CONFIG_SMP
min_xtp();
#endif
if (mark_idle)
(*mark_idle)(1);
- if (!idle)
- idle = default_idle;
- (*idle)();
+ default_idle();
if (mark_idle)
(*mark_idle)(0);
#ifdef CONFIG_SMP
config PPC
bool
default y
+ select BINFMT_ELF
select OF
select OF_EARLY_FLATTREE
select HAVE_FTRACE_MCOUNT_RECORD
/*
* VSID allocation (256MB segment)
*
- * We first generate a 38-bit "proto-VSID". For kernel addresses this
- * is equal to the ESID | 1 << 37, for user addresses it is:
- * (context << USER_ESID_BITS) | (esid & ((1U << USER_ESID_BITS) - 1)
+ * We first generate a 37-bit "proto-VSID". Proto-VSIDs are generated
+ * from mmu context id and effective segment id of the address.
*
- * This splits the proto-VSID into the below range
- * 0 - (2^(CONTEXT_BITS + USER_ESID_BITS) - 1) : User proto-VSID range
- * 2^(CONTEXT_BITS + USER_ESID_BITS) - 2^(VSID_BITS) : Kernel proto-VSID range
- *
- * We also have CONTEXT_BITS + USER_ESID_BITS = VSID_BITS - 1
- * That is, we assign half of the space to user processes and half
- * to the kernel.
+ * For user processes the max context id is limited to ((1ul << 19) - 5);
+ * for kernel space, we use the top 4 context ids to map addresses as below.
+ * NOTE: each context only supports 64TB now.
+ * 0x7fffc - [ 0xc000000000000000 - 0xc0003fffffffffff ]
+ * 0x7fffd - [ 0xd000000000000000 - 0xd0003fffffffffff ]
+ * 0x7fffe - [ 0xe000000000000000 - 0xe0003fffffffffff ]
+ * 0x7ffff - [ 0xf000000000000000 - 0xf0003fffffffffff ]
*
* The proto-VSIDs are then scrambled into real VSIDs with the
* multiplicative hash:
* VSID_MULTIPLIER is prime, so in particular it is
* co-prime to VSID_MODULUS, making this a 1:1 scrambling function.
* Because the modulus is 2^n-1 we can compute it efficiently without
- * a divide or extra multiply (see below).
- *
- * This scheme has several advantages over older methods:
- *
- * - We have VSIDs allocated for every kernel address
- * (i.e. everything above 0xC000000000000000), except the very top
- * segment, which simplifies several things.
+ * a divide or extra multiply (see below). The scramble function gives
+ * robust scattering in the hash table (at least based on some initial
+ * results).
*
- * - We allow for USER_ESID_BITS significant bits of ESID and
- * CONTEXT_BITS bits of context for user addresses.
- * i.e. 64T (46 bits) of address space for up to half a million contexts.
+ * We also consider VSID 0 special. We use VSID 0 for slb entries mapping
+ * bad address. This enables us to consolidate bad address handling in
+ * hash_page.
*
- * - The scramble function gives robust scattering in the hash
- * table (at least based on some initial results). The previous
- * method was more susceptible to pathological cases giving excessive
- * hash collisions.
+ * We also need to avoid the last segment of the last context, because that
+ * would give a protovsid of 0x1fffffffff. That will result in a VSID 0
+ * because of the modulo operation in vsid scramble. But the vmemmap
+ * (which is what uses region 0xf) will never be close to 64TB in size
+ * (it's 56 bytes per page of system memory).
*/
+#define CONTEXT_BITS 19
+#define ESID_BITS 18
+#define ESID_BITS_1T 6
+
+/*
+ * 256MB segment
+ * The proto-VSID space has 2^(CONTEXT_BITS + ESID_BITS) - 1 segments
+ * available for user + kernel mapping. The top 4 contexts are used for
+ * kernel mapping. Each segment contains 2^28 bytes. Each
+ * context maps 2^46 bytes (64TB) so we can support 2^19-1 contexts
+ * (19 == 37 + 28 - 46).
+ */
+#define MAX_USER_CONTEXT ((ASM_CONST(1) << CONTEXT_BITS) - 5)
+
/*
* This should be computed such that protovsid * vsid_multiplier
* doesn't overflow 64 bits. It should also be co-prime to vsid_modulus
*/
#define VSID_MULTIPLIER_256M ASM_CONST(12538073) /* 24-bit prime */
-#define VSID_BITS_256M 38
+#define VSID_BITS_256M (CONTEXT_BITS + ESID_BITS)
#define VSID_MODULUS_256M ((1UL<<VSID_BITS_256M)-1)
#define VSID_MULTIPLIER_1T ASM_CONST(12538073) /* 24-bit prime */
-#define VSID_BITS_1T 26
+#define VSID_BITS_1T (CONTEXT_BITS + ESID_BITS_1T)
#define VSID_MODULUS_1T ((1UL<<VSID_BITS_1T)-1)
-#define CONTEXT_BITS 19
-#define USER_ESID_BITS 18
-#define USER_ESID_BITS_1T 6
-#define USER_VSID_RANGE (1UL << (USER_ESID_BITS + SID_SHIFT))
+#define USER_VSID_RANGE (1UL << (ESID_BITS + SID_SHIFT))
/*
* This macro generates asm code to compute the VSID scramble
srdi rx,rt,VSID_BITS_##size; \
clrldi rt,rt,(64-VSID_BITS_##size); \
add rt,rt,rx; /* add high and low bits */ \
- /* Now, r3 == VSID (mod 2^36-1), and lies between 0 and \
+ /* NOTE: explanation based on VSID_BITS_##size = 36 \
+ * Now, r3 == VSID (mod 2^36-1), and lies between 0 and \
* 2^36-1+2^28-1. That in particular means that if r3 >= \
* 2^36-1, then r3+1 has the 2^36 bit set. So, if r3+1 has \
* the bit clear, r3 already has the answer we want, if it \
})
#endif /* 1 */
-/*
- * This is only valid for addresses >= PAGE_OFFSET
- * The proto-VSID space is divided into two class
- * User: 0 to 2^(CONTEXT_BITS + USER_ESID_BITS) -1
- * kernel: 2^(CONTEXT_BITS + USER_ESID_BITS) to 2^(VSID_BITS) - 1
- *
- * With KERNEL_START at 0xc000000000000000, the proto vsid for
- * the kernel ends up with 0xc00000000 (36 bits). With 64TB
- * support we need to have kernel proto-VSID in the
- * [2^37 to 2^38 - 1] range due to the increased USER_ESID_BITS.
- */
-static inline unsigned long get_kernel_vsid(unsigned long ea, int ssize)
-{
- unsigned long proto_vsid;
- /*
- * We need to make sure proto_vsid for the kernel is
- * >= 2^(CONTEXT_BITS + USER_ESID_BITS[_1T])
- */
- if (ssize == MMU_SEGSIZE_256M) {
- proto_vsid = ea >> SID_SHIFT;
- proto_vsid |= (1UL << (CONTEXT_BITS + USER_ESID_BITS));
- return vsid_scramble(proto_vsid, 256M);
- }
- proto_vsid = ea >> SID_SHIFT_1T;
- proto_vsid |= (1UL << (CONTEXT_BITS + USER_ESID_BITS_1T));
- return vsid_scramble(proto_vsid, 1T);
-}
-
/* Returns the segment size indicator for a user address */
static inline int user_segment_size(unsigned long addr)
{
return MMU_SEGSIZE_256M;
}
-/* This is only valid for user addresses (which are below 2^44) */
static inline unsigned long get_vsid(unsigned long context, unsigned long ea,
int ssize)
{
+ /*
+ * Bad address. We return VSID 0 for that
+ */
+ if ((ea & ~REGION_MASK) >= PGTABLE_RANGE)
+ return 0;
+
if (ssize == MMU_SEGSIZE_256M)
- return vsid_scramble((context << USER_ESID_BITS)
+ return vsid_scramble((context << ESID_BITS)
| (ea >> SID_SHIFT), 256M);
- return vsid_scramble((context << USER_ESID_BITS_1T)
+ return vsid_scramble((context << ESID_BITS_1T)
| (ea >> SID_SHIFT_1T), 1T);
}
+/*
+ * This is only valid for addresses >= PAGE_OFFSET
+ *
+ * For kernel space, we use the top 4 context ids to map addresses as below
+ * 0x7fffc - [ 0xc000000000000000 - 0xc0003fffffffffff ]
+ * 0x7fffd - [ 0xd000000000000000 - 0xd0003fffffffffff ]
+ * 0x7fffe - [ 0xe000000000000000 - 0xe0003fffffffffff ]
+ * 0x7ffff - [ 0xf000000000000000 - 0xf0003fffffffffff ]
+ */
+static inline unsigned long get_kernel_vsid(unsigned long ea, int ssize)
+{
+ unsigned long context;
+
+ /*
+ * the kernel takes the top 4 contexts from the available range
+ */
+ context = (MAX_USER_CONTEXT) + ((ea >> 60) - 0xc) + 1;
+ return get_vsid(context, ea, ssize);
+}
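To make the mapping concrete, a worked example (a sketch; every constant
follows from the defines earlier in this header):

	/*
	 * ea      = 0xd000000000012345, 256MB segments (SID_SHIFT == 28)
	 * context = MAX_USER_CONTEXT + ((ea >> 60) - 0xc) + 1
	 *         = ((1 << 19) - 5) + 1 + 1 = 0x7fffd  (matches the table)
	 * esid    = segment index within the region, here 0
	 * proto-VSID = (context << ESID_BITS) | esid
	 * VSID    = (proto-VSID * VSID_MULTIPLIER_256M) % VSID_MODULUS_256M,
	 *           with VSID_MODULUS_256M == 2^37 - 1
	 */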
#endif /* __ASSEMBLY__ */
#endif /* _ASM_POWERPC_MMU_HASH64_H_ */
.cpu_features = CPU_FTRS_PPC970,
.cpu_user_features = COMMON_USER_POWER4 |
PPC_FEATURE_HAS_ALTIVEC_COMP,
- .mmu_features = MMU_FTR_HPTE_TABLE,
+ .mmu_features = MMU_FTRS_PPC970,
.icache_bsize = 128,
.dcache_bsize = 128,
.num_pmcs = 8,
#include <asm/code-patching.h>
#include <asm/machdep.h>
+#if !defined(CONFIG_64BIT) || defined(CONFIG_PPC_BOOK3E_64)
extern void epapr_ev_idle(void);
extern u32 epapr_ev_idle_start[];
+#endif
bool epapr_paravirt_enabled;
for (i = 0; i < (len / 4); i++) {
patch_instruction(epapr_hypercall_start + i, insts[i]);
+#if !defined(CONFIG_64BIT) || defined(CONFIG_PPC_BOOK3E_64)
patch_instruction(epapr_ev_idle_start + i, insts[i]);
+#endif
}
+#if !defined(CONFIG_64BIT) || defined(CONFIG_PPC_BOOK3E_64)
if (of_get_property(hyper_node, "has-idle", NULL))
ppc_md.power_save = epapr_ev_idle;
+#endif
epapr_paravirt_enabled = true;
#endif /* __DISABLED__ */
-/*
- * r13 points to the PACA, r9 contains the saved CR,
- * r12 contain the saved SRR1, SRR0 is still ready for return
- * r3 has the faulting address
- * r9 - r13 are saved in paca->exslb.
- * r3 is saved in paca->slb_r3
- * We assume we aren't going to take any exceptions during this procedure.
- */
-_GLOBAL(slb_miss_realmode)
- mflr r10
-#ifdef CONFIG_RELOCATABLE
- mtctr r11
-#endif
-
- stw r9,PACA_EXSLB+EX_CCR(r13) /* save CR in exc. frame */
- std r10,PACA_EXSLB+EX_LR(r13) /* save LR */
-
- bl .slb_allocate_realmode
-
- /* All done -- return from exception. */
-
- ld r10,PACA_EXSLB+EX_LR(r13)
- ld r3,PACA_EXSLB+EX_R3(r13)
- lwz r9,PACA_EXSLB+EX_CCR(r13) /* get saved CR */
-
- mtlr r10
-
- andi. r10,r12,MSR_RI /* check for unrecoverable exception */
- beq- 2f
-
-.machine push
-.machine "power4"
- mtcrf 0x80,r9
- mtcrf 0x01,r9 /* slb_allocate uses cr0 and cr7 */
-.machine pop
-
- RESTORE_PPR_PACA(PACA_EXSLB, r9)
- ld r9,PACA_EXSLB+EX_R9(r13)
- ld r10,PACA_EXSLB+EX_R10(r13)
- ld r11,PACA_EXSLB+EX_R11(r13)
- ld r12,PACA_EXSLB+EX_R12(r13)
- ld r13,PACA_EXSLB+EX_R13(r13)
- rfid
- b . /* prevent speculative execution */
-
-2: mfspr r11,SPRN_SRR0
- ld r10,PACAKBASE(r13)
- LOAD_HANDLER(r10,unrecov_slb)
- mtspr SPRN_SRR0,r10
- ld r10,PACAKMSR(r13)
- mtspr SPRN_SRR1,r10
- rfid
- b .
-
-unrecov_slb:
- EXCEPTION_PROLOG_COMMON(0x4100, PACA_EXSLB)
- DISABLE_INTS
- bl .save_nvgprs
-1: addi r3,r1,STACK_FRAME_OVERHEAD
- bl .unrecoverable_exception
- b 1b
-
-
-#ifdef CONFIG_PPC_970_NAP
-power4_fixup_nap:
- andc r9,r9,r10
- std r9,TI_LOCAL_FLAGS(r11)
- ld r10,_LINK(r1) /* make idle task do the */
- std r10,_NIP(r1) /* equivalent of a blr */
- blr
-#endif
-
.align 7
.globl alignment_common
alignment_common:
#endif /* CONFIG_PPC_POWERNV */
+/*
+ * r13 points to the PACA, r9 contains the saved CR,
+ * r12 contain the saved SRR1, SRR0 is still ready for return
+ * r3 has the faulting address
+ * r9 - r13 are saved in paca->exslb.
+ * r3 is saved in paca->slb_r3
+ * We assume we aren't going to take any exceptions during this procedure.
+ */
+_GLOBAL(slb_miss_realmode)
+ mflr r10
+#ifdef CONFIG_RELOCATABLE
+ mtctr r11
+#endif
+
+ stw r9,PACA_EXSLB+EX_CCR(r13) /* save CR in exc. frame */
+ std r10,PACA_EXSLB+EX_LR(r13) /* save LR */
+
+ bl .slb_allocate_realmode
+
+ /* All done -- return from exception. */
+
+ ld r10,PACA_EXSLB+EX_LR(r13)
+ ld r3,PACA_EXSLB+EX_R3(r13)
+ lwz r9,PACA_EXSLB+EX_CCR(r13) /* get saved CR */
+
+ mtlr r10
+
+ andi. r10,r12,MSR_RI /* check for unrecoverable exception */
+ beq- 2f
+
+.machine push
+.machine "power4"
+ mtcrf 0x80,r9
+ mtcrf 0x01,r9 /* slb_allocate uses cr0 and cr7 */
+.machine pop
+
+ RESTORE_PPR_PACA(PACA_EXSLB, r9)
+ ld r9,PACA_EXSLB+EX_R9(r13)
+ ld r10,PACA_EXSLB+EX_R10(r13)
+ ld r11,PACA_EXSLB+EX_R11(r13)
+ ld r12,PACA_EXSLB+EX_R12(r13)
+ ld r13,PACA_EXSLB+EX_R13(r13)
+ rfid
+ b . /* prevent speculative execution */
+
+2: mfspr r11,SPRN_SRR0
+ ld r10,PACAKBASE(r13)
+ LOAD_HANDLER(r10,unrecov_slb)
+ mtspr SPRN_SRR0,r10
+ ld r10,PACAKMSR(r13)
+ mtspr SPRN_SRR1,r10
+ rfid
+ b .
+
+unrecov_slb:
+ EXCEPTION_PROLOG_COMMON(0x4100, PACA_EXSLB)
+ DISABLE_INTS
+ bl .save_nvgprs
+1: addi r3,r1,STACK_FRAME_OVERHEAD
+ bl .unrecoverable_exception
+ b 1b
+
+
+#ifdef CONFIG_PPC_970_NAP
+power4_fixup_nap:
+ andc r9,r9,r10
+ std r9,TI_LOCAL_FLAGS(r11)
+ ld r10,_LINK(r1) /* make idle task do the */
+ std r10,_NIP(r1) /* equivalent of a blr */
+ blr
+#endif
+
/*
* Hash table stuff
*/
_GLOBAL(do_stab_bolted)
stw r9,PACA_EXSLB+EX_CCR(r13) /* save CR in exc. frame */
std r11,PACA_EXSLB+EX_SRR0(r13) /* save SRR0 in exc. frame */
+ mfspr r11,SPRN_DAR /* ea */
+ /*
+ * check for bad kernel/user address
+ * (ea & ~REGION_MASK) >= PGTABLE_RANGE
+ */
+ rldicr. r9,r11,4,(63 - 46 - 4)
+ li r9,0 /* VSID = 0 for bad address */
+ bne- 0f
+
+ /*
+ * Calculate VSID:
+ * This is the kernel vsid, we take the context for it from the
+ * top of the range: context = (MAX_USER_CONTEXT) + ((ea >> 60) - 0xc) + 1.
+ * Here we know that (ea >> 60) == 0xc
+ */
+ lis r9,(MAX_USER_CONTEXT + 1)@ha
+ addi r9,r9,(MAX_USER_CONTEXT + 1)@l
+
+ srdi r10,r11,SID_SHIFT
+ rldimi r10,r9,ESID_BITS,0 /* proto vsid */
+ ASM_VSID_SCRAMBLE(r10, r9, 256M)
+ rldic r9,r10,12,16 /* r9 = vsid << 12 */
+
+0:
/* Hash to the primary group */
ld r10,PACASTABVIRT(r13)
- mfspr r11,SPRN_DAR
- srdi r11,r11,28
+ srdi r11,r11,SID_SHIFT
rldimi r10,r11,7,52 /* r10 = first ste of the group */
- /* Calculate VSID */
- /* This is a kernel address, so protovsid = ESID | 1 << 37 */
- li r9,0x1
- rldimi r11,r9,(CONTEXT_BITS + USER_ESID_BITS),0
- ASM_VSID_SCRAMBLE(r11, r9, 256M)
- rldic r9,r11,12,16 /* r9 = vsid << 12 */
-
/* Search the primary group for a free entry */
1: ld r11,0(r10) /* Test valid bit of the current ste */
andi. r11,r11,0x80
{
}
#else
-static void __reloc_toc(void *tocstart, unsigned long offset,
- unsigned long nr_entries)
+static void __reloc_toc(unsigned long offset, unsigned long nr_entries)
{
unsigned long i;
- unsigned long *toc_entry = (unsigned long *)tocstart;
+ unsigned long *toc_entry;
+
+ /* Get the start of the TOC by using r2 directly. */
+ asm volatile("addi %0,2,-0x8000" : "=b" (toc_entry));
for (i = 0; i < nr_entries; i++) {
*toc_entry = *toc_entry + offset;
unsigned long nr_entries =
(__prom_init_toc_end - __prom_init_toc_start) / sizeof(long);
- /* Need to add offset to get at __prom_init_toc_start */
- __reloc_toc(__prom_init_toc_start + offset, offset, nr_entries);
+ __reloc_toc(offset, nr_entries);
mb();
}
mb();
- /* __prom_init_toc_start has been relocated, no need to add offset */
- __reloc_toc(__prom_init_toc_start, -offset, nr_entries);
+ __reloc_toc(-offset, nr_entries);
}
#endif
#endif
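(The inline asm works because the 64-bit PowerPC ELF ABI keeps r2 pointing
0x8000 past the start of the TOC, so addi %0,2,-0x8000 recovers the first
TOC entry wherever the code is currently executing, removing the need for
the offset-adjusted pointer the old callers had to precompute.)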
brk.address = bp_info->addr & ~7UL;
brk.type = HW_BRK_TYPE_TRANSLATE;
+ brk.len = 8;
if (bp_info->trigger_type & PPC_BREAKPOINT_TRIGGER_READ)
brk.type |= HW_BRK_TYPE_READ;
if (bp_info->trigger_type & PPC_BREAKPOINT_TRIGGER_WRITE)
vcpu3s->context_id[0] = err;
vcpu3s->proto_vsid_max = ((vcpu3s->context_id[0] + 1)
- << USER_ESID_BITS) - 1;
- vcpu3s->proto_vsid_first = vcpu3s->context_id[0] << USER_ESID_BITS;
+ << ESID_BITS) - 1;
+ vcpu3s->proto_vsid_first = vcpu3s->context_id[0] << ESID_BITS;
vcpu3s->proto_vsid_next = vcpu3s->proto_vsid_first;
kvmppc_mmu_hpte_init(vcpu);
unsigned long vpn = hpt_vpn(vaddr, vsid, ssize);
unsigned long tprot = prot;
+ /*
+ * If we hit a bad address return error.
+ */
+ if (!vsid)
+ return -1;
/* Make kernel text executable */
if (overlaps_kernel_text(vaddr, vaddr + step))
tprot &= ~HPTE_R_N;
/* Initialize stab / SLB management */
if (mmu_has_feature(MMU_FTR_SLB))
slb_initialize();
+ else
+ stab_initialize(get_paca()->stab_real);
}
#ifdef CONFIG_SMP
DBG_LOW("hash_page(ea=%016lx, access=%lx, trap=%lx\n",
ea, access, trap);
- if ((ea & ~REGION_MASK) >= PGTABLE_RANGE) {
- DBG_LOW(" out of pgtable range !\n");
- return 1;
- }
-
/* Get region & vsid */
switch (REGION_ID(ea)) {
case USER_REGION_ID:
}
DBG_LOW(" mm=%p, mm->pgdir=%p, vsid=%016lx\n", mm, mm->pgd, vsid);
+ /* Bad address. */
+ if (!vsid) {
+ DBG_LOW("Bad address!\n");
+ return 1;
+ }
/* Get pgdir */
pgdir = mm->pgd;
if (pgdir == NULL)
/* Get VSID */
ssize = user_segment_size(ea);
vsid = get_vsid(mm->context.id, ea, ssize);
+ if (!vsid)
+ return;
/* Hash doesn't like irqs */
local_irq_save(flags);
hash = hpt_hash(vpn, PAGE_SHIFT, mmu_kernel_ssize);
hpteg = ((hash & htab_hash_mask) * HPTES_PER_GROUP);
+ /* Don't create HPTE entries for bad address */
+ if (!vsid)
+ return;
ret = ppc_md.hpte_insert(hpteg, vpn, __pa(vaddr),
mode, HPTE_V_BOLTED,
mmu_linear_psize, mmu_kernel_ssize);
static DEFINE_SPINLOCK(mmu_context_lock);
static DEFINE_IDA(mmu_context_ida);
-/*
- * 256MB segment
- * The proto-VSID space has 2^(CONTEX_BITS + USER_ESID_BITS) - 1 segments
- * available for user mappings. Each segment contains 2^28 bytes. Each
- * context maps 2^46 bytes (64TB) so we can support 2^19-1 contexts
- * (19 == 37 + 28 - 46).
- */
-#define MAX_CONTEXT ((1UL << CONTEXT_BITS) - 1)
-
int __init_new_context(void)
{
int index;
else if (err)
return err;
- if (index > MAX_CONTEXT) {
+ if (index > MAX_USER_CONTEXT) {
spin_lock(&mmu_context_lock);
ida_remove(&mmu_context_ida, index);
spin_unlock(&mmu_context_lock);
#endif
#ifdef CONFIG_PPC_STD_MMU_64
-#if TASK_SIZE_USER64 > (1UL << (USER_ESID_BITS + SID_SHIFT))
+#if TASK_SIZE_USER64 > (1UL << (ESID_BITS + SID_SHIFT))
#error TASK_SIZE_USER64 exceeds user VSID range
#endif
#endif
* No other registers are examined or changed.
*/
_GLOBAL(slb_allocate_realmode)
- /* r3 = faulting address */
+ /*
+ * check for bad kernel/user address
+ * (ea & ~REGION_MASK) >= PGTABLE_RANGE
+ */
+ rldicr. r9,r3,4,(63 - 46 - 4)
+ bne- 8f
srdi r9,r3,60 /* get region */
- srdi r10,r3,28 /* get esid */
+ srdi r10,r3,SID_SHIFT /* get esid */
cmpldi cr7,r9,0xc /* cmp PAGE_OFFSET for later use */
/* r3 = address, r10 = esid, cr7 = <> PAGE_OFFSET */
*/
_GLOBAL(slb_miss_kernel_load_linear)
li r11,0
- li r9,0x1
/*
- * for 1T we shift 12 bits more. slb_finish_load_1T will do
- * the necessary adjustment
+ * context = (MAX_USER_CONTEXT) + ((ea >> 60) - 0xc) + 1
+ * r9 = region id.
*/
- rldimi r10,r9,(CONTEXT_BITS + USER_ESID_BITS),0
+ addis r9,r9,(MAX_USER_CONTEXT - 0xc + 1)@ha
+ addi r9,r9,(MAX_USER_CONTEXT - 0xc + 1)@l
+
+
BEGIN_FTR_SECTION
b slb_finish_load
END_MMU_FTR_SECTION_IFCLR(MMU_FTR_1T_SEGMENT)
_GLOBAL(slb_miss_kernel_load_io)
li r11,0
6:
- li r9,0x1
/*
- * for 1T we shift 12 bits more. slb_finish_load_1T will do
- * the necessary adjustment
+ * context = (MAX_USER_CONTEXT) + ((ea >> 60) - 0xc) + 1
+ * r9 = region id.
*/
- rldimi r10,r9,(CONTEXT_BITS + USER_ESID_BITS),0
+ addis r9,r9,(MAX_USER_CONTEXT - 0xc + 1)@ha
+ addi r9,r9,(MAX_USER_CONTEXT - 0xc + 1)@l
+
BEGIN_FTR_SECTION
b slb_finish_load
END_MMU_FTR_SECTION_IFCLR(MMU_FTR_1T_SEGMENT)
b slb_finish_load_1T
-0: /* user address: proto-VSID = context << 15 | ESID. First check
- * if the address is within the boundaries of the user region
- */
- srdi. r9,r10,USER_ESID_BITS
- bne- 8f /* invalid ea bits set */
-
-
+0:
/* when using slices, we extract the psize off the slice bitmaps
* and then we need to get the sllp encoding off the mmu_psize_defs
* array.
ld r9,PACACONTEXTID(r13)
BEGIN_FTR_SECTION
cmpldi r10,0x1000
-END_MMU_FTR_SECTION_IFSET(MMU_FTR_1T_SEGMENT)
- rldimi r10,r9,USER_ESID_BITS,0
-BEGIN_FTR_SECTION
bge slb_finish_load_1T
END_MMU_FTR_SECTION_IFSET(MMU_FTR_1T_SEGMENT)
b slb_finish_load
8: /* invalid EA */
li r10,0 /* BAD_VSID */
+ li r9,0 /* BAD_VSID */
li r11,SLB_VSID_USER /* flags don't much matter */
b slb_finish_load
/* get context to calculate proto-VSID */
ld r9,PACACONTEXTID(r13)
- rldimi r10,r9,USER_ESID_BITS,0
-
/* fall through slb_finish_load */
#endif /* __DISABLED__ */
/*
* Finish loading of an SLB entry and return
*
- * r3 = EA, r10 = proto-VSID, r11 = flags, clobbers r9, cr7 = <> PAGE_OFFSET
+ * r3 = EA, r9 = context, r10 = ESID, r11 = flags, clobbers r9, cr7 = <> PAGE_OFFSET
*/
slb_finish_load:
+ rldimi r10,r9,ESID_BITS,0
ASM_VSID_SCRAMBLE(r10,r9,256M)
/*
* bits above VSID_BITS_256M need to be ignored from r10
/*
* Finish loading of a 1T SLB entry (for the kernel linear mapping) and return.
*
- * r3 = EA, r10 = proto-VSID, r11 = flags, clobbers r9
+ * r3 = EA, r9 = context, r10 = ESID(256MB), r11 = flags, clobbers r9
*/
slb_finish_load_1T:
- srdi r10,r10,40-28 /* get 1T ESID */
+ srdi r10,r10,(SID_SHIFT_1T - SID_SHIFT) /* get 1T ESID */
+ rldimi r10,r9,ESID_BITS_1T,0
ASM_VSID_SCRAMBLE(r10,r9,1T)
/*
* bits above VSID_BITS_1T need to be ignored from r10
if (!is_kernel_addr(addr)) {
ssize = user_segment_size(addr);
vsid = get_vsid(mm->context.id, addr, ssize);
- WARN_ON(vsid == 0);
} else {
vsid = get_kernel_vsid(addr, mmu_kernel_ssize);
ssize = mmu_kernel_ssize;
}
+ WARN_ON(vsid == 0);
vpn = hpt_vpn(addr, vsid, ssize);
rpte = __real_pte(__pte(pte), ptep);
.attrs = power7_events_attr,
};
+PMU_FORMAT_ATTR(event, "config:0-19");
+
+static struct attribute *power7_pmu_format_attr[] = {
+ &format_attr_event.attr,
+ NULL,
+};
+
+struct attribute_group power7_pmu_format_group = {
+ .name = "format",
+ .attrs = power7_pmu_format_attr,
+};
+
static const struct attribute_group *power7_pmu_attr_groups[] = {
+ &power7_pmu_format_group,
&power7_pmu_events_group,
NULL,
};
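(Registering the format group advertises the raw event field to user space;
assuming the PMU is registered under the name "cpu", it appears as
/sys/bus/event_source/devices/cpu/format/event, matching the config:0-19
layout above so tools can parse cpu/event=0xNN/ syntax.)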
return IRQ_HANDLED;
};
-static int __devinit gpio_halt_probe(struct platform_device *pdev)
+static int gpio_halt_probe(struct platform_device *pdev)
{
enum of_gpio_flags flags;
struct device_node *node = pdev->dev.of_node;
return 0;
}
-static int __devexit gpio_halt_remove(struct platform_device *pdev)
+static int gpio_halt_remove(struct platform_device *pdev)
{
if (halt_node) {
int gpio = of_get_gpio(halt_node, 0);
.of_match_table = gpio_halt_match,
},
.probe = gpio_halt_probe,
- .remove = __devexit_p(gpio_halt_remove),
+ .remove = gpio_halt_remove,
};
module_platform_driver(gpio_halt_driver);
select PPC_HAVE_PMU_SUPPORT
config POWER3
- bool
depends on PPC64 && PPC_BOOK3S
- default y if !POWER4_ONLY
+ def_bool y
config POWER4
depends on PPC64 && PPC_BOOK3S
but somewhat slower on other machines. This option only changes
the scheduling of instructions, not the selection of instructions
itself, so the resulting kernel will keep running on all other
- machines. When building a kernel that is supposed to run only
- on Cell, you should also select the POWER4_ONLY option.
+ machines.
# this is temp to handle compat with arch=ppc
config 8xx
u32 reserved[4];
} __packed;
+#define EQC_WR_PROHIBIT 22
+
struct msb {
u8 fmt:4;
u8 oc:4;
#define OP_STATE_TEMP_ERR 2
#define OP_STATE_PERM_ERR 3
+enum scm_event {SCM_CHANGE, SCM_AVAIL};
+
struct scm_driver {
struct device_driver drv;
int (*probe) (struct scm_device *scmdev);
int (*remove) (struct scm_device *scmdev);
- void (*notify) (struct scm_device *scmdev);
+ void (*notify) (struct scm_device *scmdev, enum scm_event event);
void (*handler) (struct scm_device *scmdev, void *data, int error);
};
static inline void __tlb_flush_mm(struct mm_struct * mm)
{
- if (unlikely(cpumask_empty(mm_cpumask(mm))))
- return;
/*
* If the machine has IDTE we prefer to do a per mm flush
* on all cpus instead of doing a local flush if the mm
UPDATE_VTIME %r14,%r15,__LC_MCCK_ENTER_TIMER
mcck_skip:
SWITCH_ASYNC __LC_GPREGS_SAVE_AREA+32,__LC_PANIC_STACK,PAGE_SHIFT
- mvc __PT_R0(64,%r11),__LC_GPREGS_SAVE_AREA
+ stm %r0,%r7,__PT_R0(%r11)
+ mvc __PT_R8(32,%r11),__LC_GPREGS_SAVE_AREA+32
stm %r8,%r9,__PT_PSW(%r11)
xc __SF_BACKCHAIN(4,%r15),__SF_BACKCHAIN(%r15)
l %r1,BASED(.Ldo_machine_check)
UPDATE_VTIME %r14,__LC_MCCK_ENTER_TIMER
LAST_BREAK %r14
mcck_skip:
- lghi %r14,__LC_GPREGS_SAVE_AREA
- mvc __PT_R0(128,%r11),0(%r14)
+ lghi %r14,__LC_GPREGS_SAVE_AREA+64
+ stmg %r0,%r7,__PT_R0(%r11)
+ mvc __PT_R8(64,%r11),0(%r14)
stmg %r8,%r9,__PT_PSW(%r11)
xc __SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15)
lgr %r2,%r11 # pass pointer to pt_regs
/* Split remaining virtual space between 1:1 mapping & vmemmap array */
tmp = VMALLOC_START / (PAGE_SIZE + sizeof(struct page));
+ /* vmemmap contains a multiple of PAGES_PER_SECTION struct pages */
+ tmp = SECTION_ALIGN_UP(tmp);
tmp = VMALLOC_START - tmp * sizeof(struct page);
tmp &= ~((vmax >> 11) - 1); /* align to page table level */
tmp = min(tmp, 1UL << MAX_PHYSMEM_BITS);
default "arch/sparc/configs/sparc32_defconfig" if SPARC32
default "arch/sparc/configs/sparc64_defconfig" if SPARC64
-# CONFIG_BITS can be used at source level to get 32/64 bits
-config BITS
- int
- default 32 if SPARC32
- default 64 if SPARC64
-
config IOMMU_HELPER
bool
default y if SPARC64
config GENERIC_HWEIGHT
bool
- default y if !ULTRA_HAS_POPULATION_COUNT
+ default y
config GENERIC_CALIBRATE_DELAY
bool
#define SUN4V_CHIP_NIAGARA3 0x03
#define SUN4V_CHIP_NIAGARA4 0x04
#define SUN4V_CHIP_NIAGARA5 0x05
+#define SUN4V_CHIP_SPARC64X 0x8a
#define SUN4V_CHIP_UNKNOWN 0xff
#ifndef __ASSEMBLY__
sparc_pmu_type = "niagara5";
break;
+ case SUN4V_CHIP_SPARC64X:
+ sparc_cpu_type = "SPARC64-X";
+ sparc_fpu_type = "SPARC64-X integrated FPU";
+ sparc_pmu_type = "sparc64-x";
+ break;
+
default:
printk(KERN_WARNING "CPU: Unknown sun4v cpu type [%s]\n",
prom_cpu_compatible);
.asciz "SUNW,UltraSPARC-T"
prom_sparc_prefix:
.asciz "SPARC-"
+prom_sparc64x_prefix:
+ .asciz "SPARC64-X"
.align 4
prom_root_compatible:
.skip 64
cmp %g2, 'T'
be,pt %xcc, 70f
cmp %g2, 'M'
- bne,pn %xcc, 4f
+ bne,pn %xcc, 49f
nop
70: ldub [%g1 + 7], %g2
cmp %g2, '5'
be,pt %xcc, 5f
mov SUN4V_CHIP_NIAGARA5, %g4
- ba,pt %xcc, 4f
+ ba,pt %xcc, 49f
nop
91: sethi %hi(prom_cpu_compatible), %g1
mov SUN4V_CHIP_NIAGARA2, %g4
4:
+ /* Athena */
+ sethi %hi(prom_cpu_compatible), %g1
+ or %g1, %lo(prom_cpu_compatible), %g1
+ sethi %hi(prom_sparc64x_prefix), %g7
+ or %g7, %lo(prom_sparc64x_prefix), %g7
+ mov 9, %g3
+41: ldub [%g7], %g2
+ ldub [%g1], %g4
+ cmp %g2, %g4
+ bne,pn %icc, 49f
+ add %g7, 1, %g7
+ subcc %g3, 1, %g3
+ bne,pt %xcc, 41b
+ add %g1, 1, %g1
+ mov SUN4V_CHIP_SPARC64X, %g4
+ ba,pt %xcc, 5f
+ nop
+
+49:
mov SUN4V_CHIP_UNKNOWN, %g4
5: sethi %hi(sun4v_chip_type), %g2
or %g2, %lo(sun4v_chip_type), %g2
#define CAP9_IOMAP_OFS 0x20
#define CAP9_BARSIZE_OFS 0x24
+#define TGT 256
+
struct grpci2_priv {
struct leon_pci_info info; /* must be on top of this structure */
struct grpci2_regs *regs;
if (where & 0x3)
return -EINVAL;
- if (bus == 0 && PCI_SLOT(devfn) != 0)
- devfn += (0x8 * 6);
+ if (bus == 0) {
+ devfn += (0x8 * 6); /* start at AD16=Device0 */
+ } else if (bus == TGT) {
+ bus = 0;
+ devfn = 0; /* special case: bridge controller itself */
+ }
/* Select bus */
spin_lock_irqsave(&grpci2_dev_lock, flags);
if (where & 0x3)
return -EINVAL;
- if (bus == 0 && PCI_SLOT(devfn) != 0)
- devfn += (0x8 * 6);
+ if (bus == 0) {
+ devfn += (0x8 * 6); /* start at AD16=Device0 */
+ } else if (bus == TGT) {
+ bus = 0;
+ devfn = 0; /* special case: bridge controller itself */
+ }
/* Select bus */
spin_lock_irqsave(&grpci2_dev_lock, flags);
unsigned int busno = bus->number;
int ret;
- if (PCI_SLOT(devfn) > 15 || (PCI_SLOT(devfn) == 0 && busno == 0)) {
+ if (PCI_SLOT(devfn) > 15 || busno > 255) {
*val = ~0;
return 0;
}
struct grpci2_priv *priv = grpci2priv;
unsigned int busno = bus->number;
- if (PCI_SLOT(devfn) > 15 || (PCI_SLOT(devfn) == 0 && busno == 0))
+ if (PCI_SLOT(devfn) > 15 || busno > 255)
return 0;
#ifdef GRPCI2_DEBUG_CFGACCESS
REGSTORE(regs->ahbmst_map[i], priv->pci_area);
/* Get the GRPCI2 Host PCI ID */
- grpci2_cfg_r32(priv, 0, 0, PCI_VENDOR_ID, &priv->pciid);
+ grpci2_cfg_r32(priv, TGT, 0, PCI_VENDOR_ID, &priv->pciid);
/* Get address to first (always defined) capability structure */
- grpci2_cfg_r8(priv, 0, 0, PCI_CAPABILITY_LIST, &capptr);
+ grpci2_cfg_r8(priv, TGT, 0, PCI_CAPABILITY_LIST, &capptr);
/* Enable/Disable Byte twisting */
- grpci2_cfg_r32(priv, 0, 0, capptr+CAP9_IOMAP_OFS, &io_map);
+ grpci2_cfg_r32(priv, TGT, 0, capptr+CAP9_IOMAP_OFS, &io_map);
io_map = (io_map & ~0x1) | (priv->bt_enabled ? 1 : 0);
- grpci2_cfg_w32(priv, 0, 0, capptr+CAP9_IOMAP_OFS, io_map);
+ grpci2_cfg_w32(priv, TGT, 0, capptr+CAP9_IOMAP_OFS, io_map);
/* Setup the Host's PCI Target BARs for other peripherals to access,
* and do DMA to the host's memory. The target BARs can be sized and
pciadr = 0;
}
}
- grpci2_cfg_w32(priv, 0, 0, capptr+CAP9_BARSIZE_OFS+i*4, bar_sz);
- grpci2_cfg_w32(priv, 0, 0, PCI_BASE_ADDRESS_0+i*4, pciadr);
- grpci2_cfg_w32(priv, 0, 0, capptr+CAP9_BAR_OFS+i*4, ahbadr);
+ grpci2_cfg_w32(priv, TGT, 0, capptr+CAP9_BARSIZE_OFS+i*4,
+ bar_sz);
+ grpci2_cfg_w32(priv, TGT, 0, PCI_BASE_ADDRESS_0+i*4, pciadr);
+ grpci2_cfg_w32(priv, TGT, 0, capptr+CAP9_BAR_OFS+i*4, ahbadr);
printk(KERN_INFO " TGT BAR[%d]: 0x%08x (PCI)-> 0x%08x\n",
i, pciadr, ahbadr);
}
/* set as bus master and enable pci memory responses */
- grpci2_cfg_r32(priv, 0, 0, PCI_COMMAND, &data);
+ grpci2_cfg_r32(priv, TGT, 0, PCI_COMMAND, &data);
data |= (PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER);
- grpci2_cfg_w32(priv, 0, 0, PCI_COMMAND, data);
+ grpci2_cfg_w32(priv, TGT, 0, PCI_COMMAND, data);
/* Enable Error response (CPU-TRAP) on illegal memory access. */
REGSTORE(regs->ctrl, CTRL_ER | CTRL_PE);
CONFIG_MD_RAID1=m
CONFIG_MD_RAID10=m
CONFIG_MD_RAID456=m
-CONFIG_MULTICORE_RAID456=y
CONFIG_MD_FAULTY=m
CONFIG_BLK_DEV_DM=m
CONFIG_DM_DEBUG=y
CONFIG_MD_RAID1=m
CONFIG_MD_RAID10=m
CONFIG_MD_RAID456=m
-CONFIG_MULTICORE_RAID456=y
CONFIG_MD_FAULTY=m
CONFIG_BLK_DEV_DM=m
CONFIG_DM_DEBUG=y
* a post_handler or break_handler).
*/
int boostable;
+ bool if_modifier;
};
struct arch_optimized_insn {
gpa_t time;
struct pvclock_vcpu_time_info hv_clock;
unsigned int hw_tsc_khz;
- unsigned int time_offset;
- struct page *time_page;
+ struct gfn_to_hva_cache pv_time;
+ bool pv_time_enabled;
/* set guest stopped flag in pvclock flags field */
bool pvclock_set_guest_stopped_request;
return _hypercall3(int, console_io, cmd, count, str);
}
-extern int __must_check HYPERVISOR_physdev_op_compat(int, void *);
+extern int __must_check xen_physdev_op_compat(int, void *);
static inline int
HYPERVISOR_physdev_op(int cmd, void *arg)
{
int rc = _hypercall2(int, physdev_op, cmd, arg);
if (unlikely(rc == -ENOSYS))
- rc = HYPERVISOR_physdev_op_compat(cmd, arg);
+ rc = xen_physdev_op_compat(cmd, arg);
return rc;
}
#define SNB_C1_AUTO_UNDEMOTE (1UL << 27)
#define SNB_C3_AUTO_UNDEMOTE (1UL << 28)
+#define MSR_PLATFORM_INFO 0x000000ce
#define MSR_MTRRcap 0x000000fe
#define MSR_IA32_BBL_CR_CTL 0x00000119
#define MSR_IA32_BBL_CR_CTL3 0x0000011e
FIXED_EVENT_CONSTRAINT(0x00c0, 0), /* INST_RETIRED.ANY */
FIXED_EVENT_CONSTRAINT(0x003c, 1), /* CPU_CLK_UNHALTED.CORE */
FIXED_EVENT_CONSTRAINT(0x0300, 2), /* CPU_CLK_UNHALTED.REF */
+ INTEL_UEVENT_CONSTRAINT(0x04a3, 0xf), /* CYCLE_ACTIVITY.CYCLES_NO_DISPATCH */
+ INTEL_UEVENT_CONSTRAINT(0x05a3, 0xf), /* CYCLE_ACTIVITY.STALLS_L2_PENDING */
+ INTEL_UEVENT_CONSTRAINT(0x02a3, 0x4), /* CYCLE_ACTIVITY.CYCLES_L1D_PENDING */
+ INTEL_UEVENT_CONSTRAINT(0x06a3, 0x4), /* CYCLE_ACTIVITY.STALLS_L1D_PENDING */
INTEL_EVENT_CONSTRAINT(0x48, 0x4), /* L1D_PEND_MISS.PENDING */
INTEL_UEVENT_CONSTRAINT(0x01c0, 0x2), /* INST_RETIRED.PREC_DIST */
INTEL_EVENT_CONSTRAINT(0xcd, 0x8), /* MEM_TRANS_RETIRED.LOAD_LATENCY */
else
p->ainsn.boostable = -1;
+ /* Check whether the instruction modifies Interrupt Flag or not */
+ p->ainsn.if_modifier = is_IF_modifier(p->ainsn.insn);
+
/* Also, displacement change doesn't affect the first byte */
p->opcode = p->ainsn.insn[0];
}
__this_cpu_write(current_kprobe, p);
kcb->kprobe_saved_flags = kcb->kprobe_old_flags
= (regs->flags & (X86_EFLAGS_TF | X86_EFLAGS_IF));
- if (is_IF_modifier(p->ainsn.insn))
+ if (p->ainsn.if_modifier)
kcb->kprobe_saved_flags &= ~X86_EFLAGS_IF;
}
struct microcode_intel ***mc_saved;
mc_saved = (struct microcode_intel ***)
- __pa_symbol(&mc_saved_data->mc_saved);
+ __pa_nodebug(&mc_saved_data->mc_saved);
for (i = 0; i < mc_saved_data->mc_saved_count; i++) {
struct microcode_intel *p;
p = *(struct microcode_intel **)
- __pa(mc_saved_data->mc_saved + i);
- mc_saved_tmp[i] = (struct microcode_intel *)__pa(p);
+ __pa_nodebug(mc_saved_data->mc_saved + i);
+ mc_saved_tmp[i] = (struct microcode_intel *)__pa_nodebug(p);
}
}
#endif
struct cpio_data cd;
long offset = 0;
#ifdef CONFIG_X86_32
- char *p = (char *)__pa_symbol(ucode_name);
+ char *p = (char *)__pa_nodebug(ucode_name);
#else
char *p = ucode_name;
#endif
if (mc_intel == NULL)
return;
- delay_ucode_info_p = (int *)__pa_symbol(&delay_ucode_info);
- current_mc_date_p = (int *)__pa_symbol(&current_mc_date);
+ delay_ucode_info_p = (int *)__pa_nodebug(&delay_ucode_info);
+ current_mc_date_p = (int *)__pa_nodebug(&current_mc_date);
*delay_ucode_info_p = 1;
*current_mc_date_p = mc_intel->hdr.date;
}
#endif
-static int apply_microcode_early(struct mc_saved_data *mc_saved_data,
- struct ucode_cpu_info *uci)
+static int __cpuinit apply_microcode_early(struct mc_saved_data *mc_saved_data,
+ struct ucode_cpu_info *uci)
{
struct microcode_intel *mc_intel;
unsigned int val[2];
#ifdef CONFIG_X86_32
struct boot_params *boot_params_p;
- boot_params_p = (struct boot_params *)__pa_symbol(&boot_params);
+ boot_params_p = (struct boot_params *)__pa_nodebug(&boot_params);
ramdisk_image = boot_params_p->hdr.ramdisk_image;
ramdisk_size = boot_params_p->hdr.ramdisk_size;
initrd_start_early = ramdisk_image;
initrd_end_early = initrd_start_early + ramdisk_size;
_load_ucode_intel_bsp(
- (struct mc_saved_data *)__pa_symbol(&mc_saved_data),
- (unsigned long *)__pa_symbol(&mc_saved_in_initrd),
+ (struct mc_saved_data *)__pa_nodebug(&mc_saved_data),
+ (unsigned long *)__pa_nodebug(&mc_saved_in_initrd),
initrd_start_early, initrd_end_early, &uci);
#else
ramdisk_image = boot_params.hdr.ramdisk_image;
unsigned long *initrd_start_p;
mc_saved_in_initrd_p =
- (unsigned long *)__pa_symbol(mc_saved_in_initrd);
- mc_saved_data_p = (struct mc_saved_data *)__pa_symbol(&mc_saved_data);
- initrd_start_p = (unsigned long *)__pa_symbol(&initrd_start);
- initrd_start_addr = (unsigned long)__pa_symbol(*initrd_start_p);
+ (unsigned long *)__pa_nodebug(mc_saved_in_initrd);
+ mc_saved_data_p = (struct mc_saved_data *)__pa_nodebug(&mc_saved_data);
+ initrd_start_p = (unsigned long *)__pa_nodebug(&initrd_start);
+ initrd_start_addr = (unsigned long)__pa_nodebug(*initrd_start_p);
#else
mc_saved_data_p = &mc_saved_data;
mc_saved_in_initrd_p = mc_saved_in_initrd;
unsigned long flags, this_tsc_khz;
struct kvm_vcpu_arch *vcpu = &v->arch;
struct kvm_arch *ka = &v->kvm->arch;
- void *shared_kaddr;
s64 kernel_ns, max_kernel_ns;
u64 tsc_timestamp, host_tsc;
- struct pvclock_vcpu_time_info *guest_hv_clock;
+ struct pvclock_vcpu_time_info guest_hv_clock;
u8 pvclock_flags;
bool use_master_clock;
kernel_ns = 0;
host_tsc = 0;
- /* Keep irq disabled to prevent changes to the clock */
- local_irq_save(flags);
- this_tsc_khz = __get_cpu_var(cpu_tsc_khz);
- if (unlikely(this_tsc_khz == 0)) {
- local_irq_restore(flags);
- kvm_make_request(KVM_REQ_CLOCK_UPDATE, v);
- return 1;
- }
-
/*
* If the host uses TSC clock, then passthrough TSC as stable
* to the guest.
kernel_ns = ka->master_kernel_ns;
}
spin_unlock(&ka->pvclock_gtod_sync_lock);
+
+ /* Keep irq disabled to prevent changes to the clock */
+ local_irq_save(flags);
+ this_tsc_khz = __get_cpu_var(cpu_tsc_khz);
+ if (unlikely(this_tsc_khz == 0)) {
+ local_irq_restore(flags);
+ kvm_make_request(KVM_REQ_CLOCK_UPDATE, v);
+ return 1;
+ }
if (!use_master_clock) {
host_tsc = native_read_tsc();
kernel_ns = get_kernel_ns();
local_irq_restore(flags);
- if (!vcpu->time_page)
+ if (!vcpu->pv_time_enabled)
return 0;
/*
*/
vcpu->hv_clock.version += 2;
- shared_kaddr = kmap_atomic(vcpu->time_page);
-
- guest_hv_clock = shared_kaddr + vcpu->time_offset;
+ if (unlikely(kvm_read_guest_cached(v->kvm, &vcpu->pv_time,
+ &guest_hv_clock, sizeof(guest_hv_clock))))
+ return 0;
/* retain PVCLOCK_GUEST_STOPPED if set in guest copy */
- pvclock_flags = (guest_hv_clock->flags & PVCLOCK_GUEST_STOPPED);
+ pvclock_flags = (guest_hv_clock.flags & PVCLOCK_GUEST_STOPPED);
if (vcpu->pvclock_set_guest_stopped_request) {
pvclock_flags |= PVCLOCK_GUEST_STOPPED;
vcpu->hv_clock.flags = pvclock_flags;
- memcpy(shared_kaddr + vcpu->time_offset, &vcpu->hv_clock,
- sizeof(vcpu->hv_clock));
-
- kunmap_atomic(shared_kaddr);
-
- mark_page_dirty(v->kvm, vcpu->time >> PAGE_SHIFT);
+ kvm_write_guest_cached(v->kvm, &vcpu->pv_time,
+ &vcpu->hv_clock,
+ sizeof(vcpu->hv_clock));
return 0;
}
static void kvmclock_reset(struct kvm_vcpu *vcpu)
{
- if (vcpu->arch.time_page) {
- kvm_release_page_dirty(vcpu->arch.time_page);
- vcpu->arch.time_page = NULL;
- }
+ vcpu->arch.pv_time_enabled = false;
}
static void accumulate_steal_time(struct kvm_vcpu *vcpu)
break;
case MSR_KVM_SYSTEM_TIME_NEW:
case MSR_KVM_SYSTEM_TIME: {
+ u64 gpa_offset;
kvmclock_reset(vcpu);
vcpu->arch.time = data;
if (!(data & 1))
break;
- /* ...but clean it before doing the actual write */
- vcpu->arch.time_offset = data & ~(PAGE_MASK | 1);
+ gpa_offset = data & ~(PAGE_MASK | 1);
- vcpu->arch.time_page =
- gfn_to_page(vcpu->kvm, data >> PAGE_SHIFT);
+ /* Check that the address is 32-byte aligned. */
+ if (gpa_offset & (sizeof(struct pvclock_vcpu_time_info) - 1))
+ break;
- if (is_error_page(vcpu->arch.time_page))
- vcpu->arch.time_page = NULL;
+ if (kvm_gfn_to_hva_cache_init(vcpu->kvm,
+ &vcpu->arch.pv_time, data & ~1ULL))
+ vcpu->arch.pv_time_enabled = false;
+ else
+ vcpu->arch.pv_time_enabled = true;
break;
}
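The alignment test above relies on sizeof(struct pvclock_vcpu_time_info) being a power of two (32 bytes): masking the guest-physical offset with (size - 1) keeps exactly the low bits, which must all be zero for an aligned address. A minimal standalone sketch of the idiom (names here are illustrative, not from the patch):

	#include <stdbool.h>
	#include <stdint.h>

	/* True iff addr is aligned to 'size', where size is a power of two. */
	static bool is_pow2_aligned(uint64_t addr, uint64_t size)
	{
		return (addr & (size - 1)) == 0;
	}

	/* e.g. is_pow2_aligned(gpa_offset, 32) mirrors the check above. */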
*/
static int kvm_set_guest_paused(struct kvm_vcpu *vcpu)
{
- if (!vcpu->arch.time_page)
+ if (!vcpu->arch.pv_time_enabled)
return -EINVAL;
vcpu->arch.pvclock_set_guest_stopped_request = true;
kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
goto fail_free_wbinvd_dirty_mask;
vcpu->arch.ia32_tsc_adjust_msr = 0x0;
+ vcpu->arch.pv_time_enabled = false;
kvm_async_pf_hash_reset(vcpu);
kvm_pmu_init(vcpu);
char c;
unsigned zero_len;
- for (; len; --len) {
+ for (; len; --len, to++) {
if (__get_user_nocheck(c, from++, sizeof(char)))
break;
- if (__put_user_nocheck(c, to++, sizeof(char)))
+ if (__put_user_nocheck(c, to, sizeof(char)))
break;
}
__xen_write_cr3(true, cr3);
xen_mc_issue(PARAVIRT_LAZY_CPU); /* interrupts restored */
-
- pv_mmu_ops.write_cr3 = &xen_write_cr3;
}
#endif
#endif
#ifdef CONFIG_X86_64
+ pv_mmu_ops.write_cr3 = &xen_write_cr3;
SetPagePinned(virt_to_page(level3_user_vsyscall));
#endif
xen_mark_init_mm_pinned();
* copied from blk_rq_pos(rq).
*/
if (error_sector)
- *error_sector = bio->bi_sector;
+ *error_sector = bio->bi_sector;
if (!bio_flagged(bio, BIO_UPTODATE))
ret = -EIO;
hd_struct_put(part);
}
+EXPORT_SYMBOL(delete_partition);
static ssize_t whole_disk_show(struct device *dev,
struct device_attribute *attr, char *buf)
return rc;
data_len = estatus->data_length;
gdata = (struct acpi_hest_generic_data *)(estatus + 1);
- while (data_len > sizeof(*gdata)) {
+ while (data_len >= sizeof(*gdata)) {
gedata_len = gdata->error_data_length;
if (gedata_len > data_len - sizeof(*gdata))
return -EINVAL;
static void handle_root_bridge_removal(struct acpi_device *device)
{
+ acpi_status status;
struct acpi_eject_event *ej_event;
ej_event = kmalloc(sizeof(*ej_event), GFP_KERNEL);
ej_event->device = device;
ej_event->event = ACPI_NOTIFY_EJECT_REQUEST;
- acpi_bus_hot_remove_device(ej_event);
+ status = acpi_os_hotplug_execute(acpi_bus_hot_remove_device, ej_event);
+ if (ACPI_FAILURE(status))
+ kfree(ej_event);
}
static void _handle_hotplug_event_root(struct work_struct *work)
handle = hp_work->handle;
type = hp_work->type;
- root = acpi_pci_find_root(handle);
+ acpi_scan_lock_acquire();
+ root = acpi_pci_find_root(handle);
acpi_get_name(handle, ACPI_FULL_PATHNAME, &buffer);
switch (type) {
break;
}
+ acpi_scan_lock_release();
kfree(hp_work); /* allocated in handle_hotplug_event_bridge */
kfree(buffer.pointer);
}
},
{
.callback = init_nvs_nosave,
+ .ident = "Sony Vaio VGN-FW21M",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "VGN-FW21M"),
+ },
+ },
+ {
+ .callback = init_nvs_nosave,
.ident = "Sony Vaio VPCEB17FX",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
EXPORT_SYMBOL(tegra_ahb_enable_smmu);
#endif
-#ifdef CONFIG_PM_SLEEP
+#ifdef CONFIG_PM
static int tegra_ahb_suspend(struct device *dev)
{
int i;
option libata.noacpi=1
config SATA_ZPODD
- bool "SATA Zero Power ODD Support"
+ bool "SATA Zero Power Optical Disc Drive (ZPODD) support"
depends on ATA_ACPI
default n
help
- This option adds support for SATA ZPODD. It requires both
- ODD and the platform support, and if enabled, will automatically
- power on/off the ODD when certain condition is satisfied. This
- does not impact user's experience of the ODD, only power is saved
- when ODD is not in use(i.e. no disc inside).
+	  This option adds support for SATA Zero Power Optical Disc
+	  Drive (ZPODD). It requires support from both the ODD and the
+	  platform and, if enabled, will automatically power the ODD
+	  on/off when certain conditions are satisfied. This does not
+	  impact the end user's experience of the ODD; only power is
+	  saved when the ODD is not in use (i.e. no disc inside).
If unsure, say N.
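For reference, enabling the renamed option looks like any other Kconfig symbol; a minimal .config fragment (ATA_ACPI is the dependency named above):

	CONFIG_ATA_ACPI=y
	CONFIG_SATA_ZPODD=y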
{ PCI_VDEVICE(INTEL, 0x1f37), board_ahci }, /* Avoton RAID */
{ PCI_VDEVICE(INTEL, 0x1f3e), board_ahci }, /* Avoton RAID */
{ PCI_VDEVICE(INTEL, 0x1f3f), board_ahci }, /* Avoton RAID */
+ { PCI_VDEVICE(INTEL, 0x2823), board_ahci }, /* Wellsburg RAID */
+ { PCI_VDEVICE(INTEL, 0x2827), board_ahci }, /* Wellsburg RAID */
{ PCI_VDEVICE(INTEL, 0x8d02), board_ahci }, /* Wellsburg AHCI */
{ PCI_VDEVICE(INTEL, 0x8d04), board_ahci }, /* Wellsburg RAID */
{ PCI_VDEVICE(INTEL, 0x8d06), board_ahci }, /* Wellsburg RAID */
static int prefer_ms_hyperv = 1;
module_param(prefer_ms_hyperv, int, 0);
+MODULE_PARM_DESC(prefer_ms_hyperv,
+ "Prefer Hyper-V paravirtualization drivers instead of ATA, "
+ "0 - Use ATA drivers, "
+ "1 (Default) - Use the paravirtualization drivers.");
static void piix_ignore_devices_quirk(struct ata_host *host)
{
handle = ata_dev_acpi_handle(dev);
if (handle)
- acpi_dev_pm_remove_dependent(handle, &sdev->sdev_gendev);
+ acpi_dev_pm_add_dependent(handle, &sdev->sdev_gendev);
}
static void ata_acpi_unregister_power_resource(struct ata_device *dev)
},
};
-static int __init pata_s3c_init(void)
-{
- return platform_driver_probe(&pata_s3c_driver, pata_s3c_probe);
-}
-
-static void __exit pata_s3c_exit(void)
-{
- platform_driver_unregister(&pata_s3c_driver);
-}
-
-module_init(pata_s3c_init);
-module_exit(pata_s3c_exit);
+module_platform_driver_probe(pata_s3c_driver, pata_s3c_probe);
MODULE_AUTHOR("Abhilash Kesavan, <a.kesavan@samsung.com>");
MODULE_DESCRIPTION("low-level driver for Samsung PATA controller");
if (hcr_base)
iounmap(hcr_base);
- if (host_priv)
- kfree(host_priv);
+ kfree(host_priv);
return retval;
}
extern int platform_bus_init(void);
extern void cpu_dev_init(void);
+struct kobject *virtual_device_parent(struct device *dev);
+
extern int bus_add_device(struct device *dev);
extern void bus_probe_device(struct device *dev);
extern void bus_remove_device(struct device *dev);
{
kfree(dev);
}
-/**
- * subsys_system_register - register a subsystem at /sys/devices/system/
- * @subsys: system subsystem
- * @groups: default attributes for the root device
- *
- * All 'system' subsystems have a /sys/devices/system/<name> root device
- * with the name of the subsystem. The root device can carry subsystem-
- * wide attributes. All registered devices are below this single root
- * device and are named after the subsystem with a simple enumeration
- * number appended. The registered devices are not explicitely named;
- * only 'id' in the device needs to be set.
- *
- * Do not use this interface for anything new, it exists for compatibility
- * with bad ideas only. New subsystems should use plain subsystems; and
- * add the subsystem-wide attributes should be added to the subsystem
- * directory itself and not some create fake root-device placed in
- * /sys/devices/system/<name>.
- */
-int subsys_system_register(struct bus_type *subsys,
- const struct attribute_group **groups)
+
+static int subsys_register(struct bus_type *subsys,
+ const struct attribute_group **groups,
+ struct kobject *parent_of_root)
{
struct device *dev;
int err;
if (err < 0)
goto err_name;
- dev->kobj.parent = &system_kset->kobj;
+ dev->kobj.parent = parent_of_root;
dev->groups = groups;
dev->release = system_root_device_release;
bus_unregister(subsys);
return err;
}
+
+/**
+ * subsys_system_register - register a subsystem at /sys/devices/system/
+ * @subsys: system subsystem
+ * @groups: default attributes for the root device
+ *
+ * All 'system' subsystems have a /sys/devices/system/<name> root device
+ * with the name of the subsystem. The root device can carry subsystem-
+ * wide attributes. All registered devices are below this single root
+ * device and are named after the subsystem with a simple enumeration
+ * number appended. The registered devices are not explicitly named;
+ * only 'id' in the device needs to be set.
+ *
+ * Do not use this interface for anything new; it exists only for
+ * compatibility with bad ideas. New subsystems should use plain
+ * subsystems, and subsystem-wide attributes should be added to the
+ * subsystem directory itself rather than to a fake root device placed
+ * in /sys/devices/system/<name>.
+ */
+int subsys_system_register(struct bus_type *subsys,
+ const struct attribute_group **groups)
+{
+ return subsys_register(subsys, groups, &system_kset->kobj);
+}
EXPORT_SYMBOL_GPL(subsys_system_register);
+/**
+ * subsys_virtual_register - register a subsystem at /sys/devices/virtual/
+ * @subsys: virtual subsystem
+ * @groups: default attributes for the root device
+ *
+ * All 'virtual' subsystems have a /sys/devices/virtual/<name> root device
+ * with the name of the subsystem. The root device can carry subsystem-wide
+ * attributes. All registered devices are below this single root device.
+ * There's no restriction on device naming. This is for kernel software
+ * constructs which need a sysfs interface.
+ */
+int subsys_virtual_register(struct bus_type *subsys,
+ const struct attribute_group **groups)
+{
+ struct kobject *virtual_dir;
+
+ virtual_dir = virtual_device_parent(NULL);
+ if (!virtual_dir)
+ return -ENOMEM;
+
+ return subsys_register(subsys, groups, virtual_dir);
+}
+
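A hypothetical caller (names invented for illustration) would register a purely software bus under /sys/devices/virtual/ like this:

	static struct bus_type example_subsys = {
		.name = "example",
	};

	static int __init example_subsys_init(void)
	{
		/* creates /sys/devices/virtual/example as the root device */
		return subsys_virtual_register(&example_subsys, NULL);
	}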
int __init buses_init(void)
{
bus_kset = kset_create_and_add("bus", &bus_uevent_ops, NULL);
set_dev_node(dev, -1);
}
-static struct kobject *virtual_device_parent(struct device *dev)
+struct kobject *virtual_device_parent(struct device *dev)
{
static struct kobject *virtual_dir = NULL;
If unsure, say N.
config BLK_DEV_RSXX
- tristate "RamSam PCIe Flash SSD Device Driver"
+ tristate "IBM FlashSystem 70/80 PCIe SSD Device Driver"
depends on PCI
help
Device driver for IBM's high speed PCIe SSD
- storage devices: RamSan-70 and RamSan-80.
+ storage devices: FlashSystem-70 and FlashSystem-80.
To compile this driver as a module, choose M here: the
module will be called rsxx.
if (rc)
return rc;
h->cfgtable = remap_pci_mem(pci_resource_start(h->pdev,
- cfg_base_addr_index) + cfg_offset, sizeof(h->cfgtable));
+ cfg_base_addr_index) + cfg_offset, sizeof(*h->cfgtable));
if (!h->cfgtable)
return -ENOMEM;
rc = write_driver_ver_to_cfgtable(h->cfgtable);
lo->lo_state = Lo_unbound;
/* This is safe: open() is still holding a reference. */
module_put(THIS_MODULE);
- if (lo->lo_flags & LO_FLAGS_PARTSCAN && bdev)
- ioctl_by_bdev(bdev, BLKRRPART, 0);
lo->lo_flags = 0;
if (!part_shift)
lo->lo_disk->flags |= GENHD_FL_NO_PART_SCAN;
mutex_unlock(&lo->lo_ctl_mutex);
+
+ /*
+ * Remove all partitions, since BLKRRPART won't remove user-added
+ * partitions when max_part=0
+ */
+ if (bdev) {
+ struct disk_part_iter piter;
+ struct hd_struct *part;
+
+ mutex_lock_nested(&bdev->bd_mutex, 1);
+ invalidate_partition(bdev->bd_disk, 0);
+ disk_part_iter_init(&piter, bdev->bd_disk,
+ DISK_PITER_INCL_EMPTY);
+ while ((part = disk_part_iter_next(&piter)))
+ delete_partition(bdev->bd_disk, part->partno);
+ disk_part_iter_exit(&piter);
+ mutex_unlock(&bdev->bd_mutex);
+ }
+
/*
* Need not hold lo_ctl_mutex to fput backing file.
* Calling fput holding lo_ctl_mutex triggers a circular
goto out_free_dev;
i = err;
+ err = -ENOMEM;
lo->lo_queue = blk_alloc_queue(GFP_KERNEL);
if (!lo->lo_queue)
goto out_free_dev;
gpio_direction_output(host->rst, 1);
/* reset out pin */
- if (!(prv_data->dev_attr & MG_DEV_MASK))
+ if (!(prv_data->dev_attr & MG_DEV_MASK)) {
+ err = -EINVAL;
goto probe_err_3a;
+ }
if (prv_data->dev_attr != MG_BOOT_DEV) {
rsc = platform_get_resource_byname(plat_dev, IORESOURCE_IO,
dd->isr_workq = create_workqueue(dd->workq_name);
if (!dd->isr_workq) {
dev_warn(&pdev->dev, "Can't create wq %d\n", dd->instance);
+ rv = -ENOMEM;
goto block_initialize_err;
}
INIT_WORK(&dd->work[7].work, mtip_workq_sdbf7);
pci_set_master(pdev);
- if (pci_enable_msi(pdev)) {
+ rv = pci_enable_msi(pdev);
+ if (rv) {
dev_warn(&pdev->dev,
"Unable to enable MSI interrupt.\n");
goto block_initialize_err;
BUILD_BUG_ON(sizeof(struct nvme_id_ctrl) != 4096);
BUILD_BUG_ON(sizeof(struct nvme_id_ns) != 4096);
BUILD_BUG_ON(sizeof(struct nvme_lba_range_type) != 64);
+ BUILD_BUG_ON(sizeof(struct nvme_smart_log) != 512);
}
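BUILD_BUG_ON(e) fails the build when 'e' is true, so mismatched on-wire structure layouts are caught at compile time rather than at runtime. In standard C11 the equivalent sketch would be:

	_Static_assert(sizeof(struct nvme_smart_log) == 512,
		       "SMART log page must be 512 bytes");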
typedef void (*nvme_completion_fn)(struct nvme_dev *, void *,
*fn = special_completion;
return CMD_CTX_INVALID;
}
- *fn = info[cmdid].fn;
+ if (fn)
+ *fn = info[cmdid].fn;
ctx = info[cmdid].ctx;
info[cmdid].fn = special_completion;
info[cmdid].ctx = CMD_CTX_COMPLETED;
iod->offset = offsetof(struct nvme_iod, sg[nseg]);
iod->npages = -1;
iod->length = nbytes;
+ iod->nents = 0;
}
return iod;
struct bio *bio = iod->private;
u16 status = le16_to_cpup(&cqe->status) >> 1;
- dma_unmap_sg(&dev->pci_dev->dev, iod->sg, iod->nents,
+ if (iod->nents)
+ dma_unmap_sg(&dev->pci_dev->dev, iod->sg, iod->nents,
bio_data_dir(bio) ? DMA_TO_DEVICE : DMA_FROM_DEVICE);
nvme_free_iod(dev, iod);
if (status) {
result = nvme_map_bio(nvmeq->q_dmadev, iod, bio, dma_dir, psegs);
if (result < 0)
- goto free_iod;
+ goto free_cmdid;
length = result;
cmnd->rw.command_id = cmdid;
return 0;
+ free_cmdid:
+ free_cmdid(nvmeq, cmdid, NULL);
free_iod:
nvme_free_iod(nvmeq->dev, iod);
nomem:
return nvme_submit_admin_cmd(dev, &c, NULL);
}
-static int nvme_get_features(struct nvme_dev *dev, unsigned fid,
- unsigned nsid, dma_addr_t dma_addr)
+static int nvme_get_features(struct nvme_dev *dev, unsigned fid, unsigned nsid,
+ dma_addr_t dma_addr, u32 *result)
{
struct nvme_command c;
c.features.prp1 = cpu_to_le64(dma_addr);
c.features.fid = cpu_to_le32(fid);
- return nvme_submit_admin_cmd(dev, &c, NULL);
+ return nvme_submit_admin_cmd(dev, &c, result);
}
static int nvme_set_features(struct nvme_dev *dev, unsigned fid,
spin_lock_irq(&nvmeq->q_lock);
nvme_cancel_ios(nvmeq, false);
+ while (bio_list_peek(&nvmeq->sq_cong)) {
+ struct bio *bio = bio_list_pop(&nvmeq->sq_cong);
+ bio_endio(bio, -EIO);
+ }
spin_unlock_irq(&nvmeq->q_lock);
irq_set_affinity_hint(vector, NULL);
if (length != cmd.data_len)
status = -ENOMEM;
else
- status = nvme_submit_admin_cmd(dev, &c, NULL);
+ status = nvme_submit_admin_cmd(dev, &c, &cmd.result);
if (cmd.data_len) {
nvme_unmap_user_pages(dev, cmd.opcode & 1, iod);
nvme_free_iod(dev, iod);
}
+
+ if (!status && copy_to_user(&ucmd->result, &cmd.result,
+ sizeof(cmd.result)))
+ status = -EFAULT;
+
return status;
}
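With the result copied back, userspace can read the command's completion value directly from the passthrough structure. A hedged sketch, assuming the NVME_IOCTL_ADMIN_CMD interface and struct nvme_admin_cmd from the uapi header (the opcode and cdw10 values are illustrative):

	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <linux/nvme.h>

	int read_queue_count(int fd)
	{
		struct nvme_admin_cmd cmd = {
			.opcode = 0x0a,	/* Get Features (illustrative) */
			.cdw10  = 0x07,	/* Number of Queues feature (illustrative) */
		};

		if (ioctl(fd, NVME_IOCTL_ADMIN_CMD, &cmd) != 0)
			return -1;
		/* cmd.result now holds the completion dword copied out above */
		printf("result: 0x%x\n", cmd.result);
		return 0;
	}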
continue;
res = nvme_get_features(dev, NVME_FEAT_LBA_RANGE, i,
- dma_addr + 4096);
+ dma_addr + 4096, NULL);
if (res)
- continue;
+ memset(mem + 4096, 0, 4096);
ns = nvme_alloc_ns(dev, i, mem, mem + 4096);
if (ns)
return atomic_read(&obj_request->done) != 0;
}
+static void
+rbd_img_obj_request_read_callback(struct rbd_obj_request *obj_request)
+{
+ dout("%s: obj %p img %p result %d %llu/%llu\n", __func__,
+ obj_request, obj_request->img_request, obj_request->result,
+ obj_request->xferred, obj_request->length);
+ /*
+ * ENOENT means a hole in the image. We zero-fill the
+ * entire length of the request. A short read also implies
+ * zero-fill to the end of the request. Either way we
+ * update the xferred count to indicate the whole request
+ * was satisfied.
+ */
+ BUG_ON(obj_request->type != OBJ_REQUEST_BIO);
+ if (obj_request->result == -ENOENT) {
+ zero_bio_chain(obj_request->bio_list, 0);
+ obj_request->result = 0;
+ obj_request->xferred = obj_request->length;
+ } else if (obj_request->xferred < obj_request->length &&
+ !obj_request->result) {
+ zero_bio_chain(obj_request->bio_list, obj_request->xferred);
+ obj_request->xferred = obj_request->length;
+ }
+ obj_request_done_set(obj_request);
+}
+
static void rbd_obj_request_complete(struct rbd_obj_request *obj_request)
{
dout("%s: obj %p cb %p\n", __func__, obj_request,
{
dout("%s: obj %p result %d %llu/%llu\n", __func__, obj_request,
obj_request->result, obj_request->xferred, obj_request->length);
- /*
- * ENOENT means a hole in the object. We zero-fill the
- * entire length of the request. A short read also implies
- * zero-fill to the end of the request. Either way we
- * update the xferred count to indicate the whole request
- * was satisfied.
- */
- if (obj_request->result == -ENOENT) {
- zero_bio_chain(obj_request->bio_list, 0);
- obj_request->result = 0;
- obj_request->xferred = obj_request->length;
- } else if (obj_request->xferred < obj_request->length &&
- !obj_request->result) {
- zero_bio_chain(obj_request->bio_list, obj_request->xferred);
- obj_request->xferred = obj_request->length;
- }
- obj_request_done_set(obj_request);
+ if (obj_request->img_request)
+ rbd_img_obj_request_read_callback(obj_request);
+ else
+ obj_request_done_set(obj_request);
}
static void rbd_osd_write_callback(struct rbd_obj_request *obj_request)
obj-$(CONFIG_BLK_DEV_RSXX) += rsxx.o
-rsxx-y := config.o core.o cregs.o dev.o dma.o
+rsxx-objs := config.o core.o cregs.o dev.o dma.o
#include "rsxx_priv.h"
#include "rsxx_cfg.h"
-static void initialize_config(void *config)
+static void initialize_config(struct rsxx_card_cfg *cfg)
{
- struct rsxx_card_cfg *cfg = config;
-
cfg->hdr.version = RSXX_CFG_VERSION;
cfg->data.block_size = RSXX_HW_BLK_SIZE;
cfg->data.stripe_size = RSXX_HW_BLK_SIZE;
- cfg->data.vendor_id = RSXX_VENDOR_ID_TMS_IBM;
+ cfg->data.vendor_id = RSXX_VENDOR_ID_IBM;
cfg->data.cache_order = (-1);
cfg->data.intr_coal.mode = RSXX_INTR_COAL_DISABLED;
cfg->data.intr_coal.count = 0;
} else {
dev_info(CARD_TO_DEV(card),
"Initializing card configuration.\n");
- initialize_config(card);
+ initialize_config(&card->config);
st = rsxx_save_config(card);
if (st)
return st;
#include <linux/reboot.h>
#include <linux/slab.h>
#include <linux/bitops.h>
+#include <linux/delay.h>
#include <linux/genhd.h>
#include <linux/idr.h>
#define NO_LEGACY 0
-MODULE_DESCRIPTION("IBM RamSan PCIe Flash SSD Device Driver");
-MODULE_AUTHOR("IBM <support@ramsan.com>");
+MODULE_DESCRIPTION("IBM FlashSystem 70/80 PCIe SSD Device Driver");
+MODULE_AUTHOR("Joshua Morris/Philip Kelleher, IBM");
MODULE_LICENSE("GPL");
MODULE_VERSION(DRIVER_VERSION);
static DEFINE_SPINLOCK(rsxx_ida_lock);
/*----------------- Interrupt Control & Handling -------------------*/
+
+static void rsxx_mask_interrupts(struct rsxx_cardinfo *card)
+{
+ card->isr_mask = 0;
+ card->ier_mask = 0;
+}
+
static void __enable_intr(unsigned int *mask, unsigned int intr)
{
*mask |= intr;
*/
void rsxx_enable_ier(struct rsxx_cardinfo *card, unsigned int intr)
{
- if (unlikely(card->halt))
+ if (unlikely(card->halt) ||
+ unlikely(card->eeh_state))
return;
__enable_intr(&card->ier_mask, intr);
void rsxx_disable_ier(struct rsxx_cardinfo *card, unsigned int intr)
{
+ if (unlikely(card->eeh_state))
+ return;
+
__disable_intr(&card->ier_mask, intr);
iowrite32(card->ier_mask, card->regmap + IER);
}
void rsxx_enable_ier_and_isr(struct rsxx_cardinfo *card,
unsigned int intr)
{
- if (unlikely(card->halt))
+ if (unlikely(card->halt) ||
+ unlikely(card->eeh_state))
return;
__enable_intr(&card->isr_mask, intr);
void rsxx_disable_ier_and_isr(struct rsxx_cardinfo *card,
unsigned int intr)
{
+ if (unlikely(card->eeh_state))
+ return;
+
__disable_intr(&card->isr_mask, intr);
__disable_intr(&card->ier_mask, intr);
iowrite32(card->ier_mask, card->regmap + IER);
do {
reread_isr = 0;
+ if (unlikely(card->eeh_state))
+ break;
+
isr = ioread32(card->regmap + ISR);
if (isr == 0xffffffff) {
/*
}
/*----------------- Card Event Handler -------------------*/
-static char *rsxx_card_state_to_str(unsigned int state)
+static const char * const rsxx_card_state_to_str(unsigned int state)
{
- static char *state_strings[] = {
+ static const char * const state_strings[] = {
"Unknown", "Shutdown", "Starting", "Formatting",
"Uninitialized", "Good", "Shutting Down",
"Fault", "Read Only Fault", "dStroying"
return 0;
}
+static int rsxx_eeh_frozen(struct pci_dev *dev)
+{
+ struct rsxx_cardinfo *card = pci_get_drvdata(dev);
+ int i;
+ int st;
+
+ dev_warn(&dev->dev, "IBM FlashSystem PCI: preparing for slot reset.\n");
+
+ card->eeh_state = 1;
+ rsxx_mask_interrupts(card);
+
+ /*
+	 * We need to guarantee that the write to eeh_state and the masking
+	 * of interrupts are not reordered. This prevents a possible race
+	 * condition with the EEH code.
+ */
+ wmb();
+
+ pci_disable_device(dev);
+
+ st = rsxx_eeh_save_issued_dmas(card);
+ if (st)
+ return st;
+
+ rsxx_eeh_save_issued_creg(card);
+
+ for (i = 0; i < card->n_targets; i++) {
+ if (card->ctrl[i].status.buf)
+ pci_free_consistent(card->dev, STATUS_BUFFER_SIZE8,
+ card->ctrl[i].status.buf,
+ card->ctrl[i].status.dma_addr);
+ if (card->ctrl[i].cmd.buf)
+ pci_free_consistent(card->dev, COMMAND_BUFFER_SIZE8,
+ card->ctrl[i].cmd.buf,
+ card->ctrl[i].cmd.dma_addr);
+ }
+
+ return 0;
+}
+
+static void rsxx_eeh_failure(struct pci_dev *dev)
+{
+ struct rsxx_cardinfo *card = pci_get_drvdata(dev);
+ int i;
+
+ dev_err(&dev->dev, "IBM FlashSystem PCI: disabling failed card.\n");
+
+ card->eeh_state = 1;
+
+ for (i = 0; i < card->n_targets; i++)
+ del_timer_sync(&card->ctrl[i].activity_timer);
+
+ rsxx_eeh_cancel_dmas(card);
+}
+
+static int rsxx_eeh_fifo_flush_poll(struct rsxx_cardinfo *card)
+{
+ unsigned int status;
+ int iter = 0;
+
+ /* We need to wait for the hardware to reset */
+ while (iter++ < 10) {
+ status = ioread32(card->regmap + PCI_RECONFIG);
+
+ if (status & RSXX_FLUSH_BUSY) {
+ ssleep(1);
+ continue;
+ }
+
+ if (status & RSXX_FLUSH_TIMEOUT)
+ dev_warn(CARD_TO_DEV(card), "HW: flash controller timeout\n");
+ return 0;
+ }
+
+ /* Hardware failed resetting itself. */
+ return -1;
+}
+
+static pci_ers_result_t rsxx_error_detected(struct pci_dev *dev,
+ enum pci_channel_state error)
+{
+ int st;
+
+ if (dev->revision < RSXX_EEH_SUPPORT)
+ return PCI_ERS_RESULT_NONE;
+
+ if (error == pci_channel_io_perm_failure) {
+ rsxx_eeh_failure(dev);
+ return PCI_ERS_RESULT_DISCONNECT;
+ }
+
+ st = rsxx_eeh_frozen(dev);
+ if (st) {
+ dev_err(&dev->dev, "Slot reset setup failed\n");
+ rsxx_eeh_failure(dev);
+ return PCI_ERS_RESULT_DISCONNECT;
+ }
+
+ return PCI_ERS_RESULT_NEED_RESET;
+}
+
+static pci_ers_result_t rsxx_slot_reset(struct pci_dev *dev)
+{
+ struct rsxx_cardinfo *card = pci_get_drvdata(dev);
+ unsigned long flags;
+ int i;
+ int st;
+
+ dev_warn(&dev->dev,
+ "IBM FlashSystem PCI: recovering from slot reset.\n");
+
+ st = pci_enable_device(dev);
+ if (st)
+ goto failed_hw_setup;
+
+ pci_set_master(dev);
+
+ st = rsxx_eeh_fifo_flush_poll(card);
+ if (st)
+ goto failed_hw_setup;
+
+ rsxx_dma_queue_reset(card);
+
+ for (i = 0; i < card->n_targets; i++) {
+ st = rsxx_hw_buffers_init(dev, &card->ctrl[i]);
+ if (st)
+ goto failed_hw_buffers_init;
+ }
+
+ if (card->config_valid)
+ rsxx_dma_configure(card);
+
+	/* Reading the ISR register clears any spurious interrupts */
+ st = ioread32(card->regmap + ISR);
+
+ card->eeh_state = 0;
+
+ st = rsxx_eeh_remap_dmas(card);
+ if (st)
+ goto failed_remap_dmas;
+
+ spin_lock_irqsave(&card->irq_lock, flags);
+ if (card->n_targets & RSXX_MAX_TARGETS)
+ rsxx_enable_ier_and_isr(card, CR_INTR_ALL_G);
+ else
+ rsxx_enable_ier_and_isr(card, CR_INTR_ALL_C);
+ spin_unlock_irqrestore(&card->irq_lock, flags);
+
+ rsxx_kick_creg_queue(card);
+
+ for (i = 0; i < card->n_targets; i++) {
+ spin_lock(&card->ctrl[i].queue_lock);
+ if (list_empty(&card->ctrl[i].queue)) {
+ spin_unlock(&card->ctrl[i].queue_lock);
+ continue;
+ }
+ spin_unlock(&card->ctrl[i].queue_lock);
+
+ queue_work(card->ctrl[i].issue_wq,
+ &card->ctrl[i].issue_dma_work);
+ }
+
+ dev_info(&dev->dev, "IBM FlashSystem PCI: recovery complete.\n");
+
+ return PCI_ERS_RESULT_RECOVERED;
+
+failed_hw_buffers_init:
+failed_remap_dmas:
+ for (i = 0; i < card->n_targets; i++) {
+ if (card->ctrl[i].status.buf)
+ pci_free_consistent(card->dev,
+ STATUS_BUFFER_SIZE8,
+ card->ctrl[i].status.buf,
+ card->ctrl[i].status.dma_addr);
+ if (card->ctrl[i].cmd.buf)
+ pci_free_consistent(card->dev,
+ COMMAND_BUFFER_SIZE8,
+ card->ctrl[i].cmd.buf,
+ card->ctrl[i].cmd.dma_addr);
+ }
+failed_hw_setup:
+ rsxx_eeh_failure(dev);
+ return PCI_ERS_RESULT_DISCONNECT;
+
+}
+
/*----------------- Driver Initialization & Setup -------------------*/
/* Returns: 0 if the driver is compatible with the device
-1 if the driver is NOT compatible with the device */
spin_lock_init(&card->irq_lock);
card->halt = 0;
+ card->eeh_state = 0;
spin_lock_irq(&card->irq_lock);
rsxx_disable_ier_and_isr(card, CR_INTR_ALL);
rsxx_disable_ier_and_isr(card, CR_INTR_EVENT);
spin_unlock_irqrestore(&card->irq_lock, flags);
- /* Prevent work_structs from re-queuing themselves. */
- card->halt = 1;
-
cancel_work_sync(&card->event_work);
rsxx_destroy_dev(card);
spin_lock_irqsave(&card->irq_lock, flags);
rsxx_disable_ier_and_isr(card, CR_INTR_ALL);
spin_unlock_irqrestore(&card->irq_lock, flags);
+
+ /* Prevent work_structs from re-queuing themselves. */
+ card->halt = 1;
+
free_irq(dev->irq, card);
if (!force_legacy)
card_shutdown(card);
}
+static const struct pci_error_handlers rsxx_err_handler = {
+ .error_detected = rsxx_error_detected,
+ .slot_reset = rsxx_slot_reset,
+};
+
static DEFINE_PCI_DEVICE_TABLE(rsxx_pci_ids) = {
- {PCI_DEVICE(PCI_VENDOR_ID_TMS_IBM, PCI_DEVICE_ID_RS70_FLASH)},
- {PCI_DEVICE(PCI_VENDOR_ID_TMS_IBM, PCI_DEVICE_ID_RS70D_FLASH)},
- {PCI_DEVICE(PCI_VENDOR_ID_TMS_IBM, PCI_DEVICE_ID_RS80_FLASH)},
- {PCI_DEVICE(PCI_VENDOR_ID_TMS_IBM, PCI_DEVICE_ID_RS81_FLASH)},
+ {PCI_DEVICE(PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_FS70_FLASH)},
+ {PCI_DEVICE(PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_FS80_FLASH)},
{0,},
};
.remove = rsxx_pci_remove,
.suspend = rsxx_pci_suspend,
.shutdown = rsxx_pci_shutdown,
+ .err_handler = &rsxx_err_handler,
};
static int __init rsxx_core_init(void)
#error Unknown endianess!!! Aborting...
#endif
-static void copy_to_creg_data(struct rsxx_cardinfo *card,
+static int copy_to_creg_data(struct rsxx_cardinfo *card,
int cnt8,
void *buf,
unsigned int stream)
int i = 0;
u32 *data = buf;
+ if (unlikely(card->eeh_state))
+ return -EIO;
+
for (i = 0; cnt8 > 0; i++, cnt8 -= 4) {
/*
* Firmware implementation makes it necessary to byte swap on
else
iowrite32(data[i], card->regmap + CREG_DATA(i));
}
+
+ return 0;
}
-static void copy_from_creg_data(struct rsxx_cardinfo *card,
+static int copy_from_creg_data(struct rsxx_cardinfo *card,
int cnt8,
void *buf,
unsigned int stream)
int i = 0;
u32 *data = buf;
+ if (unlikely(card->eeh_state))
+ return -EIO;
+
for (i = 0; cnt8 > 0; i++, cnt8 -= 4) {
/*
* Firmware implementation makes it necessary to byte swap on
else
data[i] = ioread32(card->regmap + CREG_DATA(i));
}
-}
-
-static struct creg_cmd *pop_active_cmd(struct rsxx_cardinfo *card)
-{
- struct creg_cmd *cmd;
- /*
- * Spin lock is needed because this can be called in atomic/interrupt
- * context.
- */
- spin_lock_bh(&card->creg_ctrl.lock);
- cmd = card->creg_ctrl.active_cmd;
- card->creg_ctrl.active_cmd = NULL;
- spin_unlock_bh(&card->creg_ctrl.lock);
-
- return cmd;
+ return 0;
}
static void creg_issue_cmd(struct rsxx_cardinfo *card, struct creg_cmd *cmd)
{
+ int st;
+
+ if (unlikely(card->eeh_state))
+ return;
+
iowrite32(cmd->addr, card->regmap + CREG_ADD);
iowrite32(cmd->cnt8, card->regmap + CREG_CNT);
if (cmd->op == CREG_OP_WRITE) {
- if (cmd->buf)
- copy_to_creg_data(card, cmd->cnt8,
- cmd->buf, cmd->stream);
+ if (cmd->buf) {
+ st = copy_to_creg_data(card, cmd->cnt8,
+ cmd->buf, cmd->stream);
+ if (st)
+ return;
+ }
}
- /*
- * Data copy must complete before initiating the command. This is
- * needed for weakly ordered processors (i.e. PowerPC), so that all
- * neccessary registers are written before we kick the hardware.
- */
- wmb();
+ if (unlikely(card->eeh_state))
+ return;
/* Setting the valid bit will kick off the command. */
iowrite32(cmd->op, card->regmap + CREG_CMD);
cmd->cb_private = cb_private;
cmd->status = 0;
- spin_lock(&card->creg_ctrl.lock);
+ spin_lock_bh(&card->creg_ctrl.lock);
list_add_tail(&cmd->list, &card->creg_ctrl.queue);
card->creg_ctrl.q_depth++;
creg_kick_queue(card);
- spin_unlock(&card->creg_ctrl.lock);
+ spin_unlock_bh(&card->creg_ctrl.lock);
return 0;
}
struct rsxx_cardinfo *card = (struct rsxx_cardinfo *) data;
struct creg_cmd *cmd;
- cmd = pop_active_cmd(card);
+ spin_lock(&card->creg_ctrl.lock);
+ cmd = card->creg_ctrl.active_cmd;
+ card->creg_ctrl.active_cmd = NULL;
+ spin_unlock(&card->creg_ctrl.lock);
+
if (cmd == NULL) {
card->creg_ctrl.creg_stats.creg_timeout++;
dev_warn(CARD_TO_DEV(card),
if (del_timer_sync(&card->creg_ctrl.cmd_timer) == 0)
card->creg_ctrl.creg_stats.failed_cancel_timer++;
- cmd = pop_active_cmd(card);
+ spin_lock_bh(&card->creg_ctrl.lock);
+ cmd = card->creg_ctrl.active_cmd;
+ card->creg_ctrl.active_cmd = NULL;
+ spin_unlock_bh(&card->creg_ctrl.lock);
+
if (cmd == NULL) {
dev_err(CARD_TO_DEV(card),
"Spurious creg interrupt!\n");
goto creg_done;
}
- copy_from_creg_data(card, cnt8, cmd->buf, cmd->stream);
+ st = copy_from_creg_data(card, cnt8, cmd->buf, cmd->stream);
}
creg_done:
kmem_cache_free(creg_cmd_pool, cmd);
- spin_lock(&card->creg_ctrl.lock);
+ spin_lock_bh(&card->creg_ctrl.lock);
card->creg_ctrl.active = 0;
creg_kick_queue(card);
- spin_unlock(&card->creg_ctrl.lock);
+ spin_unlock_bh(&card->creg_ctrl.lock);
}
static void creg_reset(struct rsxx_cardinfo *card)
"Resetting creg interface for recovery\n");
/* Cancel outstanding commands */
- spin_lock(&card->creg_ctrl.lock);
+ spin_lock_bh(&card->creg_ctrl.lock);
list_for_each_entry_safe(cmd, tmp, &card->creg_ctrl.queue, list) {
list_del(&cmd->list);
card->creg_ctrl.q_depth--;
card->creg_ctrl.active = 0;
}
- spin_unlock(&card->creg_ctrl.lock);
+ spin_unlock_bh(&card->creg_ctrl.lock);
card->creg_ctrl.reset = 0;
spin_lock_irqsave(&card->irq_lock, flags);
return st;
/*
- * This timeout is neccessary for unresponsive hardware. The additional
+ * This timeout is necessary for unresponsive hardware. The additional
* 20 seconds is used to guarantee that each creg request has time to
* complete.
*/
- timeout = msecs_to_jiffies((CREG_TIMEOUT_MSEC *
- card->creg_ctrl.q_depth) + 20000);
+ timeout = msecs_to_jiffies(CREG_TIMEOUT_MSEC *
+ card->creg_ctrl.q_depth + 20000);
/*
* The creg interface is guaranteed to complete. It has a timeout
return 0;
}
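Worked example: assuming CREG_TIMEOUT_MSEC is 10000 (the constant is defined elsewhere in the driver and not shown here), a queue depth of 2 yields msecs_to_jiffies(10000 * 2 + 20000), i.e. a 40-second timeout.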
+void rsxx_eeh_save_issued_creg(struct rsxx_cardinfo *card)
+{
+ struct creg_cmd *cmd = NULL;
+
+ cmd = card->creg_ctrl.active_cmd;
+ card->creg_ctrl.active_cmd = NULL;
+
+ if (cmd) {
+ del_timer_sync(&card->creg_ctrl.cmd_timer);
+
+ spin_lock_bh(&card->creg_ctrl.lock);
+ list_add(&cmd->list, &card->creg_ctrl.queue);
+ card->creg_ctrl.q_depth++;
+ card->creg_ctrl.active = 0;
+ spin_unlock_bh(&card->creg_ctrl.lock);
+ }
+}
+
+void rsxx_kick_creg_queue(struct rsxx_cardinfo *card)
+{
+ spin_lock_bh(&card->creg_ctrl.lock);
+ if (!list_empty(&card->creg_ctrl.queue))
+ creg_kick_queue(card);
+ spin_unlock_bh(&card->creg_ctrl.lock);
+}
+
/*------------ Initialization & Setup --------------*/
int rsxx_creg_setup(struct rsxx_cardinfo *card)
{
int cnt = 0;
/* Cancel outstanding commands */
- spin_lock(&card->creg_ctrl.lock);
+ spin_lock_bh(&card->creg_ctrl.lock);
list_for_each_entry_safe(cmd, tmp, &card->creg_ctrl.queue, list) {
list_del(&cmd->list);
if (cmd->cb)
"Canceled active creg command\n");
kmem_cache_free(creg_cmd_pool, cmd);
}
- spin_unlock(&card->creg_ctrl.lock);
+ spin_unlock_bh(&card->creg_ctrl.lock);
cancel_work_sync(&card->creg_ctrl.done_work);
}
struct rsxx_dma {
struct list_head list;
u8 cmd;
- unsigned int laddr; /* Logical address on the ramsan */
+ unsigned int laddr; /* Logical address */
struct {
u32 off;
u32 cnt;
HW_STATUS_FAULT = 0x08,
};
-#define STATUS_BUFFER_SIZE8 4096
-#define COMMAND_BUFFER_SIZE8 4096
-
static struct kmem_cache *rsxx_dma_pool;
struct dma_tracker {
return tgt;
}
-static void rsxx_dma_queue_reset(struct rsxx_cardinfo *card)
+void rsxx_dma_queue_reset(struct rsxx_cardinfo *card)
{
/* Reset all DMA Command/Status Queues */
iowrite32(DMA_QUEUE_RESET, card->regmap + RESET);
u32 q_depth = 0;
u32 intr_coal;
- if (card->config.data.intr_coal.mode != RSXX_INTR_COAL_AUTO_TUNE)
+ if (card->config.data.intr_coal.mode != RSXX_INTR_COAL_AUTO_TUNE ||
+ unlikely(card->eeh_state))
return;
for (i = 0; i < card->n_targets; i++)
}
/*----------------- RSXX DMA Handling -------------------*/
-static void rsxx_complete_dma(struct rsxx_cardinfo *card,
+static void rsxx_complete_dma(struct rsxx_dma_ctrl *ctrl,
struct rsxx_dma *dma,
unsigned int status)
{
if (status & DMA_SW_ERR)
- printk_ratelimited(KERN_ERR
- "SW Error in DMA(cmd x%02x, laddr x%08x)\n",
- dma->cmd, dma->laddr);
+ ctrl->stats.dma_sw_err++;
if (status & DMA_HW_FAULT)
- printk_ratelimited(KERN_ERR
- "HW Fault in DMA(cmd x%02x, laddr x%08x)\n",
- dma->cmd, dma->laddr);
+ ctrl->stats.dma_hw_fault++;
if (status & DMA_CANCELLED)
- printk_ratelimited(KERN_ERR
- "DMA Cancelled(cmd x%02x, laddr x%08x)\n",
- dma->cmd, dma->laddr);
+ ctrl->stats.dma_cancelled++;
if (dma->dma_addr)
- pci_unmap_page(card->dev, dma->dma_addr, get_dma_size(dma),
+ pci_unmap_page(ctrl->card->dev, dma->dma_addr,
+ get_dma_size(dma),
dma->cmd == HW_CMD_BLK_WRITE ?
PCI_DMA_TODEVICE :
PCI_DMA_FROMDEVICE);
if (dma->cb)
- dma->cb(card, dma->cb_data, status ? 1 : 0);
+ dma->cb(ctrl->card, dma->cb_data, status ? 1 : 0);
kmem_cache_free(rsxx_dma_pool, dma);
}
if (requeue_cmd)
rsxx_requeue_dma(ctrl, dma);
else
- rsxx_complete_dma(ctrl->card, dma, status);
+ rsxx_complete_dma(ctrl, dma, status);
}
static void dma_engine_stalled(unsigned long data)
{
struct rsxx_dma_ctrl *ctrl = (struct rsxx_dma_ctrl *)data;
- if (atomic_read(&ctrl->stats.hw_q_depth) == 0)
+ if (atomic_read(&ctrl->stats.hw_q_depth) == 0 ||
+ unlikely(ctrl->card->eeh_state))
return;
if (ctrl->cmd.idx != ioread32(ctrl->regmap + SW_CMD_IDX)) {
ctrl = container_of(work, struct rsxx_dma_ctrl, issue_dma_work);
hw_cmd_buf = ctrl->cmd.buf;
- if (unlikely(ctrl->card->halt))
+ if (unlikely(ctrl->card->halt) ||
+ unlikely(ctrl->card->eeh_state))
return;
while (1) {
*/
if (unlikely(ctrl->card->dma_fault)) {
push_tracker(ctrl->trackers, tag);
- rsxx_complete_dma(ctrl->card, dma, DMA_CANCELLED);
+ rsxx_complete_dma(ctrl, dma, DMA_CANCELLED);
continue;
}
/* Let HW know we've queued commands. */
if (cmds_pending) {
- /*
- * We must guarantee that the CPU writes to 'ctrl->cmd.buf'
- * (which is in PCI-consistent system-memory) from the loop
- * above make it into the coherency domain before the
- * following PIO "trigger" updating the cmd.idx. A WMB is
- * sufficient. We need not explicitly CPU cache-flush since
- * the memory is a PCI-consistent (ie; coherent) mapping.
- */
- wmb();
-
atomic_add(cmds_pending, &ctrl->stats.hw_q_depth);
mod_timer(&ctrl->activity_timer,
jiffies + DMA_ACTIVITY_TIMEOUT);
+
+ if (unlikely(ctrl->card->eeh_state)) {
+ del_timer_sync(&ctrl->activity_timer);
+ return;
+ }
+
iowrite32(ctrl->cmd.idx, ctrl->regmap + SW_CMD_IDX);
}
}
hw_st_buf = ctrl->status.buf;
if (unlikely(ctrl->card->halt) ||
- unlikely(ctrl->card->dma_fault))
+ unlikely(ctrl->card->dma_fault) ||
+ unlikely(ctrl->card->eeh_state))
return;
count = le16_to_cpu(hw_st_buf[ctrl->status.idx].count);
if (status)
rsxx_handle_dma_error(ctrl, dma, status);
else
- rsxx_complete_dma(ctrl->card, dma, 0);
+ rsxx_complete_dma(ctrl, dma, 0);
push_tracker(ctrl->trackers, tag);
/*----------------- DMA Engine Initialization & Setup -------------------*/
+int rsxx_hw_buffers_init(struct pci_dev *dev, struct rsxx_dma_ctrl *ctrl)
+{
+ ctrl->status.buf = pci_alloc_consistent(dev, STATUS_BUFFER_SIZE8,
+ &ctrl->status.dma_addr);
+ ctrl->cmd.buf = pci_alloc_consistent(dev, COMMAND_BUFFER_SIZE8,
+ &ctrl->cmd.dma_addr);
+ if (ctrl->status.buf == NULL || ctrl->cmd.buf == NULL)
+ return -ENOMEM;
+
+ memset(ctrl->status.buf, 0xac, STATUS_BUFFER_SIZE8);
+ iowrite32(lower_32_bits(ctrl->status.dma_addr),
+ ctrl->regmap + SB_ADD_LO);
+ iowrite32(upper_32_bits(ctrl->status.dma_addr),
+ ctrl->regmap + SB_ADD_HI);
+
+ memset(ctrl->cmd.buf, 0x83, COMMAND_BUFFER_SIZE8);
+ iowrite32(lower_32_bits(ctrl->cmd.dma_addr), ctrl->regmap + CB_ADD_LO);
+ iowrite32(upper_32_bits(ctrl->cmd.dma_addr), ctrl->regmap + CB_ADD_HI);
+
+ ctrl->status.idx = ioread32(ctrl->regmap + HW_STATUS_CNT);
+ if (ctrl->status.idx > RSXX_MAX_OUTSTANDING_CMDS) {
+ dev_crit(&dev->dev, "Failed reading status cnt x%x\n",
+ ctrl->status.idx);
+ return -EINVAL;
+ }
+ iowrite32(ctrl->status.idx, ctrl->regmap + HW_STATUS_CNT);
+ iowrite32(ctrl->status.idx, ctrl->regmap + SW_STATUS_CNT);
+
+ ctrl->cmd.idx = ioread32(ctrl->regmap + HW_CMD_IDX);
+ if (ctrl->cmd.idx > RSXX_MAX_OUTSTANDING_CMDS) {
+ dev_crit(&dev->dev, "Failed reading cmd cnt x%x\n",
+			ctrl->cmd.idx);
+ return -EINVAL;
+ }
+ iowrite32(ctrl->cmd.idx, ctrl->regmap + HW_CMD_IDX);
+ iowrite32(ctrl->cmd.idx, ctrl->regmap + SW_CMD_IDX);
+
+ return 0;
+}
+
static int rsxx_dma_ctrl_init(struct pci_dev *dev,
struct rsxx_dma_ctrl *ctrl)
{
int i;
+ int st;
memset(&ctrl->stats, 0, sizeof(ctrl->stats));
- ctrl->status.buf = pci_alloc_consistent(dev, STATUS_BUFFER_SIZE8,
- &ctrl->status.dma_addr);
- ctrl->cmd.buf = pci_alloc_consistent(dev, COMMAND_BUFFER_SIZE8,
- &ctrl->cmd.dma_addr);
- if (ctrl->status.buf == NULL || ctrl->cmd.buf == NULL)
- return -ENOMEM;
-
ctrl->trackers = vmalloc(DMA_TRACKER_LIST_SIZE8);
if (!ctrl->trackers)
return -ENOMEM;
INIT_WORK(&ctrl->issue_dma_work, rsxx_issue_dmas);
INIT_WORK(&ctrl->dma_done_work, rsxx_dma_done);
- memset(ctrl->status.buf, 0xac, STATUS_BUFFER_SIZE8);
- iowrite32(lower_32_bits(ctrl->status.dma_addr),
- ctrl->regmap + SB_ADD_LO);
- iowrite32(upper_32_bits(ctrl->status.dma_addr),
- ctrl->regmap + SB_ADD_HI);
-
- memset(ctrl->cmd.buf, 0x83, COMMAND_BUFFER_SIZE8);
- iowrite32(lower_32_bits(ctrl->cmd.dma_addr), ctrl->regmap + CB_ADD_LO);
- iowrite32(upper_32_bits(ctrl->cmd.dma_addr), ctrl->regmap + CB_ADD_HI);
-
- ctrl->status.idx = ioread32(ctrl->regmap + HW_STATUS_CNT);
- if (ctrl->status.idx > RSXX_MAX_OUTSTANDING_CMDS) {
- dev_crit(&dev->dev, "Failed reading status cnt x%x\n",
- ctrl->status.idx);
- return -EINVAL;
- }
- iowrite32(ctrl->status.idx, ctrl->regmap + HW_STATUS_CNT);
- iowrite32(ctrl->status.idx, ctrl->regmap + SW_STATUS_CNT);
-
- ctrl->cmd.idx = ioread32(ctrl->regmap + HW_CMD_IDX);
- if (ctrl->cmd.idx > RSXX_MAX_OUTSTANDING_CMDS) {
- dev_crit(&dev->dev, "Failed reading cmd cnt x%x\n",
- ctrl->status.idx);
- return -EINVAL;
- }
- iowrite32(ctrl->cmd.idx, ctrl->regmap + HW_CMD_IDX);
- iowrite32(ctrl->cmd.idx, ctrl->regmap + SW_CMD_IDX);
-
- wmb();
+ st = rsxx_hw_buffers_init(dev, ctrl);
+ if (st)
+ return st;
return 0;
}
return 0;
}
-static int rsxx_dma_configure(struct rsxx_cardinfo *card)
+int rsxx_dma_configure(struct rsxx_cardinfo *card)
{
u32 intr_coal;
}
}
+int rsxx_eeh_save_issued_dmas(struct rsxx_cardinfo *card)
+{
+ int i;
+ int j;
+ int cnt;
+ struct rsxx_dma *dma;
+ struct list_head *issued_dmas;
+
+ issued_dmas = kzalloc(sizeof(*issued_dmas) * card->n_targets,
+ GFP_KERNEL);
+ if (!issued_dmas)
+ return -ENOMEM;
+
+ for (i = 0; i < card->n_targets; i++) {
+ INIT_LIST_HEAD(&issued_dmas[i]);
+ cnt = 0;
+ for (j = 0; j < RSXX_MAX_OUTSTANDING_CMDS; j++) {
+ dma = get_tracker_dma(card->ctrl[i].trackers, j);
+ if (dma == NULL)
+ continue;
+
+ if (dma->cmd == HW_CMD_BLK_WRITE)
+ card->ctrl[i].stats.writes_issued--;
+ else if (dma->cmd == HW_CMD_BLK_DISCARD)
+ card->ctrl[i].stats.discards_issued--;
+ else
+ card->ctrl[i].stats.reads_issued--;
+
+ list_add_tail(&dma->list, &issued_dmas[i]);
+ push_tracker(card->ctrl[i].trackers, j);
+ cnt++;
+ }
+
+ spin_lock(&card->ctrl[i].queue_lock);
+ list_splice(&issued_dmas[i], &card->ctrl[i].queue);
+
+ atomic_sub(cnt, &card->ctrl[i].stats.hw_q_depth);
+ card->ctrl[i].stats.sw_q_depth += cnt;
+ card->ctrl[i].e_cnt = 0;
+
+ list_for_each_entry(dma, &card->ctrl[i].queue, list) {
+ if (dma->dma_addr)
+ pci_unmap_page(card->dev, dma->dma_addr,
+ get_dma_size(dma),
+ dma->cmd == HW_CMD_BLK_WRITE ?
+ PCI_DMA_TODEVICE :
+ PCI_DMA_FROMDEVICE);
+ }
+ spin_unlock(&card->ctrl[i].queue_lock);
+ }
+
+ kfree(issued_dmas);
+
+ return 0;
+}
+
+void rsxx_eeh_cancel_dmas(struct rsxx_cardinfo *card)
+{
+ struct rsxx_dma *dma;
+ struct rsxx_dma *tmp;
+ int i;
+
+ for (i = 0; i < card->n_targets; i++) {
+ spin_lock(&card->ctrl[i].queue_lock);
+ list_for_each_entry_safe(dma, tmp, &card->ctrl[i].queue, list) {
+ list_del(&dma->list);
+
+ rsxx_complete_dma(&card->ctrl[i], dma, DMA_CANCELLED);
+ }
+ spin_unlock(&card->ctrl[i].queue_lock);
+ }
+}
+
+int rsxx_eeh_remap_dmas(struct rsxx_cardinfo *card)
+{
+ struct rsxx_dma *dma;
+ int i;
+
+ for (i = 0; i < card->n_targets; i++) {
+ spin_lock(&card->ctrl[i].queue_lock);
+ list_for_each_entry(dma, &card->ctrl[i].queue, list) {
+ dma->dma_addr = pci_map_page(card->dev, dma->page,
+ dma->pg_off, get_dma_size(dma),
+ dma->cmd == HW_CMD_BLK_WRITE ?
+ PCI_DMA_TODEVICE :
+ PCI_DMA_FROMDEVICE);
+ if (!dma->dma_addr) {
+ spin_unlock(&card->ctrl[i].queue_lock);
+ kmem_cache_free(rsxx_dma_pool, dma);
+ return -ENOMEM;
+ }
+ }
+ spin_unlock(&card->ctrl[i].queue_lock);
+ }
+
+ return 0;
+}
int rsxx_dma_init(void)
{
/*----------------- IOCTL Definitions -------------------*/
+#define RSXX_MAX_DATA 8
+
struct rsxx_reg_access {
__u32 addr;
__u32 cnt;
__u32 stat;
__u32 stream;
- __u32 data[8];
+ __u32 data[RSXX_MAX_DATA];
};
-#define RSXX_MAX_REG_CNT (8 * (sizeof(__u32)))
+#define RSXX_MAX_REG_CNT (RSXX_MAX_DATA * (sizeof(__u32)))
#define RSXX_IOC_MAGIC 'r'
};
/* Vendor ID Values */
-#define RSXX_VENDOR_ID_TMS_IBM 0
+#define RSXX_VENDOR_ID_IBM 0
#define RSXX_VENDOR_ID_DSI 1
#define RSXX_VENDOR_COUNT 2
struct proc_cmd;
-#define PCI_VENDOR_ID_TMS_IBM 0x15B6
-#define PCI_DEVICE_ID_RS70_FLASH 0x0019
-#define PCI_DEVICE_ID_RS70D_FLASH 0x001A
-#define PCI_DEVICE_ID_RS80_FLASH 0x001C
-#define PCI_DEVICE_ID_RS81_FLASH 0x001E
+#define PCI_DEVICE_ID_FS70_FLASH 0x04A9
+#define PCI_DEVICE_ID_FS80_FLASH 0x04AA
#define RS70_PCI_REV_SUPPORTED 4
#define DRIVER_NAME "rsxx"
-#define DRIVER_VERSION "3.7"
+#define DRIVER_VERSION "4.0"
/* Block size is 4096 */
#define RSXX_HW_BLK_SHIFT 12
#define RSXX_MAX_OUTSTANDING_CMDS 255
#define RSXX_CS_IDX_MASK 0xff
+#define STATUS_BUFFER_SIZE8 4096
+#define COMMAND_BUFFER_SIZE8 4096
+
#define RSXX_MAX_TARGETS 8
struct dma_tracker_list;
u32 discards_failed;
u32 done_rescheduled;
u32 issue_rescheduled;
+ u32 dma_sw_err;
+ u32 dma_hw_fault;
+ u32 dma_cancelled;
u32 sw_q_depth; /* Number of DMAs on the SW queue. */
atomic_t hw_q_depth; /* Number of DMAs queued to HW. */
};
struct rsxx_cardinfo {
struct pci_dev *dev;
unsigned int halt;
+ unsigned int eeh_state;
void __iomem *regmap;
spinlock_t irq_lock;
PERF_RD512_HI = 0xac,
PERF_WR512_LO = 0xb0,
PERF_WR512_HI = 0xb4,
+ PCI_RECONFIG = 0xb8,
};
enum rsxx_intr {
CR_INTR_DMA5 = 0x00000080,
CR_INTR_DMA6 = 0x00000100,
CR_INTR_DMA7 = 0x00000200,
+ CR_INTR_ALL_C = 0x0000003f,
+ CR_INTR_ALL_G = 0x000003ff,
CR_INTR_DMA_ALL = 0x000003f5,
CR_INTR_ALL = 0xffffffff,
};
DMA_QUEUE_RESET = 0x00000001,
};
+enum rsxx_hw_fifo_flush {
+ RSXX_FLUSH_BUSY = 0x00000002,
+ RSXX_FLUSH_TIMEOUT = 0x00000004,
+};
+
enum rsxx_pci_revision {
RSXX_DISCARD_SUPPORT = 2,
+ RSXX_EEH_SUPPORT = 3,
};
enum rsxx_creg_cmd {
void rsxx_dma_destroy(struct rsxx_cardinfo *card);
int rsxx_dma_init(void);
void rsxx_dma_cleanup(void);
+void rsxx_dma_queue_reset(struct rsxx_cardinfo *card);
+int rsxx_dma_configure(struct rsxx_cardinfo *card);
int rsxx_dma_queue_bio(struct rsxx_cardinfo *card,
struct bio *bio,
atomic_t *n_dmas,
rsxx_dma_cb cb,
void *cb_data);
+int rsxx_hw_buffers_init(struct pci_dev *dev, struct rsxx_dma_ctrl *ctrl);
+int rsxx_eeh_save_issued_dmas(struct rsxx_cardinfo *card);
+void rsxx_eeh_cancel_dmas(struct rsxx_cardinfo *card);
+int rsxx_eeh_remap_dmas(struct rsxx_cardinfo *card);
/***** cregs.c *****/
int rsxx_creg_write(struct rsxx_cardinfo *card, u32 addr,
void rsxx_creg_destroy(struct rsxx_cardinfo *card);
int rsxx_creg_init(void);
void rsxx_creg_cleanup(void);
-
int rsxx_reg_access(struct rsxx_cardinfo *card,
struct rsxx_reg_access __user *ucmd,
int read);
+void rsxx_eeh_save_issued_creg(struct rsxx_cardinfo *card);
+void rsxx_kick_creg_queue(struct rsxx_cardinfo *card);
#define foreach_grant_safe(pos, n, rbtree, node) \
for ((pos) = container_of(rb_first((rbtree)), typeof(*(pos)), node), \
- (n) = rb_next(&(pos)->node); \
+ (n) = (&(pos)->node != NULL) ? rb_next(&(pos)->node) : NULL; \
&(pos)->node != NULL; \
(pos) = container_of(n, typeof(*(pos)), node), \
(n) = (&(pos)->node != NULL) ? rb_next(&(pos)->node) : NULL)
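The extra NULL test in the initializer guards an empty tree: rb_first() then returns NULL, and the old form would have passed that NULL straight into rb_next(). A usage sketch (struct persistent_gnt and the blkif->persistent_gnts rb_root mirror the backend's teardown path, but treat the names as assumptions):

	struct persistent_gnt *pos;
	struct rb_node *n;

	/* safely free every node, removing each before advancing */
	foreach_grant_safe(pos, n, &blkif->persistent_gnts, node) {
		rb_erase(&pos->node, &blkif->persistent_gnts);
		kfree(pos);
	}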
static void print_stats(struct xen_blkif *blkif)
{
- pr_info("xen-blkback (%s): oo %3d | rd %4d | wr %4d | f %4d"
- " | ds %4d\n",
+ pr_info("xen-blkback (%s): oo %3llu | rd %4llu | wr %4llu | f %4llu"
+ " | ds %4llu\n",
current->comm, blkif->st_oo_req,
blkif->st_rd_req, blkif->st_wr_req,
blkif->st_f_req, blkif->st_ds_req);
}
struct seg_buf {
- unsigned long buf;
+ unsigned int offset;
unsigned int nsec;
};
/*
* If this is a new persistent grant
* save the handler
*/
- persistent_gnts[i]->handle = map[j].handle;
- persistent_gnts[i]->dev_bus_addr =
- map[j++].dev_bus_addr;
+ persistent_gnts[i]->handle = map[j++].handle;
}
pending_handle(pending_req, i) =
persistent_gnts[i]->handle;
if (ret)
continue;
-
- seg[i].buf = persistent_gnts[i]->dev_bus_addr |
- (req->u.rw.seg[i].first_sect << 9);
} else {
- pending_handle(pending_req, i) = map[j].handle;
+ pending_handle(pending_req, i) = map[j++].handle;
bitmap_set(pending_req->unmap_seg, i, 1);
- if (ret) {
- j++;
+ if (ret)
continue;
- }
-
- seg[i].buf = map[j++].dev_bus_addr |
- (req->u.rw.seg[i].first_sect << 9);
}
+ seg[i].offset = (req->u.rw.seg[i].first_sect << 9);
}
return ret;
}
return err;
}
+static int dispatch_other_io(struct xen_blkif *blkif,
+ struct blkif_request *req,
+ struct pending_req *pending_req)
+{
+ free_req(pending_req);
+ make_response(blkif, req->u.other.id, req->operation,
+ BLKIF_RSP_EOPNOTSUPP);
+ return -EIO;
+}
+
static void xen_blk_drain_io(struct xen_blkif *blkif)
{
atomic_set(&blkif->drain, 1);
/* Apply all sanity checks to /private copy/ of request. */
barrier();
- if (unlikely(req.operation == BLKIF_OP_DISCARD)) {
+
+ switch (req.operation) {
+ case BLKIF_OP_READ:
+ case BLKIF_OP_WRITE:
+ case BLKIF_OP_WRITE_BARRIER:
+ case BLKIF_OP_FLUSH_DISKCACHE:
+ if (dispatch_rw_block_io(blkif, &req, pending_req))
+ goto done;
+ break;
+ case BLKIF_OP_DISCARD:
free_req(pending_req);
if (dispatch_discard_io(blkif, &req))
- break;
- } else if (dispatch_rw_block_io(blkif, &req, pending_req))
+ goto done;
break;
+ default:
+ if (dispatch_other_io(blkif, &req, pending_req))
+ goto done;
+ break;
+ }
/* Yield point for this unbounded loop. */
cond_resched();
}
-
+done:
return more_to_do;
}
pr_debug(DRV_PFX "access denied: %s of [%llu,%llu] on dev=%04x\n",
operation == READ ? "read" : "write",
preq.sector_number,
- preq.sector_number + preq.nr_sects, preq.dev);
+ preq.sector_number + preq.nr_sects,
+ blkif->vbd.pdevice);
goto fail_response;
}
(bio_add_page(bio,
pages[i],
seg[i].nsec << 9,
- seg[i].buf & ~PAGE_MASK) == 0)) {
+ seg[i].offset) == 0)) {
bio = bio_alloc(GFP_KERNEL, nseg-i);
if (unlikely(bio == NULL))
bio->bi_end_io = end_block_io_op;
}
- /*
- * We set it one so that the last submit_bio does not have to call
- * atomic_inc.
- */
atomic_set(&pending_req->pendcnt, nbio);
-
- /* Get a reference count for the disk queue and start sending I/O */
blk_start_plug(&plug);
for (i = 0; i < nbio; i++)
fail_put_bio:
for (i = 0; i < nbio; i++)
bio_put(biolist[i]);
+ atomic_set(&pending_req->pendcnt, 1);
__end_block_io_op(pending_req, -EINVAL);
msleep(1); /* back off a bit */
return -EIO;
uint64_t nr_sectors;
} __attribute__((__packed__));
+struct blkif_x86_32_request_other {
+ uint8_t _pad1;
+ blkif_vdev_t _pad2;
+ uint64_t id; /* private guest value, echoed in resp */
+} __attribute__((__packed__));
+
struct blkif_x86_32_request {
uint8_t operation; /* BLKIF_OP_??? */
union {
struct blkif_x86_32_request_rw rw;
struct blkif_x86_32_request_discard discard;
+ struct blkif_x86_32_request_other other;
} u;
} __attribute__((__packed__));
uint64_t nr_sectors;
} __attribute__((__packed__));
+struct blkif_x86_64_request_other {
+ uint8_t _pad1;
+ blkif_vdev_t _pad2;
+ uint32_t _pad3; /* offsetof(blkif_..,u.discard.id)==8 */
+ uint64_t id; /* private guest value, echoed in resp */
+} __attribute__((__packed__));
+
struct blkif_x86_64_request {
uint8_t operation; /* BLKIF_OP_??? */
union {
struct blkif_x86_64_request_rw rw;
struct blkif_x86_64_request_discard discard;
+ struct blkif_x86_64_request_other other;
} u;
} __attribute__((__packed__));
struct page *page;
grant_ref_t gnt;
grant_handle_t handle;
- uint64_t dev_bus_addr;
struct rb_node node;
};
/* statistics */
unsigned long st_print;
- int st_rd_req;
- int st_wr_req;
- int st_oo_req;
- int st_f_req;
- int st_ds_req;
- int st_rd_sect;
- int st_wr_sect;
+ unsigned long long st_rd_req;
+ unsigned long long st_wr_req;
+ unsigned long long st_oo_req;
+ unsigned long long st_f_req;
+ unsigned long long st_ds_req;
+ unsigned long long st_rd_sect;
+ unsigned long long st_wr_sect;
wait_queue_head_t waiting_to_free;
};
dst->u.discard.nr_sectors = src->u.discard.nr_sectors;
break;
default:
+ /*
+ * Don't know how to translate this op. Only get the
+ * ID so failure can be reported to the frontend.
+ */
+ dst->u.other.id = src->u.other.id;
break;
}
}
dst->u.discard.nr_sectors = src->u.discard.nr_sectors;
break;
default:
+ /*
+ * Don't know how to translate this op. Only get the
+ * ID so failure can be reported to the frontend.
+ */
+ dst->u.other.id = src->u.other.id;
break;
}
}
} \
static DEVICE_ATTR(name, S_IRUGO, show_##name, NULL)
-VBD_SHOW(oo_req, "%d\n", be->blkif->st_oo_req);
-VBD_SHOW(rd_req, "%d\n", be->blkif->st_rd_req);
-VBD_SHOW(wr_req, "%d\n", be->blkif->st_wr_req);
-VBD_SHOW(f_req, "%d\n", be->blkif->st_f_req);
-VBD_SHOW(ds_req, "%d\n", be->blkif->st_ds_req);
-VBD_SHOW(rd_sect, "%d\n", be->blkif->st_rd_sect);
-VBD_SHOW(wr_sect, "%d\n", be->blkif->st_wr_sect);
+VBD_SHOW(oo_req, "%llu\n", be->blkif->st_oo_req);
+VBD_SHOW(rd_req, "%llu\n", be->blkif->st_rd_req);
+VBD_SHOW(wr_req, "%llu\n", be->blkif->st_wr_req);
+VBD_SHOW(f_req, "%llu\n", be->blkif->st_f_req);
+VBD_SHOW(ds_req, "%llu\n", be->blkif->st_ds_req);
+VBD_SHOW(rd_sect, "%llu\n", be->blkif->st_rd_sect);
+VBD_SHOW(wr_sect, "%llu\n", be->blkif->st_wr_sect);
static struct attribute *xen_vbdstat_attrs[] = {
&dev_attr_oo_req.attr,
#include <linux/mutex.h>
#include <linux/scatterlist.h>
#include <linux/bitmap.h>
-#include <linux/llist.h>
+#include <linux/list.h>
#include <xen/xen.h>
#include <xen/xenbus.h>
struct grant {
grant_ref_t gref;
unsigned long pfn;
- struct llist_node node;
+ struct list_head node;
};
struct blk_shadow {
struct blkif_request req;
struct request *request;
- unsigned long frame[BLKIF_MAX_SEGMENTS_PER_REQUEST];
struct grant *grants_used[BLKIF_MAX_SEGMENTS_PER_REQUEST];
};
struct work_struct work;
struct gnttab_free_callback callback;
struct blk_shadow shadow[BLK_RING_SIZE];
- struct llist_head persistent_gnts;
+ struct list_head persistent_gnts;
unsigned int persistent_gnts_c;
unsigned long shadow_free;
unsigned int feature_flush;
return 0;
}
+static int fill_grant_buffer(struct blkfront_info *info, int num)
+{
+ struct page *granted_page;
+ struct grant *gnt_list_entry, *n;
+ int i = 0;
+
+	while (i < num) {
+ gnt_list_entry = kzalloc(sizeof(struct grant), GFP_NOIO);
+ if (!gnt_list_entry)
+ goto out_of_memory;
+
+ granted_page = alloc_page(GFP_NOIO);
+ if (!granted_page) {
+ kfree(gnt_list_entry);
+ goto out_of_memory;
+ }
+
+ gnt_list_entry->pfn = page_to_pfn(granted_page);
+ gnt_list_entry->gref = GRANT_INVALID_REF;
+ list_add(&gnt_list_entry->node, &info->persistent_gnts);
+ i++;
+ }
+
+ return 0;
+
+out_of_memory:
+ list_for_each_entry_safe(gnt_list_entry, n,
+ &info->persistent_gnts, node) {
+ list_del(&gnt_list_entry->node);
+ __free_page(pfn_to_page(gnt_list_entry->pfn));
+ kfree(gnt_list_entry);
+ i--;
+ }
+ BUG_ON(i != 0);
+ return -ENOMEM;
+}
+
+static struct grant *get_grant(grant_ref_t *gref_head,
+ struct blkfront_info *info)
+{
+ struct grant *gnt_list_entry;
+ unsigned long buffer_mfn;
+
+ BUG_ON(list_empty(&info->persistent_gnts));
+ gnt_list_entry = list_first_entry(&info->persistent_gnts, struct grant,
+ node);
+ list_del(&gnt_list_entry->node);
+
+ if (gnt_list_entry->gref != GRANT_INVALID_REF) {
+ info->persistent_gnts_c--;
+ return gnt_list_entry;
+ }
+
+ /* Assign a gref to this page */
+ gnt_list_entry->gref = gnttab_claim_grant_reference(gref_head);
+ BUG_ON(gnt_list_entry->gref == -ENOSPC);
+ buffer_mfn = pfn_to_mfn(gnt_list_entry->pfn);
+ gnttab_grant_foreign_access_ref(gnt_list_entry->gref,
+ info->xbdev->otherend_id,
+ buffer_mfn, 0);
+ return gnt_list_entry;
+}
+
static const char *op_name(int op)
{
static const char *const names[] = {
static int blkif_queue_request(struct request *req)
{
struct blkfront_info *info = req->rq_disk->private_data;
- unsigned long buffer_mfn;
struct blkif_request *ring_req;
unsigned long id;
unsigned int fsect, lsect;
*/
bool new_persistent_gnts;
grant_ref_t gref_head;
- struct page *granted_page;
struct grant *gnt_list_entry = NULL;
struct scatterlist *sg;
fsect = sg->offset >> 9;
lsect = fsect + (sg->length >> 9) - 1;
- if (info->persistent_gnts_c) {
- BUG_ON(llist_empty(&info->persistent_gnts));
- gnt_list_entry = llist_entry(
- llist_del_first(&info->persistent_gnts),
- struct grant, node);
-
- ref = gnt_list_entry->gref;
- buffer_mfn = pfn_to_mfn(gnt_list_entry->pfn);
- info->persistent_gnts_c--;
- } else {
- ref = gnttab_claim_grant_reference(&gref_head);
- BUG_ON(ref == -ENOSPC);
-
- gnt_list_entry =
- kmalloc(sizeof(struct grant),
- GFP_ATOMIC);
- if (!gnt_list_entry)
- return -ENOMEM;
-
- granted_page = alloc_page(GFP_ATOMIC);
- if (!granted_page) {
- kfree(gnt_list_entry);
- return -ENOMEM;
- }
-
- gnt_list_entry->pfn =
- page_to_pfn(granted_page);
- gnt_list_entry->gref = ref;
-
- buffer_mfn = pfn_to_mfn(page_to_pfn(
- granted_page));
- gnttab_grant_foreign_access_ref(ref,
- info->xbdev->otherend_id,
- buffer_mfn, 0);
- }
+ gnt_list_entry = get_grant(&gref_head, info);
+ ref = gnt_list_entry->gref;
info->shadow[id].grants_used[i] = gnt_list_entry;
kunmap_atomic(shared_data);
}
- info->shadow[id].frame[i] = mfn_to_pfn(buffer_mfn);
ring_req->u.rw.seg[i] =
(struct blkif_request_segment) {
.gref = ref,
static void blkif_free(struct blkfront_info *info, int suspend)
{
- struct llist_node *all_gnts;
- struct grant *persistent_gnt, *tmp;
- struct llist_node *n;
+ struct grant *persistent_gnt;
+ struct grant *n;
/* Prevent new requests being issued until we fix things up. */
spin_lock_irq(&info->io_lock);
blk_stop_queue(info->rq);
/* Remove all persistent grants */
- if (info->persistent_gnts_c) {
- all_gnts = llist_del_all(&info->persistent_gnts);
- persistent_gnt = llist_entry(all_gnts, typeof(*(persistent_gnt)), node);
- while (persistent_gnt) {
- gnttab_end_foreign_access(persistent_gnt->gref, 0, 0UL);
+ if (!list_empty(&info->persistent_gnts)) {
+ list_for_each_entry_safe(persistent_gnt, n,
+ &info->persistent_gnts, node) {
+ list_del(&persistent_gnt->node);
+ if (persistent_gnt->gref != GRANT_INVALID_REF) {
+ gnttab_end_foreign_access(persistent_gnt->gref,
+ 0, 0UL);
+ info->persistent_gnts_c--;
+ }
__free_page(pfn_to_page(persistent_gnt->pfn));
- tmp = persistent_gnt;
- n = persistent_gnt->node.next;
- if (n)
- persistent_gnt = llist_entry(n, typeof(*(persistent_gnt)), node);
- else
- persistent_gnt = NULL;
- kfree(tmp);
+ kfree(persistent_gnt);
}
- info->persistent_gnts_c = 0;
}
+ BUG_ON(info->persistent_gnts_c != 0);
/* No more gnttab callback work. */
gnttab_cancel_free_callback(&info->callback);
}
/* Add the persistent grant into the list of free grants */
for (i = 0; i < s->req.u.rw.nr_segments; i++) {
- llist_add(&s->grants_used[i]->node, &info->persistent_gnts);
+ list_add(&s->grants_used[i]->node, &info->persistent_gnts);
info->persistent_gnts_c++;
}
}
sg_init_table(info->sg, BLKIF_MAX_SEGMENTS_PER_REQUEST);
+ /* Allocate memory for grants */
+ err = fill_grant_buffer(info, BLK_RING_SIZE *
+ BLKIF_MAX_SEGMENTS_PER_REQUEST);
+ if (err)
+ goto fail;
+
err = xenbus_grant_ring(dev, virt_to_mfn(info->ring.sring));
if (err < 0) {
free_page((unsigned long)sring);
spin_lock_init(&info->io_lock);
info->xbdev = dev;
info->vdevice = vdevice;
- init_llist_head(&info->persistent_gnts);
+ INIT_LIST_HEAD(&info->persistent_gnts);
info->persistent_gnts_c = 0;
info->connected = BLKIF_STATE_DISCONNECTED;
INIT_WORK(&info->work, blkif_restart_queue);
int j;
/* Stage 1: Make a safe copy of the shadow state. */
- copy = kmalloc(sizeof(info->shadow),
+ copy = kmemdup(info->shadow, sizeof(info->shadow),
GFP_NOIO | __GFP_REPEAT | __GFP_HIGH);
if (!copy)
return -ENOMEM;
- memcpy(copy, info->shadow, sizeof(info->shadow));
/* Stage 2: Set up free list. */
memset(&info->shadow, 0, sizeof(info->shadow));
gnttab_grant_foreign_access_ref(
req->u.rw.seg[j].gref,
info->xbdev->otherend_id,
- pfn_to_mfn(info->shadow[req->u.rw.id].frame[j]),
+ pfn_to_mfn(copy[i].grants_used[j]->pfn),
0);
}
info->shadow[req->u.rw.id].req = *req;
{ USB_DEVICE(0x03F0, 0x311D) },
/* Atheros AR3012 with sflash firmware*/
+ { USB_DEVICE(0x0CF3, 0x0036) },
{ USB_DEVICE(0x0CF3, 0x3004) },
+ { USB_DEVICE(0x0CF3, 0x3008) },
{ USB_DEVICE(0x0CF3, 0x311D) },
+ { USB_DEVICE(0x0CF3, 0x817a) },
{ USB_DEVICE(0x13d3, 0x3375) },
+ { USB_DEVICE(0x04CA, 0x3004) },
{ USB_DEVICE(0x04CA, 0x3005) },
{ USB_DEVICE(0x04CA, 0x3006) },
{ USB_DEVICE(0x04CA, 0x3008) },
static struct usb_device_id ath3k_blist_tbl[] = {
/* Atheros AR3012 with sflash firmware*/
+ { USB_DEVICE(0x0CF3, 0x0036), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0x3004), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0cf3, 0x3008), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0x311D), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0CF3, 0x817a), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x13d3, 0x3375), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x04ca, 0x3004), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x3005), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x3006), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x3008), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x03f0, 0x311d), .driver_info = BTUSB_IGNORE },
/* Atheros 3012 with sflash firmware */
+ { USB_DEVICE(0x0cf3, 0x0036), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0x3004), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0cf3, 0x3008), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0x311d), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0cf3, 0x817a), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x13d3, 0x3375), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x04ca, 0x3004), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x3005), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x3006), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x3008), .driver_info = BTUSB_ATH3012 },
divisor = parent_rate / rate;
	/* If parent_rate / rate would be fractional, increment the divisor */
- if (rate * divisor < *prate)
+ if (rate * divisor < parent_rate)
divisor++;
if (divisor == cdev->div_mask + 1)
policy->shared_type == CPUFREQ_SHARED_TYPE_ANY) {
cpumask_copy(policy->cpus, perf->shared_cpu_map);
}
- cpumask_copy(policy->related_cpus, perf->shared_cpu_map);
#ifdef CONFIG_SMP
dmi_check_system(sw_any_bug_dmi_table);
if (check_amd_hwpstate_cpu(cpu) && !acpi_pstate_strict) {
cpumask_clear(policy->cpus);
cpumask_set_cpu(cpu, policy->cpus);
- cpumask_copy(policy->related_cpus, cpu_sibling_mask(cpu));
policy->shared_type = CPUFREQ_SHARED_TYPE_HW;
pr_info_once(PFX "overriding BIOS provided _PSD data\n");
}
{
struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
- if (!cpufreq_frequency_get_table(cpu))
+ if (!policy)
return;
- if (policy && !policy_is_shared(policy)) {
+ if (!cpufreq_frequency_get_table(cpu))
+ goto put_ref;
+
+ if (!policy_is_shared(policy)) {
pr_debug("%s: Free sysfs stat\n", __func__);
sysfs_remove_group(&policy->kobj, &stats_attr_group);
}
- if (policy)
- cpufreq_cpu_put(policy);
+
+put_ref:
+ cpufreq_cpu_put(policy);
}
static int cpufreq_stats_create_table(struct cpufreq_policy *policy,
static int intel_pstate_min_pstate(void)
{
u64 value;
- rdmsrl(0xCE, value);
+ rdmsrl(MSR_PLATFORM_INFO, value);
return (value >> 40) & 0xFF;
}
static int intel_pstate_max_pstate(void)
{
u64 value;
- rdmsrl(0xCE, value);
+ rdmsrl(MSR_PLATFORM_INFO, value);
return (value >> 8) & 0xFF;
}
{
u64 value;
int nont, ret;
- rdmsrl(0x1AD, value);
+ rdmsrl(MSR_NHM_TURBO_RATIO_LIMIT, value);
nont = intel_pstate_max_pstate();
ret = ((value) & 255);
if (ret <= nont)
sample->idletime_us * 100,
sample->duration_us);
core_pct = div64_u64(sample->aperf * 100, sample->mperf);
- sample->freq = cpu->pstate.turbo_pstate * core_pct * 1000;
+ sample->freq = cpu->pstate.max_pstate * core_pct * 1000;
sample->core_pct_busy = div_s64((sample->pstate_pct_busy * core_pct),
100);
static int __initdata no_load;
+static int intel_pstate_msrs_not_valid(void)
+{
+	/* Check that all the MSRs we are using are valid. */
+ u64 aperf, mperf, tmp;
+
+ rdmsrl(MSR_IA32_APERF, aperf);
+ rdmsrl(MSR_IA32_MPERF, mperf);
+
+ if (!intel_pstate_min_pstate() ||
+ !intel_pstate_max_pstate() ||
+ !intel_pstate_turbo_pstate())
+ return -ENODEV;
+
+ rdmsrl(MSR_IA32_APERF, tmp);
+ if (!(tmp - aperf))
+ return -ENODEV;
+
+ rdmsrl(MSR_IA32_MPERF, tmp);
+ if (!(tmp - mperf))
+ return -ENODEV;
+
+ return 0;
+}
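The validity test works because APERF and MPERF are free-running counters: read twice in a row, a functional counter must have advanced, and a P-state field of zero means MSR_PLATFORM_INFO is not populated. A compressed user-space sketch of the read-twice check (generic C; the names are hypothetical):

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Returns true when a supposedly free-running counter actually ticks
     * between two consecutive reads -- the same test the driver applies
     * to APERF and MPERF before trusting them.
     */
    static bool counter_is_live(uint64_t (*read_counter)(void))
    {
        uint64_t first = read_counter();
        uint64_t second = read_counter();

        return second != first;
    }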
static int __init intel_pstate_init(void)
{
int cpu, rc = 0;
if (!id)
return -ENODEV;
+ if (intel_pstate_msrs_not_valid())
+ return -ENODEV;
+
pr_info("Intel P-state driver initializing.\n");
all_cpu_data = vmalloc(sizeof(void *) * num_possible_cpus());
};
static struct caam_alg_template driver_algs[] = {
- /*
- * single-pass ipsec_esp descriptor
- * authencesn(*,*) is also registered, although not present
- * explicitly here.
- */
+ /* single-pass ipsec_esp descriptor */
{
.name = "authenc(hmac(md5),cbc(aes))",
.driver_name = "authenc-hmac-md5-cbc-aes-caam",
for (i = 0; i < ARRAY_SIZE(driver_algs); i++) {
/* TODO: check if h/w supports alg */
struct caam_crypto_alg *t_alg;
- bool done = false;
-authencesn:
t_alg = caam_alg_alloc(ctrldev, &driver_algs[i]);
if (IS_ERR(t_alg)) {
err = PTR_ERR(t_alg);
dev_warn(ctrldev, "%s alg registration failed\n",
t_alg->crypto_alg.cra_driver_name);
kfree(t_alg);
- } else {
+ } else
list_add_tail(&t_alg->entry, &priv->alg_list);
- if (driver_algs[i].type == CRYPTO_ALG_TYPE_AEAD &&
- !memcmp(driver_algs[i].name, "authenc", 7) &&
- !done) {
- char *name;
-
- name = driver_algs[i].name;
- memmove(name + 10, name + 7, strlen(name) - 7);
- memcpy(name + 7, "esn", 3);
-
- name = driver_algs[i].driver_name;
- memmove(name + 10, name + 7, strlen(name) - 7);
- memcpy(name + 7, "esn", 3);
-
- done = true;
- goto authencesn;
- }
- }
}
if (!list_empty(&priv->alg_list))
dev_info(ctrldev, "%s algorithms registered in /proc/crypto\n",
#include <linux/types.h>
#include <linux/debugfs.h>
#include <linux/circ_buf.h>
-#include <linux/string.h>
#include <net/xfrm.h>
#include <crypto/algapi.h>
#include <linux/spinlock.h>
#include <linux/rtnetlink.h>
#include <linux/slab.h>
-#include <linux/string.h>
#include <crypto/algapi.h>
#include <crypto/aes.h>
};
static struct talitos_alg_template driver_algs[] = {
- /*
- * AEAD algorithms. These use a single-pass ipsec_esp descriptor.
- * authencesn(*,*) is also registered, although not present
- * explicitly here.
- */
+ /* AEAD algorithms. These use a single-pass ipsec_esp descriptor */
{ .type = CRYPTO_ALG_TYPE_AEAD,
.alg.crypto = {
.cra_name = "authenc(hmac(sha1),cbc(aes))",
if (hw_supports(dev, driver_algs[i].desc_hdr_template)) {
struct talitos_crypto_alg *t_alg;
char *name = NULL;
- bool authenc = false;
-authencesn:
t_alg = talitos_alg_alloc(dev, &driver_algs[i]);
if (IS_ERR(t_alg)) {
err = PTR_ERR(t_alg);
err = crypto_register_alg(
&t_alg->algt.alg.crypto);
name = t_alg->algt.alg.crypto.cra_driver_name;
- authenc = authenc ? !authenc :
- !(bool)memcmp(name, "authenc", 7);
break;
case CRYPTO_ALG_TYPE_AHASH:
err = crypto_register_ahash(
dev_err(dev, "%s alg registration failed\n",
name);
kfree(t_alg);
- } else {
+ } else
list_add_tail(&t_alg->entry, &priv->alg_list);
- if (authenc) {
- struct crypto_alg *alg =
- &driver_algs[i].alg.crypto;
-
- name = alg->cra_name;
- memmove(name + 10, name + 7,
- strlen(name) - 7);
- memcpy(name + 7, "esn", 3);
-
- name = alg->cra_driver_name;
- memmove(name + 10, name + 7,
- strlen(name) - 7);
- memcpy(name + 7, "esn", 3);
-
- goto authencesn;
- }
- }
}
}
if (!list_empty(&priv->alg_list))
*maxburst = 0;
}
+static inline void convert_slave_id(struct dw_dma_chan *dwc)
+{
+ struct dw_dma *dw = to_dw_dma(dwc->chan.device);
+
+ dwc->dma_sconfig.slave_id -= dw->request_line_base;
+}
+
static int
set_runtime_config(struct dma_chan *chan, struct dma_slave_config *sconfig)
{
convert_burst(&dwc->dma_sconfig.src_maxburst);
convert_burst(&dwc->dma_sconfig.dst_maxburst);
+ convert_slave_id(dwc);
return 0;
}
if (dma_spec->args_count != 3)
return NULL;
- fargs.req = be32_to_cpup(dma_spec->args+0);
- fargs.src = be32_to_cpup(dma_spec->args+1);
- fargs.dst = be32_to_cpup(dma_spec->args+2);
+ fargs.req = dma_spec->args[0];
+ fargs.src = dma_spec->args[1];
+ fargs.dst = dma_spec->args[2];
if (WARN_ON(fargs.req >= DW_DMA_MAX_NR_REQUESTS ||
fargs.src >= dw->nr_masters ||
static int dw_probe(struct platform_device *pdev)
{
+ const struct platform_device_id *match;
struct dw_dma_platform_data *pdata;
struct resource *io;
struct dw_dma *dw;
memcpy(dw->data_width, pdata->data_width, 4);
}
+ /* Get the base request line if set */
+ match = platform_get_device_id(pdev);
+ if (match)
+ dw->request_line_base = (unsigned int)match->driver_data;
+
/* Calculate all channel mask before DMA setup */
dw->all_chan_mask = (1 << nr_channels) - 1;
#endif
static const struct platform_device_id dw_dma_ids[] = {
- { "INTL9C60", 0 },
+ /* Name, Request Line Base */
+ { "INTL9C60", (kernel_ulong_t)16 },
{ }
};
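For example, the table above gives INTL9C60 a request line base of 16, so a slave_id of 18 supplied through dma_slave_config is rewritten by convert_slave_id() to hardware request line 2 before the channel is programmed.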
/* hardware configuration */
unsigned char nr_masters;
unsigned char data_width[4];
+ unsigned int request_line_base;
struct dw_dma_chan chan[0];
};
edac_dbg(1, "MC node: %d, csrow: %d\n",
pvt->mc_node_id, i);
- if (row_dct0)
+ if (row_dct0) {
nr_pages = amd64_csrow_nr_pages(pvt, 0, i);
+ csrow->channels[0]->dimm->nr_pages = nr_pages;
+ }
/* K8 has only one DCT */
- if (boot_cpu_data.x86 != 0xf && row_dct1)
- nr_pages += amd64_csrow_nr_pages(pvt, 1, i);
+ if (boot_cpu_data.x86 != 0xf && row_dct1) {
+ int row_dct1_pages = amd64_csrow_nr_pages(pvt, 1, i);
+
+ csrow->channels[1]->dimm->nr_pages = row_dct1_pages;
+ nr_pages += row_dct1_pages;
+ }
mtype = amd64_determine_memory_type(pvt, i);
dimm = csrow->channels[j]->dimm;
dimm->mtype = mtype;
dimm->edac_mode = edac_mode;
- dimm->nr_pages = nr_pages;
}
- csrow->nr_pages = nr_pages;
}
return empty;
mci->pvt_info = pvt;
mci->pdev = &pvt->F2->dev;
- mci->csbased = 1;
setup_mci_misc_attrs(mci, fam_type);
edac_dimm_info_location(dimm, location, sizeof(location));
edac_dbg(4, "%s%i: %smapped as virtual row %d, chan %d\n",
- dimm->mci->mem_is_per_rank ? "rank" : "dimm",
+ dimm->mci->csbased ? "rank" : "dimm",
number, location, dimm->csrow, dimm->cschannel);
edac_dbg(4, " dimm = %p\n", dimm);
edac_dbg(4, " dimm->label = '%s'\n", dimm->label);
memcpy(mci->layers, layers, sizeof(*layer) * n_layers);
mci->nr_csrows = tot_csrows;
mci->num_cschannel = tot_channels;
- mci->mem_is_per_rank = per_rank;
+ mci->csbased = per_rank;
/*
* Alocate and fill the csrow/channels structs
* incrementing the compat API counters
*/
edac_dbg(4, "%s csrows map: (%d,%d)\n",
- mci->mem_is_per_rank ? "rank" : "dimm",
+ mci->csbased ? "rank" : "dimm",
dimm->csrow, dimm->cschannel);
if (row == -1)
row = dimm->csrow;
* and the per-dimm/per-rank one
*/
#define DEVICE_ATTR_LEGACY(_name, _mode, _show, _store) \
- struct device_attribute dev_attr_legacy_##_name = __ATTR(_name, _mode, _show, _store)
+ static struct device_attribute dev_attr_legacy_##_name = __ATTR(_name, _mode, _show, _store)
struct dev_ch_attribute {
struct device_attribute attr;
int i;
u32 nr_pages = 0;
- if (csrow->mci->csbased)
- return sprintf(data, "%u\n", PAGES_TO_MiB(csrow->nr_pages));
-
for (i = 0; i < csrow->nr_channels; i++)
nr_pages += csrow->channels[i]->dimm->nr_pages;
return sprintf(data, "%u\n", PAGES_TO_MiB(nr_pages));
device_initialize(&dimm->dev);
dimm->dev.parent = &mci->dev;
- if (mci->mem_is_per_rank)
+ if (mci->csbased)
dev_set_name(&dimm->dev, "rank%d", index);
else
dev_set_name(&dimm->dev, "dimm%d", index);
for (csrow_idx = 0; csrow_idx < mci->nr_csrows; csrow_idx++) {
struct csrow_info *csrow = mci->csrows[csrow_idx];
- if (csrow->mci->csbased) {
- total_pages += csrow->nr_pages;
- } else {
- for (j = 0; j < csrow->nr_channels; j++) {
- struct dimm_info *dimm = csrow->channels[j]->dimm;
+ for (j = 0; j < csrow->nr_channels; j++) {
+ struct dimm_info *dimm = csrow->channels[j]->dimm;
- total_pages += dimm->nr_pages;
- }
+ total_pages += dimm->nr_pages;
}
}
#define DEV_NAME "max77693-muic"
#define DELAY_MS_DEFAULT 20000 /* unit: millisecond */
+/*
+ * Default values of MAX77693 registers to bring up the MUIC device.
+ * If the user doesn't set initial values for the MUIC device through platform
+ * data, the extcon-max77693 driver uses 'default_init_data' to bring up the
+ * base operation of the MAX77693 MUIC device.
+ */
+struct max77693_reg_data default_init_data[] = {
+ {
+ /* STATUS2 - [3]ChgDetRun */
+ .addr = MAX77693_MUIC_REG_STATUS2,
+ .data = STATUS2_CHGDETRUN_MASK,
+ }, {
+ /* INTMASK1 - Unmask [3]ADC1KM,[0]ADCM */
+ .addr = MAX77693_MUIC_REG_INTMASK1,
+ .data = INTMASK1_ADC1K_MASK
+ | INTMASK1_ADC_MASK,
+ }, {
+ /* INTMASK2 - Unmask [0]ChgTypM */
+ .addr = MAX77693_MUIC_REG_INTMASK2,
+ .data = INTMASK2_CHGTYP_MASK,
+ }, {
+ /* INTMASK3 - Mask all of interrupts */
+ .addr = MAX77693_MUIC_REG_INTMASK3,
+ .data = 0x0,
+ }, {
+ /* CDETCTRL2 */
+ .addr = MAX77693_MUIC_REG_CDETCTRL2,
+ .data = CDETCTRL2_VIDRMEN_MASK
+ | CDETCTRL2_DXOVPEN_MASK,
+ },
+};
+
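A board that needs different bring-up values can still pass its own table through platform data, which the probe path below prefers over these defaults. A minimal sketch of such a board file fragment (the initializer values are hypothetical; the field names follow the max77693_muic_platform_data usage in this driver):

    static struct max77693_reg_data board_init_data[] = {
        {
            /* INTMASK3 - mask all of the interrupts */
            .addr = MAX77693_MUIC_REG_INTMASK3,
            .data = 0x0,
        },
    };

    static struct max77693_muic_platform_data board_muic_pdata = {
        .init_data = board_init_data,
        .num_init_data = ARRAY_SIZE(board_init_data),
    };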
enum max77693_muic_adc_debounce_time {
ADC_DEBOUNCE_TIME_5MS = 0,
ADC_DEBOUNCE_TIME_10MS,
{
struct max77693_dev *max77693 = dev_get_drvdata(pdev->dev.parent);
struct max77693_platform_data *pdata = dev_get_platdata(max77693->dev);
- struct max77693_muic_platform_data *muic_pdata = pdata->muic_data;
struct max77693_muic_info *info;
+ struct max77693_reg_data *init_data;
+ int num_init_data;
int delay_jiffies;
int ret;
int i;
goto err_irq;
}
- /* Initialize MUIC register by using platform data */
- for (i = 0 ; i < muic_pdata->num_init_data ; i++) {
- enum max77693_irq_source irq_src = MAX77693_IRQ_GROUP_NR;
+
+ /* Initialize MUIC register by using platform data or default data */
+ if (pdata->muic_data) {
+ init_data = pdata->muic_data->init_data;
+ num_init_data = pdata->muic_data->num_init_data;
+ } else {
+ init_data = default_init_data;
+ num_init_data = ARRAY_SIZE(default_init_data);
+ }
+
+	for (i = 0; i < num_init_data; i++) {
+ enum max77693_irq_source irq_src
+ = MAX77693_IRQ_GROUP_NR;
max77693_write_reg(info->max77693->regmap_muic,
- muic_pdata->init_data[i].addr,
- muic_pdata->init_data[i].data);
+ init_data[i].addr,
+ init_data[i].data);
- switch (muic_pdata->init_data[i].addr) {
+ switch (init_data[i].addr) {
case MAX77693_MUIC_REG_INTMASK1:
irq_src = MUIC_INT1;
break;
if (irq_src < MAX77693_IRQ_GROUP_NR)
info->max77693->irq_masks_cur[irq_src]
- = muic_pdata->init_data[i].data;
+ = init_data[i].data;
}
- /*
- * Default usb/uart path whether UART/USB or AUX_UART/AUX_USB
- * h/w path of COMP2/COMN1 on CONTROL1 register.
- */
- if (muic_pdata->path_uart)
- info->path_uart = muic_pdata->path_uart;
- else
- info->path_uart = CONTROL1_SW_UART;
+ if (pdata->muic_data) {
+ struct max77693_muic_platform_data *muic_pdata = pdata->muic_data;
- if (muic_pdata->path_usb)
- info->path_usb = muic_pdata->path_usb;
- else
+ /*
+	 * Default USB/UART path, either UART/USB or AUX_UART/AUX_USB,
+	 * selecting the h/w path of COMP2/COMN1 in the CONTROL1 register.
+ */
+ if (muic_pdata->path_uart)
+ info->path_uart = muic_pdata->path_uart;
+ else
+ info->path_uart = CONTROL1_SW_UART;
+
+ if (muic_pdata->path_usb)
+ info->path_usb = muic_pdata->path_usb;
+ else
+ info->path_usb = CONTROL1_SW_USB;
+
+ /*
+		 * Default delay before the cable state is detected.
+ */
+ if (muic_pdata->detcable_delay_ms)
+ delay_jiffies =
+ msecs_to_jiffies(muic_pdata->detcable_delay_ms);
+ else
+ delay_jiffies = msecs_to_jiffies(DELAY_MS_DEFAULT);
+ } else {
info->path_usb = CONTROL1_SW_USB;
+ info->path_uart = CONTROL1_SW_UART;
+ delay_jiffies = msecs_to_jiffies(DELAY_MS_DEFAULT);
+ }
/* Set initial path for UART */
max77693_muic_set_path(info, info->path_uart, true);
* driver should notify cable state to upper layer.
*/
INIT_DELAYED_WORK(&info->wq_detcable, max77693_muic_detect_cable_wq);
- if (muic_pdata->detcable_delay_ms)
- delay_jiffies = msecs_to_jiffies(muic_pdata->detcable_delay_ms);
- else
- delay_jiffies = msecs_to_jiffies(DELAY_MS_DEFAULT);
schedule_delayed_work(&info->wq_detcable, delay_jiffies);
return ret;
goto err_irq;
}
- /* Initialize registers according to platform data */
if (pdata->muic_pdata) {
- struct max8997_muic_platform_data *mdata = info->muic_pdata;
-
- for (i = 0; i < mdata->num_init_data; i++) {
- max8997_write_reg(info->muic, mdata->init_data[i].addr,
- mdata->init_data[i].data);
+ struct max8997_muic_platform_data *muic_pdata
+ = pdata->muic_pdata;
+
+ /* Initialize registers according to platform data */
+ for (i = 0; i < muic_pdata->num_init_data; i++) {
+ max8997_write_reg(info->muic,
+ muic_pdata->init_data[i].addr,
+ muic_pdata->init_data[i].data);
}
- }
- /*
- * Default usb/uart path whether UART/USB or AUX_UART/AUX_USB
- * h/w path of COMP2/COMN1 on CONTROL1 register.
- */
- if (pdata->muic_pdata->path_uart)
- info->path_uart = pdata->muic_pdata->path_uart;
- else
- info->path_uart = CONTROL1_SW_UART;
+ /*
+ * Default usb/uart path whether UART/USB or AUX_UART/AUX_USB
+ * h/w path of COMP2/COMN1 on CONTROL1 register.
+ */
+ if (muic_pdata->path_uart)
+ info->path_uart = muic_pdata->path_uart;
+ else
+ info->path_uart = CONTROL1_SW_UART;
- if (pdata->muic_pdata->path_usb)
- info->path_usb = pdata->muic_pdata->path_usb;
- else
+ if (muic_pdata->path_usb)
+ info->path_usb = muic_pdata->path_usb;
+ else
+ info->path_usb = CONTROL1_SW_USB;
+
+ /*
+ * Default delay time for detecting cable state
+ * after certain time.
+ */
+ if (muic_pdata->detcable_delay_ms)
+ delay_jiffies =
+ msecs_to_jiffies(muic_pdata->detcable_delay_ms);
+ else
+ delay_jiffies = msecs_to_jiffies(DELAY_MS_DEFAULT);
+ } else {
+ info->path_uart = CONTROL1_SW_UART;
info->path_usb = CONTROL1_SW_USB;
+ delay_jiffies = msecs_to_jiffies(DELAY_MS_DEFAULT);
+ }
/* Set initial path for UART */
max8997_muic_set_path(info, info->path_uart, true);
* driver should notify cable state to upper layer.
*/
INIT_DELAYED_WORK(&info->wq_detcable, max8997_muic_detect_cable_wq);
- if (pdata->muic_pdata->detcable_delay_ms)
- delay_jiffies = msecs_to_jiffies(pdata->muic_pdata->detcable_delay_ms);
- else
- delay_jiffies = msecs_to_jiffies(DELAY_MS_DEFAULT);
schedule_delayed_work(&info->wq_detcable, delay_jiffies);
return 0;
Subsequent efibootmgr releases may be found at:
<http://linux.dell.com/efibootmgr>
+config EFI_VARS_PSTORE
+ bool "Register efivars backend for pstore"
+ depends on EFI_VARS && PSTORE
+ default y
+ help
+	  Say Y here to enable the use of efivars as a backend for pstore. This
+ will allow writing console messages, crash dumps, or anything
+ else supported by pstore to EFI variables.
+
+config EFI_VARS_PSTORE_DEFAULT_DISABLE
+ bool "Disable using efivars as a pstore backend by default"
+ depends on EFI_VARS_PSTORE
+ default n
+ help
+ Saying Y here will disable the use of efivars as a storage
+ backend for pstore by default. This setting can be overridden
+ using the efivars module's pstore_disable parameter.
+
config EFI_PCDP
bool "Console device selection via EFI PCDP or HCDP table"
depends on ACPI && EFI && IA64
*/
#define GUID_LEN 36
+static bool efivars_pstore_disable =
+ IS_ENABLED(CONFIG_EFI_VARS_PSTORE_DEFAULT_DISABLE);
+
+module_param_named(pstore_disable, efivars_pstore_disable, bool, 0644);
+
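Because the parameter is declared with mode 0644, no rebuild is needed to change it: a built-in efivars honours efivars.pstore_disable=1 on the kernel command line, and the value is also visible under /sys/module/efivars/parameters/pstore_disable. Note that it is only consulted when efivars registers with pstore, so flipping it after boot does not undo an existing registration.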
/*
* The maximum size of VariableName + Data = 1024
* Therefore, it's reasonable to save that much
static void efivar_update_sysfs_entries(struct work_struct *);
static DECLARE_WORK(efivar_work, efivar_update_sysfs_entries);
+static bool efivar_wq_enabled = true;
/* Return the number of unicode characters in data */
static unsigned long
.create = efivarfs_create,
};
-static struct pstore_info efi_pstore_info;
-
-#ifdef CONFIG_PSTORE
+#ifdef CONFIG_EFI_VARS_PSTORE
static int efi_pstore_open(struct pstore_info *psi)
{
spin_unlock_irqrestore(&efivars->lock, flags);
- if (reason == KMSG_DUMP_OOPS)
+ if (reason == KMSG_DUMP_OOPS && efivar_wq_enabled)
schedule_work(&efivar_work);
*id = part;
return 0;
}
-#else
-static int efi_pstore_open(struct pstore_info *psi)
-{
- return 0;
-}
-
-static int efi_pstore_close(struct pstore_info *psi)
-{
- return 0;
-}
-
-static ssize_t efi_pstore_read(u64 *id, enum pstore_type_id *type, int *count,
- struct timespec *timespec,
- char **buf, struct pstore_info *psi)
-{
- return -1;
-}
-
-static int efi_pstore_write(enum pstore_type_id type,
- enum kmsg_dump_reason reason, u64 *id,
- unsigned int part, int count, size_t size,
- struct pstore_info *psi)
-{
- return 0;
-}
-
-static int efi_pstore_erase(enum pstore_type_id type, u64 id, int count,
- struct timespec time, struct pstore_info *psi)
-{
- return 0;
-}
-#endif
static struct pstore_info efi_pstore_info = {
.owner = THIS_MODULE,
.erase = efi_pstore_erase,
};
+static void efivar_pstore_register(struct efivars *efivars)
+{
+ efivars->efi_pstore_info = efi_pstore_info;
+ efivars->efi_pstore_info.buf = kmalloc(4096, GFP_KERNEL);
+ if (efivars->efi_pstore_info.buf) {
+ efivars->efi_pstore_info.bufsize = 1024;
+ efivars->efi_pstore_info.data = efivars;
+ spin_lock_init(&efivars->efi_pstore_info.buf_lock);
+ pstore_register(&efivars->efi_pstore_info);
+ }
+}
+#else
+static void efivar_pstore_register(struct efivars *efivars)
+{
+ return;
+}
+#endif
+
static ssize_t efivar_create(struct file *filp, struct kobject *kobj,
struct bin_attribute *bin_attr,
char *buf, loff_t pos, size_t count)
return found;
}
+/*
+ * Returns the size of variable_name, in bytes, including the
+ * terminating NULL character, or variable_name_size if no NULL
+ * character is found among the first variable_name_size bytes.
+ */
+static unsigned long var_name_strnsize(efi_char16_t *variable_name,
+ unsigned long variable_name_size)
+{
+ unsigned long len;
+ efi_char16_t c;
+
+ /*
+ * The variable name is, by definition, a NULL-terminated
+ * string, so make absolutely sure that variable_name_size is
+ * the value we expect it to be. If not, return the real size.
+ */
+ for (len = 2; len <= variable_name_size; len += sizeof(c)) {
+ c = variable_name[(len / sizeof(c)) - 1];
+ if (!c)
+ break;
+ }
+
+ return min(len, variable_name_size);
+}
+
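As a worked example, a variable named "Boot0000" is eight UTF-16 characters plus the terminating NUL, so the helper returns (8 + 1) * 2 = 18 bytes even when the firmware reported a much larger variable_name_size.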
static void efivar_update_sysfs_entries(struct work_struct *work)
{
struct efivars *efivars = &__efivars;
if (!found) {
kfree(variable_name);
break;
- } else
+ } else {
+ variable_name_size = var_name_strnsize(variable_name,
+ variable_name_size);
efivar_create_sysfs_entry(efivars,
variable_name_size,
variable_name, &vendor);
+ }
}
}
}
EXPORT_SYMBOL_GPL(unregister_efivars);
+/*
+ * Print a warning when duplicate EFI variables are encountered and
+ * disable the sysfs workqueue since the firmware is buggy.
+ */
+static void dup_variable_bug(efi_char16_t *s16, efi_guid_t *vendor_guid,
+ unsigned long len16)
+{
+ size_t i, len8 = len16 / sizeof(efi_char16_t);
+ char *s8;
+
+ /*
+ * Disable the workqueue since the algorithm it uses for
+ * detecting new variables won't work with this buggy
+ * implementation of GetNextVariableName().
+ */
+ efivar_wq_enabled = false;
+
+ s8 = kzalloc(len8, GFP_KERNEL);
+ if (!s8)
+ return;
+
+ for (i = 0; i < len8; i++)
+ s8[i] = s16[i];
+
+ printk(KERN_WARNING "efivars: duplicate variable: %s-%pUl\n",
+ s8, vendor_guid);
+ kfree(s8);
+}
+
int register_efivars(struct efivars *efivars,
const struct efivar_operations *ops,
struct kobject *parent_kobj)
&vendor_guid);
switch (status) {
case EFI_SUCCESS:
+ variable_name_size = var_name_strnsize(variable_name,
+ variable_name_size);
+
+ /*
+ * Some firmware implementations return the
+ * same variable name on multiple calls to
+ * get_next_variable(). Terminate the loop
+ * immediately as there is no guarantee that
+			 * we'll ever see a different variable name
+			 * and we may otherwise loop here forever.
+ */
+ if (variable_is_present(variable_name, &vendor_guid)) {
+ dup_variable_bug(variable_name, &vendor_guid,
+ variable_name_size);
+ status = EFI_NOT_FOUND;
+ break;
+ }
+
efivar_create_sysfs_entry(efivars,
variable_name_size,
variable_name,
if (error)
unregister_efivars(efivars);
- efivars->efi_pstore_info = efi_pstore_info;
-
- efivars->efi_pstore_info.buf = kmalloc(4096, GFP_KERNEL);
- if (efivars->efi_pstore_info.buf) {
- efivars->efi_pstore_info.bufsize = 1024;
- efivars->efi_pstore_info.data = efivars;
- spin_lock_init(&efivars->efi_pstore_info.buf_lock);
- pstore_register(&efivars->efi_pstore_info);
- }
+ if (!efivars_pstore_disable)
+ efivar_pstore_register(efivars);
register_filesystem(&efivarfs_type);
if (!np)
return;
- do {
+ for (;; index++) {
ret = of_parse_phandle_with_args(np, "gpio-ranges",
"#gpio-range-cells", index, &pinspec);
if (ret)
if (ret)
break;
-
- } while (index++);
+ }
}
#else
unsigned vblank = (pt->vactive_vblank_hi & 0xf) << 8 | pt->vblank_lo;
unsigned hsync_offset = (pt->hsync_vsync_offset_pulse_width_hi & 0xc0) << 2 | pt->hsync_offset_lo;
unsigned hsync_pulse_width = (pt->hsync_vsync_offset_pulse_width_hi & 0x30) << 4 | pt->hsync_pulse_width_lo;
- unsigned vsync_offset = (pt->hsync_vsync_offset_pulse_width_hi & 0xc) >> 2 | pt->vsync_offset_pulse_width_lo >> 4;
+ unsigned vsync_offset = (pt->hsync_vsync_offset_pulse_width_hi & 0xc) << 2 | pt->vsync_offset_pulse_width_lo >> 4;
unsigned vsync_pulse_width = (pt->hsync_vsync_offset_pulse_width_hi & 0x3) << 4 | (pt->vsync_offset_pulse_width_lo & 0xf);
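	/*
	 * The two MSBs of the 6-bit vsync offset sit in bits 3:2 of
	 * hsync_vsync_offset_pulse_width_hi and must land above the low
	 * nibble, hence "<< 2". Worked example with hypothetical bytes
	 * hi = 0x0c, lo = 0x5f:
	 *   new: (0x0c << 2) | (0x5f >> 4) = 0x30 | 0x05 = 0x35 (offset 53)
	 *   old: (0x0c >> 2) | (0x5f >> 4) = 0x03 | 0x05 = 0x07 (high bits lost)
	 */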
/* ignore tiny modes */
}
mode->type = DRM_MODE_TYPE_DRIVER;
+ mode->vrefresh = drm_mode_vrefresh(mode);
drm_mode_set_name(mode);
return mode;
/* position control register for hardware window 0, 2 ~ 4.*/
#define VIDOSD_A(win) (VIDOSD_BASE + 0x00 + (win) * 16)
#define VIDOSD_B(win) (VIDOSD_BASE + 0x04 + (win) * 16)
-/* size control register for hardware window 0. */
-#define VIDOSD_C_SIZE_W0 (VIDOSD_BASE + 0x08)
-/* alpha control register for hardware window 1 ~ 4. */
-#define VIDOSD_C(win) (VIDOSD_BASE + 0x18 + (win) * 16)
-/* size control register for hardware window 1 ~ 4. */
+/*
+ * size control register for hardware window 0 and alpha control register
+ * for hardware windows 1 ~ 4
+ */
+#define VIDOSD_C(win) (VIDOSD_BASE + 0x08 + (win) * 16)
+/* size control register for hardware windows 1 ~ 2. */
#define VIDOSD_D(win) (VIDOSD_BASE + 0x0C + (win) * 16)
#define VIDWx_BUF_START(win, buf) (VIDW_BUF_START(buf) + (win) * 8)
#define VIDWx_BUF_SIZE(win, buf) (VIDW_BUF_SIZE(buf) + (win) * 4)
/* color key control register for hardware window 1 ~ 4. */
-#define WKEYCON0_BASE(x) ((WKEYCON0 + 0x140) + (x * 8))
+#define WKEYCON0_BASE(x) ((WKEYCON0 + 0x140) + ((x - 1) * 8))
/* color key value register for hardware window 1 ~ 4. */
-#define WKEYCON1_BASE(x) ((WKEYCON1 + 0x140) + (x * 8))
+#define WKEYCON1_BASE(x) ((WKEYCON1 + 0x140) + ((x - 1) * 8))
/* FIMD has totally five hardware windows. */
#define WINDOWS_NR 5
#ifdef CONFIG_OF
static const struct of_device_id fimd_driver_dt_match[] = {
- { .compatible = "samsung,exynos4-fimd",
+ { .compatible = "samsung,exynos4210-fimd",
.data = &exynos4_fimd_driver_data },
- { .compatible = "samsung,exynos5-fimd",
+ { .compatible = "samsung,exynos5250-fimd",
.data = &exynos5_fimd_driver_data },
{},
};
if (win != 3 && win != 4) {
u32 offset = VIDOSD_D(win);
if (win == 0)
- offset = VIDOSD_C_SIZE_W0;
+ offset = VIDOSD_C(win);
val = win_data->ovl_width * win_data->ovl_height;
writel(val, ctx->regs + offset);
/* registers for base address */
#define G2D_SRC_BASE_ADDR 0x0304
+#define G2D_SRC_COLOR_MODE 0x030C
+#define G2D_SRC_LEFT_TOP 0x0310
+#define G2D_SRC_RIGHT_BOTTOM 0x0314
#define G2D_SRC_PLANE2_BASE_ADDR 0x0318
#define G2D_DST_BASE_ADDR 0x0404
+#define G2D_DST_COLOR_MODE 0x040C
+#define G2D_DST_LEFT_TOP 0x0410
+#define G2D_DST_RIGHT_BOTTOM 0x0414
#define G2D_DST_PLANE2_BASE_ADDR 0x0418
#define G2D_PAT_BASE_ADDR 0x0500
#define G2D_MSK_BASE_ADDR 0x0520
#define G2D_DMA_LIST_DONE_COUNT_OFFSET 17
/* G2D_DMA_HOLD_CMD */
-#define G2D_USET_HOLD (1 << 2)
+#define G2D_USER_HOLD (1 << 2)
#define G2D_LIST_HOLD (1 << 1)
#define G2D_BITBLT_HOLD (1 << 0)
#define G2D_START_NHOLT (1 << 1)
#define G2D_START_BITBLT (1 << 0)
+/* buffer color format */
+#define G2D_FMT_XRGB8888 0
+#define G2D_FMT_ARGB8888 1
+#define G2D_FMT_RGB565 2
+#define G2D_FMT_XRGB1555 3
+#define G2D_FMT_ARGB1555 4
+#define G2D_FMT_XRGB4444 5
+#define G2D_FMT_ARGB4444 6
+#define G2D_FMT_PACKED_RGB888 7
+#define G2D_FMT_A8 11
+#define G2D_FMT_L8 12
+
+/* buffer valid length */
+#define G2D_LEN_MIN 1
+#define G2D_LEN_MAX 8000
+
#define G2D_CMDLIST_SIZE (PAGE_SIZE / 4)
#define G2D_CMDLIST_NUM 64
#define G2D_CMDLIST_POOL_SIZE (G2D_CMDLIST_SIZE * G2D_CMDLIST_NUM)
#define G2D_CMDLIST_DATA_NUM (G2D_CMDLIST_SIZE / sizeof(u32) - 2)
-#define MAX_BUF_ADDR_NR 6
-
/* maximum buffer pool size of userptr is 64MB as default */
#define MAX_POOL (64 * 1024 * 1024)
BUF_TYPE_USERPTR,
};
+enum g2d_reg_type {
+ REG_TYPE_NONE = -1,
+ REG_TYPE_SRC,
+ REG_TYPE_SRC_PLANE2,
+ REG_TYPE_DST,
+ REG_TYPE_DST_PLANE2,
+ REG_TYPE_PAT,
+ REG_TYPE_MSK,
+ MAX_REG_TYPE_NR
+};
+
/* cmdlist data structure */
struct g2d_cmdlist {
u32 head;
u32 last; /* last data offset */
};
+/*
+ * A structure describing a buffer
+ *
+ * @format: color format
+ * @left_x: the x coordinate of the top left corner
+ * @top_y: the y coordinate of the top left corner
+ * @right_x: the x coordinate of the bottom right corner
+ * @bottom_y: the y coordinate of the bottom right corner
+ *
+ */
+struct g2d_buf_desc {
+ unsigned int format;
+ unsigned int left_x;
+ unsigned int top_y;
+ unsigned int right_x;
+ unsigned int bottom_y;
+};
+
+/*
+ * A structure holding buffer information
+ *
+ * @map_nr: the number of mapped buffers
+ * @reg_types: stores the register type in the order of the requested commands
+ * @handles: stores the buffer handle at its reg_type position
+ * @types: stores the buffer type at its reg_type position
+ * @descs: stores the buffer description at its reg_type position
+ *
+ */
+struct g2d_buf_info {
+ unsigned int map_nr;
+ enum g2d_reg_type reg_types[MAX_REG_TYPE_NR];
+ unsigned long handles[MAX_REG_TYPE_NR];
+ unsigned int types[MAX_REG_TYPE_NR];
+ struct g2d_buf_desc descs[MAX_REG_TYPE_NR];
+};
+
struct drm_exynos_pending_g2d_event {
struct drm_pending_event base;
struct drm_exynos_g2d_event event;
bool in_pool;
bool out_of_list;
};
-
struct g2d_cmdlist_node {
struct list_head list;
struct g2d_cmdlist *cmdlist;
- unsigned int map_nr;
- unsigned long handles[MAX_BUF_ADDR_NR];
- unsigned int obj_type[MAX_BUF_ADDR_NR];
dma_addr_t dma_addr;
+ struct g2d_buf_info buf_info;
struct drm_exynos_pending_g2d_event *event;
};
struct exynos_drm_subdrv *subdrv = &g2d->subdrv;
int nr;
int ret;
+ struct g2d_buf_info *buf_info;
init_dma_attrs(&g2d->cmdlist_dma_attrs);
dma_set_attr(DMA_ATTR_WRITE_COMBINE, &g2d->cmdlist_dma_attrs);
}
for (nr = 0; nr < G2D_CMDLIST_NUM; nr++) {
+ unsigned int i;
+
node[nr].cmdlist =
g2d->cmdlist_pool_virt + nr * G2D_CMDLIST_SIZE;
node[nr].dma_addr =
g2d->cmdlist_pool + nr * G2D_CMDLIST_SIZE;
+ buf_info = &node[nr].buf_info;
+ for (i = 0; i < MAX_REG_TYPE_NR; i++)
+ buf_info->reg_types[i] = REG_TYPE_NONE;
+
list_add_tail(&node[nr].list, &g2d->free_cmdlist);
}
DMA_BIDIRECTIONAL);
if (ret < 0) {
DRM_ERROR("failed to map sgt with dma region.\n");
- goto err_free_sgt;
+ goto err_sg_free_table;
}
g2d_userptr->dma_addr = sgt->sgl[0].dma_address;
return &g2d_userptr->dma_addr;
-err_free_sgt:
+err_sg_free_table:
sg_free_table(sgt);
+
+err_free_sgt:
kfree(sgt);
sgt = NULL;
g2d->current_pool = 0;
}
+static enum g2d_reg_type g2d_get_reg_type(int reg_offset)
+{
+ enum g2d_reg_type reg_type;
+
+ switch (reg_offset) {
+ case G2D_SRC_BASE_ADDR:
+ case G2D_SRC_COLOR_MODE:
+ case G2D_SRC_LEFT_TOP:
+ case G2D_SRC_RIGHT_BOTTOM:
+ reg_type = REG_TYPE_SRC;
+ break;
+ case G2D_SRC_PLANE2_BASE_ADDR:
+ reg_type = REG_TYPE_SRC_PLANE2;
+ break;
+ case G2D_DST_BASE_ADDR:
+ case G2D_DST_COLOR_MODE:
+ case G2D_DST_LEFT_TOP:
+ case G2D_DST_RIGHT_BOTTOM:
+ reg_type = REG_TYPE_DST;
+ break;
+ case G2D_DST_PLANE2_BASE_ADDR:
+ reg_type = REG_TYPE_DST_PLANE2;
+ break;
+ case G2D_PAT_BASE_ADDR:
+ reg_type = REG_TYPE_PAT;
+ break;
+ case G2D_MSK_BASE_ADDR:
+ reg_type = REG_TYPE_MSK;
+ break;
+ default:
+ reg_type = REG_TYPE_NONE;
+ DRM_ERROR("Unknown register offset![%d]\n", reg_offset);
+ break;
+	}
+
+ return reg_type;
+}
+
+static unsigned long g2d_get_buf_bpp(unsigned int format)
+{
+ unsigned long bpp;
+
+ switch (format) {
+ case G2D_FMT_XRGB8888:
+ case G2D_FMT_ARGB8888:
+ bpp = 4;
+ break;
+ case G2D_FMT_RGB565:
+ case G2D_FMT_XRGB1555:
+ case G2D_FMT_ARGB1555:
+ case G2D_FMT_XRGB4444:
+ case G2D_FMT_ARGB4444:
+ bpp = 2;
+ break;
+ case G2D_FMT_PACKED_RGB888:
+ bpp = 3;
+ break;
+ default:
+ bpp = 1;
+ break;
+ }
+
+ return bpp;
+}
+
+static bool g2d_check_buf_desc_is_valid(struct g2d_buf_desc *buf_desc,
+ enum g2d_reg_type reg_type,
+ unsigned long size)
+{
+ unsigned int width, height;
+ unsigned long area;
+
+ /*
+	 * Check source and destination buffers only;
+	 * all other buffer types are always considered valid.
+ */
+ if (reg_type != REG_TYPE_SRC && reg_type != REG_TYPE_DST)
+ return true;
+
+ width = buf_desc->right_x - buf_desc->left_x;
+ if (width < G2D_LEN_MIN || width > G2D_LEN_MAX) {
+ DRM_ERROR("width[%u] is out of range!\n", width);
+ return false;
+ }
+
+ height = buf_desc->bottom_y - buf_desc->top_y;
+ if (height < G2D_LEN_MIN || height > G2D_LEN_MAX) {
+ DRM_ERROR("height[%u] is out of range!\n", height);
+ return false;
+ }
+
+ area = (unsigned long)width * (unsigned long)height *
+ g2d_get_buf_bpp(buf_desc->format);
+ if (area > size) {
+ DRM_ERROR("area[%lu] is out of range[%lu]!\n", area, size);
+ return false;
+ }
+
+ return true;
+}
+
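To make the bounds check concrete: an XRGB8888 rectangle from (0,0) to (100,100) has a width and height of 100 (both within the permitted 1..8000 range) and needs 100 * 100 * 4 = 40000 bytes, so a backing GEM or userptr buffer smaller than that is rejected before the command list is mapped.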
static int g2d_map_cmdlist_gem(struct g2d_data *g2d,
struct g2d_cmdlist_node *node,
struct drm_device *drm_dev,
struct drm_file *file)
{
struct g2d_cmdlist *cmdlist = node->cmdlist;
+ struct g2d_buf_info *buf_info = &node->buf_info;
int offset;
+ int ret;
int i;
- for (i = 0; i < node->map_nr; i++) {
+ for (i = 0; i < buf_info->map_nr; i++) {
+ struct g2d_buf_desc *buf_desc;
+ enum g2d_reg_type reg_type;
+ int reg_pos;
unsigned long handle;
dma_addr_t *addr;
- offset = cmdlist->last - (i * 2 + 1);
- handle = cmdlist->data[offset];
+ reg_pos = cmdlist->last - 2 * (i + 1);
+
+ offset = cmdlist->data[reg_pos];
+ handle = cmdlist->data[reg_pos + 1];
+
+ reg_type = g2d_get_reg_type(offset);
+ if (reg_type == REG_TYPE_NONE) {
+ ret = -EFAULT;
+ goto err;
+ }
+
+ buf_desc = &buf_info->descs[reg_type];
+
+ if (buf_info->types[reg_type] == BUF_TYPE_GEM) {
+ unsigned long size;
+
+ size = exynos_drm_gem_get_size(drm_dev, handle, file);
+ if (!size) {
+ ret = -EFAULT;
+ goto err;
+ }
+
+ if (!g2d_check_buf_desc_is_valid(buf_desc, reg_type,
+ size)) {
+ ret = -EFAULT;
+ goto err;
+ }
- if (node->obj_type[i] == BUF_TYPE_GEM) {
addr = exynos_drm_gem_get_dma_addr(drm_dev, handle,
file);
if (IS_ERR(addr)) {
- node->map_nr = i;
- return -EFAULT;
+ ret = -EFAULT;
+ goto err;
}
} else {
struct drm_exynos_g2d_userptr g2d_userptr;
if (copy_from_user(&g2d_userptr, (void __user *)handle,
sizeof(struct drm_exynos_g2d_userptr))) {
- node->map_nr = i;
- return -EFAULT;
+ ret = -EFAULT;
+ goto err;
+ }
+
+ if (!g2d_check_buf_desc_is_valid(buf_desc, reg_type,
+ g2d_userptr.size)) {
+ ret = -EFAULT;
+ goto err;
}
addr = g2d_userptr_get_dma_addr(drm_dev,
file,
&handle);
if (IS_ERR(addr)) {
- node->map_nr = i;
- return -EFAULT;
+ ret = -EFAULT;
+ goto err;
}
}
- cmdlist->data[offset] = *addr;
- node->handles[i] = handle;
+ cmdlist->data[reg_pos + 1] = *addr;
+ buf_info->reg_types[i] = reg_type;
+ buf_info->handles[reg_type] = handle;
}
return 0;
+
+err:
+ buf_info->map_nr = i;
+ return ret;
}
static void g2d_unmap_cmdlist_gem(struct g2d_data *g2d,
struct drm_file *filp)
{
struct exynos_drm_subdrv *subdrv = &g2d->subdrv;
+ struct g2d_buf_info *buf_info = &node->buf_info;
int i;
- for (i = 0; i < node->map_nr; i++) {
- unsigned long handle = node->handles[i];
+ for (i = 0; i < buf_info->map_nr; i++) {
+ struct g2d_buf_desc *buf_desc;
+ enum g2d_reg_type reg_type;
+ unsigned long handle;
+
+ reg_type = buf_info->reg_types[i];
+
+ buf_desc = &buf_info->descs[reg_type];
+ handle = buf_info->handles[reg_type];
- if (node->obj_type[i] == BUF_TYPE_GEM)
+ if (buf_info->types[reg_type] == BUF_TYPE_GEM)
exynos_drm_gem_put_dma_addr(subdrv->drm_dev, handle,
filp);
else
g2d_userptr_put_dma_addr(subdrv->drm_dev, handle,
false);
- node->handles[i] = 0;
+ buf_info->reg_types[i] = REG_TYPE_NONE;
+ buf_info->handles[reg_type] = 0;
+ buf_info->types[reg_type] = 0;
+ memset(buf_desc, 0x00, sizeof(*buf_desc));
}
- node->map_nr = 0;
+ buf_info->map_nr = 0;
}
static void g2d_dma_start(struct g2d_data *g2d,
pm_runtime_get_sync(g2d->dev);
clk_enable(g2d->gate_clk);
- /* interrupt enable */
- writel_relaxed(G2D_INTEN_ACF | G2D_INTEN_UCF | G2D_INTEN_GCF,
- g2d->regs + G2D_INTEN);
-
writel_relaxed(node->dma_addr, g2d->regs + G2D_DMA_SFR_BASE_ADDR);
writel_relaxed(G2D_DMA_START, g2d->regs + G2D_DMA_COMMAND);
}
struct g2d_data *g2d = container_of(work, struct g2d_data,
runqueue_work);
-
mutex_lock(&g2d->runqueue_mutex);
clk_disable(g2d->gate_clk);
pm_runtime_put_sync(g2d->dev);
int i;
for (i = 0; i < nr; i++) {
- index = cmdlist->last - 2 * (i + 1);
+ struct g2d_buf_info *buf_info = &node->buf_info;
+ struct g2d_buf_desc *buf_desc;
+ enum g2d_reg_type reg_type;
+ unsigned long value;
- if (for_addr) {
- /* check userptr buffer type. */
- reg_offset = (cmdlist->data[index] &
- ~0x7fffffff) >> 31;
- if (reg_offset) {
- node->obj_type[i] = BUF_TYPE_USERPTR;
- cmdlist->data[index] &= ~G2D_BUF_USERPTR;
- }
- }
+ index = cmdlist->last - 2 * (i + 1);
reg_offset = cmdlist->data[index] & ~0xfffff000;
-
if (reg_offset < G2D_VALID_START || reg_offset > G2D_VALID_END)
goto err;
if (reg_offset % 4)
if (!for_addr)
goto err;
- if (node->obj_type[i] != BUF_TYPE_USERPTR)
- node->obj_type[i] = BUF_TYPE_GEM;
+ reg_type = g2d_get_reg_type(reg_offset);
+ if (reg_type == REG_TYPE_NONE)
+ goto err;
+
+ /* check userptr buffer type. */
+ if ((cmdlist->data[index] & ~0x7fffffff) >> 31) {
+ buf_info->types[reg_type] = BUF_TYPE_USERPTR;
+ cmdlist->data[index] &= ~G2D_BUF_USERPTR;
+ } else
+ buf_info->types[reg_type] = BUF_TYPE_GEM;
+ break;
+ case G2D_SRC_COLOR_MODE:
+ case G2D_DST_COLOR_MODE:
+ if (for_addr)
+ goto err;
+
+ reg_type = g2d_get_reg_type(reg_offset);
+ if (reg_type == REG_TYPE_NONE)
+ goto err;
+
+ buf_desc = &buf_info->descs[reg_type];
+ value = cmdlist->data[index + 1];
+
+ buf_desc->format = value & 0xf;
+ break;
+ case G2D_SRC_LEFT_TOP:
+ case G2D_DST_LEFT_TOP:
+ if (for_addr)
+ goto err;
+
+ reg_type = g2d_get_reg_type(reg_offset);
+ if (reg_type == REG_TYPE_NONE)
+ goto err;
+
+ buf_desc = &buf_info->descs[reg_type];
+ value = cmdlist->data[index + 1];
+
+ buf_desc->left_x = value & 0x1fff;
+ buf_desc->top_y = (value & 0x1fff0000) >> 16;
+ break;
+ case G2D_SRC_RIGHT_BOTTOM:
+ case G2D_DST_RIGHT_BOTTOM:
+ if (for_addr)
+ goto err;
+
+ reg_type = g2d_get_reg_type(reg_offset);
+ if (reg_type == REG_TYPE_NONE)
+ goto err;
+
+ buf_desc = &buf_info->descs[reg_type];
+ value = cmdlist->data[index + 1];
+
+ buf_desc->right_x = value & 0x1fff;
+ buf_desc->bottom_y = (value & 0x1fff0000) >> 16;
break;
default:
if (for_addr)
cmdlist->data[cmdlist->last++] = G2D_SRC_BASE_ADDR;
cmdlist->data[cmdlist->last++] = 0;
+ /*
+	 * If the user wants a G2D interrupt event once the current command
+	 * list has finished executing, the 'LIST_HOLD' command must be set
+	 * in DMA_HOLD_CMD_REG and the GCF bit in the INTEN register.
+	 * Otherwise only the ACF bit is set in the INTEN register so that
+	 * a single interrupt occurs after all command lists have completed.
+ */
if (node->event) {
+ cmdlist->data[cmdlist->last++] = G2D_INTEN;
+ cmdlist->data[cmdlist->last++] = G2D_INTEN_ACF | G2D_INTEN_GCF;
cmdlist->data[cmdlist->last++] = G2D_DMA_HOLD_CMD;
cmdlist->data[cmdlist->last++] = G2D_LIST_HOLD;
+ } else {
+ cmdlist->data[cmdlist->last++] = G2D_INTEN;
+ cmdlist->data[cmdlist->last++] = G2D_INTEN_ACF;
}
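	/*
	 * Resulting command-list tail (sketch). With an event requested:
	 *   G2D_INTEN        <- G2D_INTEN_ACF | G2D_INTEN_GCF
	 *   G2D_DMA_HOLD_CMD <- G2D_LIST_HOLD
	 * without one:
	 *   G2D_INTEN        <- G2D_INTEN_ACF
	 * Both variants are followed by the two G2D_BITBLT_START words
	 * checked just below.
	 */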
/* Check size of cmdlist: last 2 is about G2D_BITBLT_START */
if (ret < 0)
goto err_free_event;
- node->map_nr = req->cmd_buf_nr;
+ node->buf_info.map_nr = req->cmd_buf_nr;
if (req->cmd_buf_nr) {
struct drm_exynos_g2d_cmd *cmd_buf;
exynos_gem_obj = NULL;
}
+unsigned long exynos_drm_gem_get_size(struct drm_device *dev,
+ unsigned int gem_handle,
+ struct drm_file *file_priv)
+{
+ struct exynos_drm_gem_obj *exynos_gem_obj;
+ struct drm_gem_object *obj;
+
+ obj = drm_gem_object_lookup(dev, file_priv, gem_handle);
+ if (!obj) {
+ DRM_ERROR("failed to lookup gem object.\n");
+ return 0;
+ }
+
+ exynos_gem_obj = to_exynos_gem_obj(obj);
+
+ drm_gem_object_unreference_unlocked(obj);
+
+ return exynos_gem_obj->buffer->size;
+}
+
struct exynos_drm_gem_obj *exynos_drm_gem_init(struct drm_device *dev,
unsigned long size)
{
int exynos_drm_gem_get_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
+/* get the buffer size for a gem handle. */
+unsigned long exynos_drm_gem_get_size(struct drm_device *dev,
+ unsigned int gem_handle,
+ struct drm_file *file_priv);
+
/* initialize gem object. */
int exynos_drm_gem_init_object(struct drm_gem_object *obj);
}
edid_len = (1 + ctx->raw_edid->extensions) * EDID_LENGTH;
- edid = kzalloc(edid_len, GFP_KERNEL);
+ edid = kmemdup(ctx->raw_edid, edid_len, GFP_KERNEL);
if (!edid) {
DRM_DEBUG_KMS("failed to allocate edid\n");
return ERR_PTR(-ENOMEM);
}
- memcpy(edid, ctx->raw_edid, edid_len);
return edid;
}
return -EINVAL;
}
edid_len = (1 + raw_edid->extensions) * EDID_LENGTH;
- ctx->raw_edid = kzalloc(edid_len, GFP_KERNEL);
+ ctx->raw_edid = kmemdup(raw_edid, edid_len, GFP_KERNEL);
if (!ctx->raw_edid) {
DRM_DEBUG_KMS("failed to allocate raw_edid.\n");
return -ENOMEM;
}
- memcpy(ctx->raw_edid, raw_edid, edid_len);
} else {
/*
* with connection = 0, free raw_edid
mixer_ctx->win_data[win].enabled = false;
}
-int mixer_check_timing(void *ctx, struct fb_videomode *timing)
+static int mixer_check_timing(void *ctx, struct fb_videomode *timing)
{
struct mixer_context *mixer_ctx = ctx;
u32 w, h;
static void
describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
{
- seq_printf(m, "%p: %s%s %8zdKiB %02x %02x %d %d %d%s%s%s",
+ seq_printf(m, "%pK: %s%s %8zdKiB %02x %02x %d %d %d%s%s%s",
&obj->base,
get_pin_flag(obj),
get_tiling_flag(obj),
"Enable Haswell and ValleyView Support. "
"(default: false)");
+int i915_disable_power_well __read_mostly = 0;
+module_param_named(disable_power_well, i915_disable_power_well, int, 0600);
+MODULE_PARM_DESC(disable_power_well,
+ "Disable the power well when possible (default: false)");
+
static struct drm_driver driver;
extern int intel_agp_enabled;
extern bool i915_enable_hangcheck __read_mostly;
extern int i915_enable_ppgtt __read_mostly;
extern unsigned int i915_preliminary_hw_support __read_mostly;
+extern int i915_disable_power_well __read_mostly;
extern int i915_suspend(struct drm_device *dev, pm_message_t state);
extern int i915_resume(struct drm_device *dev);
int count)
{
int i;
+ int relocs_total = 0;
+ int relocs_max = INT_MAX / sizeof(struct drm_i915_gem_relocation_entry);
for (i = 0; i < count; i++) {
char __user *ptr = (char __user *)(uintptr_t)exec[i].relocs_ptr;
if (exec[i].flags & __EXEC_OBJECT_UNKNOWN_FLAGS)
return -EINVAL;
- /* First check for malicious input causing overflow */
- if (exec[i].relocation_count >
- INT_MAX / sizeof(struct drm_i915_gem_relocation_entry))
+ /* First check for malicious input causing overflow in
+ * the worst case where we need to allocate the entire
+ * relocation tree as a single array.
+ */
+ if (exec[i].relocation_count > relocs_max - relocs_total)
return -EINVAL;
+ relocs_total += exec[i].relocation_count;
length = exec[i].relocation_count *
sizeof(struct drm_i915_gem_relocation_entry);
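The rewritten check bounds the running total rather than each term alone: comparing relocation_count against the remaining headroom (relocs_max - relocs_total) can never overflow, whereas summing first and testing afterwards could wrap. The pattern in isolation (a sketch; the names are hypothetical):

    /* Accumulate counts with an overflow-proof bound check. */
    static int add_bounded(int *total, int count, int max)
    {
        if (count > max - *total)
            return -1;          /* would exceed the limit; no sum is formed */
        *total += count;
        return 0;
    }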
num_connectors++;
}
+ if (is_cpu_edp)
+ intel_crtc->cpu_transcoder = TRANSCODER_EDP;
+ else
+ intel_crtc->cpu_transcoder = pipe;
+
/* We are not sure yet this won't happen. */
WARN(!HAS_PCH_LPT(dev), "Unexpected PCH type %d\n",
INTEL_PCH_TYPE(dev));
int pipe = intel_crtc->pipe;
int ret;
- if (IS_HASWELL(dev) && intel_pipe_has_type(crtc, INTEL_OUTPUT_EDP))
- intel_crtc->cpu_transcoder = TRANSCODER_EDP;
- else
- intel_crtc->cpu_transcoder = pipe;
-
drm_vblank_pre_modeset(dev, pipe);
ret = dev_priv->display.crtc_mode_set(crtc, mode, adjusted_mode,
struct intel_link_m_n m_n;
int pipe = intel_crtc->pipe;
enum transcoder cpu_transcoder = intel_crtc->cpu_transcoder;
+ int target_clock;
/*
* Find the lane count in the intel_encoder private
}
}
+ target_clock = mode->clock;
+ for_each_encoder_on_crtc(dev, crtc, intel_encoder) {
+ if (intel_encoder->type == INTEL_OUTPUT_EDP) {
+ target_clock = intel_edp_target_clock(intel_encoder,
+ mode);
+ break;
+ }
+ }
+
/*
* Compute the GMCH and Link ratios. The '3' here is
* the number of bytes_per_pixel post-LUT, which we always
* set up for 8-bits of R/G/B, or 3 bytes total.
*/
intel_link_compute_m_n(intel_crtc->bpp, lane_count,
- mode->clock, adjusted_mode->clock, &m_n);
+ target_clock, adjusted_mode->clock, &m_n);
if (IS_HASWELL(dev)) {
I915_WRITE(PIPE_DATA_M1(cpu_transcoder),
for (i = 0; i < intel_dp->lane_count; i++)
if ((intel_dp->train_set[i] & DP_TRAIN_MAX_SWING_REACHED) == 0)
break;
- if (i == intel_dp->lane_count && voltage_tries == 5) {
+ if (i == intel_dp->lane_count) {
++loop_tries;
if (loop_tries == 5) {
DRM_DEBUG_KMS("too many full retries, give up\n");
algo->data = bus;
}
-#define HAS_GMBUS_IRQ(dev) (INTEL_INFO(dev)->gen >= 4)
+/*
+ * gmbus on gen4 seems to be able to generate legacy interrupts even when in MSI
+ * mode. This results in spurious interrupt warnings if the legacy IRQ number is
+ * shared with another device. The kernel then disables that interrupt source
+ * and so prevents the other device from working properly.
+ */
+#define HAS_GMBUS_IRQ(dev) (INTEL_INFO(dev)->gen >= 5)
static int
gmbus_wait_hw_status(struct drm_i915_private *dev_priv,
u32 gmbus2_status,
u32 gmbus2 = 0;
DEFINE_WAIT(wait);
+ if (!HAS_GMBUS_IRQ(dev_priv->dev))
+ gmbus4_irq_en = 0;
+
/* Important: The hw handles only the first bit, so set only one! Since
* we also need to check for NAKs besides the hw ready/idle signal, we
* need to wake up periodically and check that ourselves. */
if (dev_priv->backlight_level == 0)
dev_priv->backlight_level = intel_panel_get_max_backlight(dev);
- dev_priv->backlight_enabled = true;
- intel_panel_actually_set_backlight(dev, dev_priv->backlight_level);
-
if (INTEL_INFO(dev)->gen >= 4) {
uint32_t reg, tmp;
}
set_level:
- /* Check the current backlight level and try to set again if it's zero.
- * On some machines, BLC_PWM_CPU_CTL is cleared to zero automatically
- * when BLC_PWM_CPU_CTL2 and BLC_PWM_PCH_CTL1 are written.
+ /* Call below after setting BLC_PWM_CPU_CTL2 and BLC_PWM_PCH_CTL1.
+ * BLC_PWM_CPU_CTL may be cleared to zero automatically when these
+ * registers are set.
*/
- if (!intel_panel_get_backlight(dev))
- intel_panel_actually_set_backlight(dev, dev_priv->backlight_level);
+ dev_priv->backlight_enabled = true;
+ intel_panel_actually_set_backlight(dev, dev_priv->backlight_level);
}
static void intel_panel_init_backlight(struct drm_device *dev)
if (!IS_HASWELL(dev))
return;
+ if (!i915_disable_power_well && !enable)
+ return;
+
tmp = I915_READ(HSW_PWR_WELL_DRIVER);
is_enabled = tmp & HSW_PWR_WELL_STATE;
enable_requested = tmp & HSW_PWR_WELL_ENABLE;
m = n = p = 0;
vcomax = 800000;
vcomin = 400000;
- pllreffreq = 3333;
+ pllreffreq = 33333;
delta = 0xffffffff;
permitteddelta = clock * 5 / 1000;
- for (testp = 16; testp > 0; testp--) {
+ for (testp = 16; testp > 0; testp >>= 1) {
if (clock * testp > vcomax)
continue;
if (clock * testp < vcomin)
continue;
for (testm = 1; testm < 33; testm++) {
- for (testn = 1; testn < 257; testn++) {
+ for (testn = 17; testn < 257; testn++) {
computed = (pllreffreq * testn) /
(testm * testp);
if (computed > clock)
if (tmpdelta < delta) {
delta = tmpdelta;
n = testn - 1;
- m = (testm - 1) | ((n >> 1) & 0x80);
+ m = (testm - 1);
p = testp - 1;
}
if ((clock * testp) >= 600000)
- p |= 80;
+ p |= 0x80;
}
}
}
struct nouveau_object *parent = NULL;
struct nouveau_object *namedb = NULL;
struct nouveau_handle *handle = NULL;
- int ret = -EINVAL;
parent = nouveau_handle_ref(client, _parent);
if (!parent)
}
nouveau_object_ref(NULL, &parent);
- return ret;
+ return handle ? 0 : -EINVAL;
}
int
#include <core/device.h>
#include <core/subdev.h>
-enum nouveau_therm_mode {
+enum nouveau_therm_fan_mode {
NOUVEAU_THERM_CTRL_NONE = 0,
NOUVEAU_THERM_CTRL_MANUAL = 1,
NOUVEAU_THERM_CTRL_AUTO = 2,
}
int
-nouveau_therm_mode(struct nouveau_therm *therm, int mode)
+nouveau_therm_fan_mode(struct nouveau_therm *therm, int mode)
{
struct nouveau_therm_priv *priv = (void *)therm;
struct nouveau_device *device = nv_device(therm);
(mode != NOUVEAU_THERM_CTRL_NONE && device->card_type >= NV_C0))
return -EINVAL;
+ /* do not allow automatic fan management if the thermal sensor is
+ * not available */
+ if (priv->mode == 2 && therm->temp_get(therm) < 0)
+ return -EINVAL;
+
if (priv->mode == mode)
return 0;
- nv_info(therm, "Thermal management: %s\n", name[mode]);
+ nv_info(therm, "fan management: %s\n", name[mode]);
nouveau_therm_update(therm, mode);
return 0;
}
priv->fan->bios.max_duty = value;
return 0;
case NOUVEAU_THERM_ATTR_FAN_MODE:
- return nouveau_therm_mode(therm, value);
+ return nouveau_therm_fan_mode(therm, value);
case NOUVEAU_THERM_ATTR_THRS_FAN_BOOST:
priv->bios_sensor.thrs_fan_boost.temp = value;
priv->sensor.program_alarms(therm);
return ret;
if (priv->suspend >= 0)
- nouveau_therm_mode(therm, priv->mode);
+ nouveau_therm_fan_mode(therm, priv->mode);
priv->sensor.program_alarms(therm);
return 0;
}
int
nouveau_therm_preinit(struct nouveau_therm *therm)
{
- nouveau_therm_ic_ctor(therm);
nouveau_therm_sensor_ctor(therm);
+ nouveau_therm_ic_ctor(therm);
nouveau_therm_fan_ctor(therm);
- nouveau_therm_mode(therm, NOUVEAU_THERM_CTRL_NONE);
+ nouveau_therm_fan_mode(therm, NOUVEAU_THERM_CTRL_NONE);
+ nouveau_therm_sensor_preinit(therm);
return 0;
}
struct i2c_board_info *info)
{
struct nouveau_therm_priv *priv = (void *)nouveau_therm(i2c);
+ struct nvbios_therm_sensor *sensor = &priv->bios_sensor;
struct i2c_client *client;
request_module("%s%s", I2C_MODULE_PREFIX, info->type);
}
nv_info(priv,
- "Found an %s at address 0x%x (controlled by lm_sensors)\n",
- info->type, info->addr);
+ "Found an %s at address 0x%x (controlled by lm_sensors, "
+ "temp offset %+i C)\n",
+ info->type, info->addr, sensor->offset_constant);
priv->ic = client;
return true;
struct nouveau_therm_priv base;
};
+enum nv40_sensor_style { INVALID_STYLE = -1, OLD_STYLE = 0, NEW_STYLE = 1 };
+
+static enum nv40_sensor_style
+nv40_sensor_style(struct nouveau_therm *therm)
+{
+ struct nouveau_device *device = nv_device(therm);
+
+ switch (device->chipset) {
+ case 0x43:
+ case 0x44:
+ case 0x4a:
+ case 0x47:
+ return OLD_STYLE;
+
+ case 0x46:
+ case 0x49:
+ case 0x4b:
+ case 0x4e:
+ case 0x4c:
+ case 0x67:
+ case 0x68:
+ case 0x63:
+ return NEW_STYLE;
+ default:
+ return INVALID_STYLE;
+ }
+}
+
static int
nv40_sensor_setup(struct nouveau_therm *therm)
{
- struct nouveau_device *device = nv_device(therm);
+ enum nv40_sensor_style style = nv40_sensor_style(therm);
/* enable ADC readout and disable the ALARM threshold */
- if (device->chipset >= 0x46) {
+ if (style == NEW_STYLE) {
nv_mask(therm, 0x15b8, 0x80000000, 0);
nv_wr32(therm, 0x15b0, 0x80003fff);
- mdelay(10); /* wait for the temperature to stabilize */
+ mdelay(20); /* wait for the temperature to stabilize */
return nv_rd32(therm, 0x15b4) & 0x3fff;
- } else {
+ } else if (style == OLD_STYLE) {
nv_wr32(therm, 0x15b0, 0xff);
+ mdelay(20); /* wait for the temperature to stabilize */
return nv_rd32(therm, 0x15b4) & 0xff;
- }
+ } else
+ return -ENODEV;
}
static int
nv40_temp_get(struct nouveau_therm *therm)
{
struct nouveau_therm_priv *priv = (void *)therm;
- struct nouveau_device *device = nv_device(therm);
struct nvbios_therm_sensor *sensor = &priv->bios_sensor;
+ enum nv40_sensor_style style = nv40_sensor_style(therm);
int core_temp;
- if (device->chipset >= 0x46) {
+ if (style == NEW_STYLE) {
nv_wr32(therm, 0x15b0, 0x80003fff);
core_temp = nv_rd32(therm, 0x15b4) & 0x3fff;
- } else {
+ } else if (style == OLD_STYLE) {
nv_wr32(therm, 0x15b0, 0xff);
core_temp = nv_rd32(therm, 0x15b4) & 0xff;
- }
-
- /* Setup the sensor if the temperature is 0 */
- if (core_temp == 0)
- core_temp = nv40_sensor_setup(therm);
+ } else
+ return -ENODEV;
- if (sensor->slope_div == 0)
- sensor->slope_div = 1;
- if (sensor->offset_den == 0)
- sensor->offset_den = 1;
- if (sensor->slope_mult < 1)
- sensor->slope_mult = 1;
+	/* if the slope or the offset is unset, do not use the sensor */
+ if (!sensor->slope_div || !sensor->slope_mult ||
+ !sensor->offset_num || !sensor->offset_den)
+ return -ENODEV;
core_temp = core_temp * sensor->slope_mult / sensor->slope_div;
core_temp = core_temp + sensor->offset_num / sensor->offset_den;
core_temp = core_temp + sensor->offset_constant - 8;
+ /* reserve negative temperatures for errors */
+ if (core_temp < 0)
+ core_temp = 0;
+
return core_temp;
}
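As a worked example with hypothetical BIOS values slope_mult = 1, slope_div = 1, offset_num = 0, offset_den = 1 and offset_constant = 60, a raw reading of 40 becomes 40 * 1 / 1 + 0 / 1 + 60 - 8 = 92 C; any negative result is clamped to 0 so that negative return values stay reserved for error codes.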
struct i2c_client *ic;
};
-int nouveau_therm_mode(struct nouveau_therm *therm, int mode);
+int nouveau_therm_fan_mode(struct nouveau_therm *therm, int mode);
int nouveau_therm_attr_get(struct nouveau_therm *therm,
enum nouveau_therm_attr_type type);
int nouveau_therm_attr_set(struct nouveau_therm *therm,
int nouveau_therm_preinit(struct nouveau_therm *);
+void nouveau_therm_sensor_preinit(struct nouveau_therm *);
void nouveau_therm_sensor_set_threshold_state(struct nouveau_therm *therm,
enum nouveau_therm_thrs thrs,
enum nouveau_therm_thrs_state st);
{
struct nouveau_therm_priv *priv = (void *)therm;
- priv->bios_sensor.slope_mult = 1;
- priv->bios_sensor.slope_div = 1;
- priv->bios_sensor.offset_num = 0;
- priv->bios_sensor.offset_den = 1;
priv->bios_sensor.offset_constant = 0;
priv->bios_sensor.thrs_fan_boost.temp = 90;
struct nouveau_therm_priv *priv = (void *)therm;
struct nvbios_therm_sensor *s = &priv->bios_sensor;
- if (!priv->bios_sensor.slope_div)
- priv->bios_sensor.slope_div = 1;
- if (!priv->bios_sensor.offset_den)
- priv->bios_sensor.offset_den = 1;
-
/* enforce a minimum hysteresis on thresholds */
s->thrs_fan_boost.hysteresis = max_t(u8, s->thrs_fan_boost.hysteresis, 2);
s->thrs_down_clock.hysteresis = max_t(u8, s->thrs_down_clock.hysteresis, 2);
const char *thresholds[] = {
"fanboost", "downclock", "critical", "shutdown"
};
- uint8_t temperature = therm->temp_get(therm);
+ int temperature = therm->temp_get(therm);
if (thrs < 0 || thrs > 3)
return;
if (dir == NOUVEAU_THERM_THRS_FALLING)
- nv_info(therm, "temperature (%u C) went below the '%s' threshold\n",
+ nv_info(therm, "temperature (%i C) went below the '%s' threshold\n",
temperature, thresolds[thrs]);
else
- nv_info(therm, "temperature (%u C) hit the '%s' threshold\n",
+ nv_info(therm, "temperature (%i C) hit the '%s' threshold\n",
temperature, thresolds[thrs]);
active = (dir == NOUVEAU_THERM_THRS_RISING);
case NOUVEAU_THERM_THRS_FANBOOST:
if (active) {
nouveau_therm_fan_set(therm, true, 100);
- nouveau_therm_mode(therm, NOUVEAU_THERM_CTRL_AUTO);
+ nouveau_therm_fan_mode(therm, NOUVEAU_THERM_CTRL_AUTO);
}
break;
case NOUVEAU_THERM_THRS_DOWNCLOCK:
NOUVEAU_THERM_THRS_SHUTDOWN);
/* schedule the next poll in one second */
- if (list_empty(&alarm->head))
+ if (therm->temp_get(therm) >= 0 && list_empty(&alarm->head))
ptimer->alarm(ptimer, 1000 * 1000 * 1000, alarm);
spin_unlock_irqrestore(&priv->sensor.alarm_program_lock, flags);
alarm_timer_callback(&priv->sensor.therm_poll_alarm);
}
+void
+nouveau_therm_sensor_preinit(struct nouveau_therm *therm)
+{
+ const char *sensor_avail = "yes";
+
+ if (therm->temp_get(therm) < 0)
+ sensor_avail = "no";
+
+ nv_info(therm, "internal sensor: %s\n", sensor_avail);
+}
+
int
nouveau_therm_sensor_ctor(struct nouveau_therm *therm)
{
struct drm_device *dev = dev_get_drvdata(d);
struct nouveau_drm *drm = nouveau_drm(dev);
struct nouveau_therm *therm = nouveau_therm(drm->device);
+ int temp = therm->temp_get(therm);
- return snprintf(buf, PAGE_SIZE, "%d\n", therm->temp_get(therm) * 1000);
+ if (temp < 0)
+ return temp;
+
+ return snprintf(buf, PAGE_SIZE, "%d\n", temp * 1000);
}
static SENSOR_DEVICE_ATTR(temp1_input, S_IRUGO, nouveau_hwmon_show_temp,
NULL, 0);
nouveau_hwmon_get_pwm1_max,
nouveau_hwmon_set_pwm1_max, 0);
-static struct attribute *hwmon_attributes[] = {
+static struct attribute *hwmon_default_attributes[] = {
+ &sensor_dev_attr_name.dev_attr.attr,
+ &sensor_dev_attr_update_rate.dev_attr.attr,
+ NULL
+};
+static struct attribute *hwmon_temp_attributes[] = {
&sensor_dev_attr_temp1_input.dev_attr.attr,
&sensor_dev_attr_temp1_auto_point1_pwm.dev_attr.attr,
&sensor_dev_attr_temp1_auto_point1_temp.dev_attr.attr,
&sensor_dev_attr_temp1_crit_hyst.dev_attr.attr,
&sensor_dev_attr_temp1_emergency.dev_attr.attr,
&sensor_dev_attr_temp1_emergency_hyst.dev_attr.attr,
- &sensor_dev_attr_name.dev_attr.attr,
- &sensor_dev_attr_update_rate.dev_attr.attr,
NULL
};
static struct attribute *hwmon_fan_rpm_attributes[] = {
NULL
};
-static const struct attribute_group hwmon_attrgroup = {
- .attrs = hwmon_attributes,
+static const struct attribute_group hwmon_default_attrgroup = {
+ .attrs = hwmon_default_attributes,
+};
+static const struct attribute_group hwmon_temp_attrgroup = {
+ .attrs = hwmon_temp_attributes,
};
static const struct attribute_group hwmon_fan_rpm_attrgroup = {
.attrs = hwmon_fan_rpm_attributes,
}
dev_set_drvdata(hwmon_dev, dev);
- /* default sysfs entries */
- ret = sysfs_create_group(&hwmon_dev->kobj, &hwmon_attrgroup);
+ /* set the default attributes */
+ ret = sysfs_create_group(&hwmon_dev->kobj, &hwmon_default_attrgroup);
if (ret) {
if (ret)
goto error;
}
+ /* if the card has a working thermal sensor */
+ if (therm->temp_get(therm) >= 0) {
+ ret = sysfs_create_group(&hwmon_dev->kobj, &hwmon_temp_attrgroup);
+ if (ret) {
+ if (ret)
+ goto error;
+ }
+ }
+
/* if the card has a pwm fan */
/*XXX: incorrect, need better detection for this, some boards have
* the gpio entries for pwm fan control even when there's no
struct nouveau_pm *pm = nouveau_pm(dev);
if (pm->hwmon) {
- sysfs_remove_group(&pm->hwmon->kobj, &hwmon_attrgroup);
- sysfs_remove_group(&pm->hwmon->kobj,
- &hwmon_pwm_fan_attrgroup);
- sysfs_remove_group(&pm->hwmon->kobj,
- &hwmon_fan_rpm_attrgroup);
+ sysfs_remove_group(&pm->hwmon->kobj, &hwmon_default_attrgroup);
+ sysfs_remove_group(&pm->hwmon->kobj, &hwmon_temp_attrgroup);
+ sysfs_remove_group(&pm->hwmon->kobj, &hwmon_pwm_fan_attrgroup);
+ sysfs_remove_group(&pm->hwmon->kobj, &hwmon_fan_rpm_attrgroup);
hwmon_device_unregister(pm->hwmon);
}
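Splitting the single attribute array into a default group and a temperature group is what lets the probe path register the temperature files only when therm->temp_get() reports a working sensor. A sketch of the resulting registration pattern, assuming the two attribute_group objects declared above are visible (not the driver's exact code):

    #include <linux/types.h>
    #include <linux/sysfs.h>
    #include <linux/kobject.h>

    static int hwmon_register_groups(struct kobject *kobj, bool have_temp)
    {
    	int ret;

    	/* always-present entries: name, update_rate */
    	ret = sysfs_create_group(kobj, &hwmon_default_attrgroup);
    	if (ret)
    		return ret;

    	/* temperature entries only when the sensor actually works */
    	if (have_temp) {
    		ret = sysfs_create_group(kobj, &hwmon_temp_attrgroup);
    		if (ret)
    			sysfs_remove_group(kobj, &hwmon_default_attrgroup);
    	}
    	return ret;
    }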
swap_interval <<= 4;
if (swap_interval == 0)
swap_interval |= 0x100;
+ if (chan == NULL)
+ evo_sync(crtc->dev);
push = evo_wait(sync, 128);
if (unlikely(push == NULL))
sync->addr ^= 0x10;
sync->data++;
FIRE_RING (chan);
- } else {
- evo_sync(crtc->dev);
}
/* queue the flip */
(rdev->pdev->device == 0x9907) ||
(rdev->pdev->device == 0x9908) ||
(rdev->pdev->device == 0x9909) ||
+ (rdev->pdev->device == 0x990B) ||
+ (rdev->pdev->device == 0x990C) ||
+ (rdev->pdev->device == 0x990F) ||
(rdev->pdev->device == 0x9910) ||
- (rdev->pdev->device == 0x9917)) {
+ (rdev->pdev->device == 0x9917) ||
+ (rdev->pdev->device == 0x9999)) {
rdev->config.cayman.max_simds_per_se = 6;
rdev->config.cayman.max_backends_per_se = 2;
} else if ((rdev->pdev->device == 0x9903) ||
(rdev->pdev->device == 0x9904) ||
(rdev->pdev->device == 0x990A) ||
+ (rdev->pdev->device == 0x990D) ||
+ (rdev->pdev->device == 0x990E) ||
(rdev->pdev->device == 0x9913) ||
(rdev->pdev->device == 0x9918)) {
rdev->config.cayman.max_simds_per_se = 4;
(rdev->pdev->device == 0x9990) ||
(rdev->pdev->device == 0x9991) ||
(rdev->pdev->device == 0x9994) ||
+ (rdev->pdev->device == 0x9995) ||
+ (rdev->pdev->device == 0x9996) ||
+ (rdev->pdev->device == 0x999A) ||
(rdev->pdev->device == 0x99A0)) {
rdev->config.cayman.max_simds_per_se = 3;
rdev->config.cayman.max_backends_per_se = 1;
WREG32(DMA_TILING_CONFIG + DMA0_REGISTER_OFFSET, gb_addr_config);
WREG32(DMA_TILING_CONFIG + DMA1_REGISTER_OFFSET, gb_addr_config);
- tmp = gb_addr_config & NUM_PIPES_MASK;
- tmp = r6xx_remap_render_backend(rdev, tmp,
- rdev->config.cayman.max_backends_per_se *
- rdev->config.cayman.max_shader_engines,
- CAYMAN_MAX_BACKENDS, disabled_rb_mask);
+ if ((rdev->config.cayman.max_backends_per_se == 1) &&
+ (rdev->flags & RADEON_IS_IGP)) {
+ if ((disabled_rb_mask & 3) == 1) {
+ /* RB0 disabled, RB1 enabled */
+ tmp = 0x11111111;
+ } else {
+ /* RB1 disabled, RB0 enabled */
+ tmp = 0x00000000;
+ }
+ } else {
+ tmp = gb_addr_config & NUM_PIPES_MASK;
+ tmp = r6xx_remap_render_backend(rdev, tmp,
+ rdev->config.cayman.max_backends_per_se *
+ rdev->config.cayman.max_shader_engines,
+ CAYMAN_MAX_BACKENDS, disabled_rb_mask);
+ }
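For the IGP special case above the driver writes GB_BACKEND_MAP by hand instead of calling r6xx_remap_render_backend(). Reading the two magic values in light of the comments, each 4-bit field names the render backend a pipe uses, so 0x11111111 steers every pipe to RB1 and 0x00000000 to RB0. A small illustration of that decoding (the field layout is an assumption taken from the comments above, not from a register manual):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
    	uint32_t map = 0x11111111;	/* RB0 disabled, RB1 enabled */
    	int pipe;

    	for (pipe = 0; pipe < 8; pipe++)
    		printf("pipe %d -> RB%u\n", pipe, (map >> (4 * pipe)) & 0xf);
    	return 0;
    }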
WREG32(GB_BACKEND_MAP, tmp);
cgts_tcc_disable = 0xffff0000;
int cayman_suspend(struct radeon_device *rdev)
{
r600_audio_fini(rdev);
+ radeon_vm_manager_fini(rdev);
cayman_cp_enable(rdev, false);
cayman_dma_stop(rdev);
evergreen_irq_suspend(rdev);
goto out_cleanup;
}
- /* r100 doesn't have dma engine so skip the test */
- /* also, VRAM-to-VRAM test doesn't make much sense for DMA */
- /* skip it as well if domains are the same */
- if ((rdev->asic->copy.dma) && (sdomain != ddomain)) {
+ if (rdev->asic->copy.dma) {
time = radeon_benchmark_do_move(rdev, size, saddr, daddr,
RADEON_BENCHMARK_COPY_DMA, n);
if (time < 0)
sdomain, ddomain, "dma");
}
- time = radeon_benchmark_do_move(rdev, size, saddr, daddr,
- RADEON_BENCHMARK_COPY_BLIT, n);
- if (time < 0)
- goto out_cleanup;
- if (time > 0)
- radeon_benchmark_log_results(n, size, time,
- sdomain, ddomain, "blit");
+ if (rdev->asic->copy.blit) {
+ time = radeon_benchmark_do_move(rdev, size, saddr, daddr,
+ RADEON_BENCHMARK_COPY_BLIT, n);
+ if (time < 0)
+ goto out_cleanup;
+ if (time > 0)
+ radeon_benchmark_log_results(n, size, time,
+ sdomain, ddomain, "blit");
+ }
out_cleanup:
if (sobj) {
int si_suspend(struct radeon_device *rdev)
{
+ radeon_vm_manager_fini(rdev);
si_cp_enable(rdev, false);
cayman_dma_stop(rdev);
si_irq_suspend(rdev);
#define USB_VENDOR_ID_MONTEREY 0x0566
#define USB_DEVICE_ID_GENIUS_KB29E 0x3004
+#define USB_VENDOR_ID_MSI 0x1770
+#define USB_DEVICE_ID_MSI_GX680R_LED_PANEL 0xff00
+
#define USB_VENDOR_ID_NATIONAL_SEMICONDUCTOR 0x0400
#define USB_DEVICE_ID_N_S_HARMONY 0xc359
#define USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3001 0x3001
#define USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3008 0x3008
+#define USB_VENDOR_ID_REALTEK 0x0bda
+#define USB_DEVICE_ID_REALTEK_READER 0x0152
+
#define USB_VENDOR_ID_ROCCAT 0x1e7d
#define USB_DEVICE_ID_ROCCAT_ARVO 0x30d4
#define USB_DEVICE_ID_ROCCAT_ISKU 0x319c
{
struct mt_device *td = hid_get_drvdata(hid);
__s32 quirks = td->mtclass.quirks;
+ struct input_dev *input = field->hidinput->input;
if (hid->claimed & HID_CLAIMED_INPUT) {
switch (usage->hid) {
break;
default:
+ if (usage->type)
+ input_event(input, usage->type, usage->code,
+ value);
return;
}
if (usage->usage_index + 1 == field->report_count) {
/* we only take into account the last report. */
if (usage->hid == td->last_slot_field)
- mt_complete_slot(td, field->hidinput->input);
+ mt_complete_slot(td, input);
if (field->index == td->last_field_index
&& td->num_received >= td->num_expected)
{ USB_VENDOR_ID_FORMOSA, USB_DEVICE_ID_FORMOSA_IR_RECEIVER, HID_QUIRK_NO_INIT_REPORTS },
{ USB_VENDOR_ID_FREESCALE, USB_DEVICE_ID_FREESCALE_MX28, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_MGE, USB_DEVICE_ID_MGE_UPS, HID_QUIRK_NOGET },
+ { USB_VENDOR_ID_MSI, USB_DEVICE_ID_MSI_GX680R_LED_PANEL, HID_QUIRK_NO_INIT_REPORTS },
{ USB_VENDOR_ID_NOVATEK, USB_DEVICE_ID_NOVATEK_MOUSE, HID_QUIRK_NO_INIT_REPORTS },
{ USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN, HID_QUIRK_NO_INIT_REPORTS },
{ USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN1, HID_QUIRK_NO_INIT_REPORTS },
{ USB_VENDOR_ID_PRODIGE, USB_DEVICE_ID_PRODIGE_CORDLESS, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3001, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3008, HID_QUIRK_NOGET },
+ { USB_VENDOR_ID_REALTEK, USB_DEVICE_ID_REALTEK_READER, HID_QUIRK_NO_INIT_REPORTS },
{ USB_VENDOR_ID_SENNHEISER, USB_DEVICE_ID_SENNHEISER_BTD500USB, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_SIGMATEL, USB_DEVICE_ID_SIGMATEL_STMP3780, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_SUN, USB_DEVICE_ID_RARITAN_KVM_DONGLE, HID_QUIRK_NOGET },
which contains this code, we don't worry about the wasted space.
*/
-#include <linux/hwmon.h>
+#include <linux/kernel.h>
/* straight from the datasheet */
#define LM75_TEMP_MIN (-55000)
menuconfig I2C
tristate "I2C support"
- depends on !S390
select RT_MUTEXES
---help---
I2C (pronounce: I-squared-C) is a slow serial bus protocol used in
config I2C_SMBUS
tristate "SMBus-specific protocols" if !I2C_HELPER_AUTO
+ depends on GENERIC_HARDIRQS
help
Say Y here if you want support for SMBus extensions to the I2C
specification. At the moment, the only supported extension is
config I2C_ISCH
tristate "Intel SCH SMBus 1.0"
- depends on PCI
+ depends on PCI && GENERIC_HARDIRQS
select LPC_SCH
help
Say Y here if you want to use SMBus controller on the Intel SCH
config I2C_OCORES
tristate "OpenCores I2C Controller"
+ depends on GENERIC_HARDIRQS
help
If you say yes to this option, support will be included for the
OpenCores I2C controller. For details see
config I2C_PARPORT
tristate "Parallel port adapter"
- depends on PARPORT
+ depends on PARPORT && GENERIC_HARDIRQS
select I2C_ALGOBIT
select I2C_SMBUS
help
config I2C_PARPORT_LIGHT
tristate "Parallel port adapter (light)"
+ depends on GENERIC_HARDIRQS
select I2C_ALGOBIT
select I2C_SMBUS
help
/* PCI DIDs for the Intel SMBus Message Transport (SMT) Devices */
#define PCI_DEVICE_ID_INTEL_S1200_SMT0 0x0c59
#define PCI_DEVICE_ID_INTEL_S1200_SMT1 0x0c5a
+#define PCI_DEVICE_ID_INTEL_AVOTON_SMT 0x1f15
#define ISMT_DESC_ENTRIES 32 /* number of descriptor entries */
#define ISMT_MAX_RETRIES 3 /* number of SMBus retries to attempt */
static const DEFINE_PCI_DEVICE_TABLE(ismt_ids) = {
{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_S1200_SMT0) },
{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_S1200_SMT1) },
+ { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_AVOTON_SMT) },
{ 0, }
};
int clk_multiplier = I2C_CLK_MULTIPLIER_STD_FAST_MODE;
u32 clk_divisor;
- tegra_i2c_clock_enable(i2c_dev);
+ err = tegra_i2c_clock_enable(i2c_dev);
+ if (err < 0) {
+ dev_err(i2c_dev->dev, "Clock enable failed %d\n", err);
+ return err;
+ }
tegra_periph_reset_assert(i2c_dev->div_clk);
udelay(2);
if (i2c_dev->is_suspended)
return -EBUSY;
- tegra_i2c_clock_enable(i2c_dev);
+ ret = tegra_i2c_clock_enable(i2c_dev);
+ if (ret < 0) {
+ dev_err(i2c_dev->dev, "Clock enable failed %d\n", ret);
+ return ret;
+ }
+
for (i = 0; i < num; i++) {
enum msg_end_type end_type = MSG_END_STOP;
if (i < (num - 1)) {
*
* Copyright (c) 2010 Ericsson AB.
*
- * Author: Guenter Roeck <guenter.roeck@ericsson.com>
+ * Author: Guenter Roeck <linux@roeck-us.net>
*
* Derived from:
* pca954x.c
neigh = dst_neigh_lookup(ep->dst,
&ep->com.cm_id->remote_addr.sin_addr.s_addr);
+ if (!neigh) {
+ pr_err("%s - cannot alloc neigh.\n", __func__);
+ err = -ENOMEM;
+ goto fail4;
+ }
+
/* get a l2t entry */
if (neigh->dev->flags & IFF_LOOPBACK) {
PDBG("%s LOOPBACK\n", __func__);
dst = &rt->dst;
neigh = dst_neigh_lookup_skb(dst, skb);
+ if (!neigh) {
+ pr_err("%s - failed to allocate neigh!\n",
+ __func__);
+ goto free_dst;
+ }
+
if (neigh->dev->flags & IFF_LOOPBACK) {
pdev = ip_dev_find(&init_net, iph->daddr);
e = cxgb4_l2t_get(dev->rdev.lldi.l2t, neigh,
wq->rq.queue = dma_alloc_coherent(&(rdev->lldi.pdev->dev),
wq->rq.memsize, &(wq->rq.dma_addr),
GFP_KERNEL);
- if (!wq->rq.queue)
+ if (!wq->rq.queue) {
+ ret = -ENOMEM;
goto free_sq;
+ }
PDBG("%s sq base va 0x%p pa 0x%llx rq base va 0x%p pa 0x%llx\n",
__func__, wq->sq.queue,
(unsigned long long)virt_to_phys(wq->sq.queue),
goto bail;
}
- opcode = be32_to_cpu(ohdr->bth[0]) >> 24;
+ opcode = (be32_to_cpu(ohdr->bth[0]) >> 24) & 0x7f;
dev->opstats[opcode].n_bytes += tlen;
dev->opstats[opcode].n_packets++;
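The extra & 0x7f matters because the stats array is indexed by a 7-bit opcode; the BTH opcode byte can carry values above that range, and without the mask a malformed packet could index past the end of the table. The extraction in isolation:

    #include <stdio.h>
    #include <stdint.h>

    /* The opcode sits in the top byte of the first BTH word; masking
     * with 0x7f keeps the index inside a 128-entry stats table. */
    static unsigned bth_opcode(uint32_t bth0)
    {
    	return (bth0 >> 24) & 0x7f;
    }

    int main(void)
    {
    	printf("0x%02x\n", bth_opcode(0xC0000000u));	/* 0x40, not 0xc0 */
    	return 0;
    }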
config INFINIBAND_QIB
- tristate "QLogic PCIe HCA support"
+ tristate "Intel PCIe HCA support"
depends on 64BIT
---help---
- This is a low-level driver for QLogic PCIe QLE InfiniBand host
- channel adapters. This driver does not support the QLogic
+ This is a low-level driver for Intel PCIe QLE InfiniBand host
+ channel adapters. This driver does not support the Intel
HyperTransport card (model QHT7140).
/*
+ * Copyright (c) 2013 Intel Corporation. All rights reserved.
* Copyright (c) 2006, 2007, 2008, 2009 QLogic Corporation. All rights reserved.
* Copyright (c) 2003, 2004, 2005, 2006 PathScale, Inc. All rights reserved.
*
"Attempt pre-IBTA 1.2 DDR speed negotiation");
MODULE_LICENSE("Dual BSD/GPL");
-MODULE_AUTHOR("QLogic <support@qlogic.com>");
-MODULE_DESCRIPTION("QLogic IB driver");
+MODULE_AUTHOR("Intel <ibsupport@intel.com>");
+MODULE_DESCRIPTION("Intel IB driver");
MODULE_VERSION(QIB_DRIVER_VERSION);
/*
/*
+ * Copyright (c) 2013 Intel Corporation. All rights reserved.
* Copyright (c) 2006, 2007, 2008, 2009, 2010 QLogic Corporation.
* All rights reserved.
* Copyright (c) 2003, 2004, 2005, 2006 PathScale, Inc. All rights reserved.
/*
* This file contains all the chip-specific register information and
- * access functions for the QLogic QLogic_IB PCI-Express chip.
+ * access functions for the Intel Intel_IB PCI-Express chip.
*
*/
/*
- * Copyright (c) 2012 Intel Corporation. All rights reserved.
+ * Copyright (c) 2012, 2013 Intel Corporation. All rights reserved.
* Copyright (c) 2006 - 2012 QLogic Corporation. All rights reserved.
* Copyright (c) 2003, 2004, 2005, 2006 PathScale, Inc. All rights reserved.
*
static void qib_remove_one(struct pci_dev *);
static int qib_init_one(struct pci_dev *, const struct pci_device_id *);
-#define DRIVER_LOAD_MSG "QLogic " QIB_DRV_NAME " loaded: "
+#define DRIVER_LOAD_MSG "Intel " QIB_DRV_NAME " loaded: "
#define PFX QIB_DRV_NAME ": "
static DEFINE_PCI_DEVICE_TABLE(qib_pci_tbl) = {
dd = qib_init_iba6120_funcs(pdev, ent);
#else
qib_early_err(&pdev->dev,
- "QLogic PCIE device 0x%x cannot work if CONFIG_PCI_MSI is not enabled\n",
+ "Intel PCIE device 0x%x cannot work if CONFIG_PCI_MSI is not enabled\n",
ent->device);
dd = ERR_PTR(-ENODEV);
#endif
default:
qib_early_err(&pdev->dev,
- "Failing on unknown QLogic deviceid 0x%x\n",
+ "Failing on unknown Intel deviceid 0x%x\n",
ent->device);
ret = -ENODEV;
}
/*
- * Copyright (c) 2012 Intel Corporation. All rights reserved.
+ * Copyright (c) 2013 Intel Corporation. All rights reserved.
* Copyright (c) 2006 - 2012 QLogic Corporation. All rights reserved.
* Copyright (c) 2003, 2004, 2005, 2006 PathScale, Inc. All rights reserved.
*
#include "qib.h"
#include "qib_7220.h"
-#define SD7220_FW_NAME "qlogic/sd7220.fw"
+#define SD7220_FW_NAME "intel/sd7220.fw"
MODULE_FIRMWARE(SD7220_FW_NAME);
/*
/*
- * Copyright (c) 2012 Intel Corporation. All rights reserved.
+ * Copyright (c) 2012, 2013 Intel Corporation. All rights reserved.
* Copyright (c) 2006 - 2012 QLogic Corporation. All rights reserved.
* Copyright (c) 2005, 2006 PathScale, Inc. All rights reserved.
*
ibdev->dma_ops = &qib_dma_mapping_ops;
snprintf(ibdev->node_desc, sizeof(ibdev->node_desc),
- "QLogic Infiniband HCA %s", init_utsname()->nodename);
+ "Intel Infiniband HCA %s", init_utsname()->nodename);
ret = ib_register_device(ibdev, qib_create_port_files);
if (ret)
if (++priv->tx_outstanding == ipoib_sendq_size) {
ipoib_dbg(priv, "TX ring 0x%x full, stopping kernel net queue\n",
tx->qp->qp_num);
- if (ib_req_notify_cq(priv->send_cq, IB_CQ_NEXT_COMP))
- ipoib_warn(priv, "request notify on send CQ failed\n");
netif_stop_queue(dev);
+ rc = ib_req_notify_cq(priv->send_cq,
+ IB_CQ_NEXT_COMP | IB_CQ_REPORT_MISSED_EVENTS);
+ if (rc < 0)
+ ipoib_warn(priv, "request notify on send CQ failed\n");
+ else if (rc)
+ ipoib_send_comp_handler(priv->send_cq, dev);
}
}
}
#define GET_TIME(x) rdtscl(x)
#define DELTA(x,y) ((y)-(x))
#define TIME_NAME "TSC"
-#elif defined(__alpha__)
+#elif defined(__alpha__) || defined(CONFIG_MN10300) || defined(CONFIG_ARM) || defined(CONFIG_TILE)
#define GET_TIME(x) do { x = get_cycles(); } while (0)
#define DELTA(x,y) ((y)-(x))
-#define TIME_NAME "PCC"
-#elif defined(CONFIG_MN10300) || defined(CONFIG_TILE)
-#define GET_TIME(x) do { x = get_cycles(); } while (0)
-#define DELTA(x, y) ((x) - (y))
-#define TIME_NAME "TSC"
+#define TIME_NAME "get_cycles"
#else
#define FAKE_TIME
static unsigned long analog_faketime = 0;
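Folding MN10300 and TILE into the get_cycles() branch also quietly fixes their DELTA(), which had the operands reversed. The (y) - (x) ordering is the wrap-safe one: unsigned subtraction is modulo 2^32, so an interval measured across a counter wrap still comes out right, as this small demonstration shows:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
    	uint32_t before = 0xfffffff0u;	/* just before the counter wraps */
    	uint32_t after  = 0x00000010u;	/* just after */

    	printf("%u cycles\n", after - before);	/* 32, not a huge value */
    	return 0;
    }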
# OMAP IOMMU support
config OMAP_IOMMU
bool "OMAP IOMMU Support"
- depends on ARCH_OMAP
+ depends on ARCH_OMAP2PLUS
select IOMMU_API
config OMAP_IOVMM
/* allocate a protection domain if a device is added */
dma_domain = find_protection_domain(devid);
- if (dma_domain)
- goto out;
- dma_domain = dma_ops_domain_alloc();
- if (!dma_domain)
- goto out;
- dma_domain->target_dev = devid;
-
- spin_lock_irqsave(&iommu_pd_list_lock, flags);
- list_add_tail(&dma_domain->list, &iommu_pd_list);
- spin_unlock_irqrestore(&iommu_pd_list_lock, flags);
-
- dev_data = get_dev_data(dev);
+ if (!dma_domain) {
+ dma_domain = dma_ops_domain_alloc();
+ if (!dma_domain)
+ goto out;
+ dma_domain->target_dev = devid;
+
+ spin_lock_irqsave(&iommu_pd_list_lock, flags);
+ list_add_tail(&dma_domain->list, &iommu_pd_list);
+ spin_unlock_irqrestore(&iommu_pd_list_lock, flags);
+ }
dev->archdata.dma_ops = &amd_iommu_dma_ops;
* BIOS should disable L2B micellaneous clock gating by setting
* L2_L2B_CK_GATE_CONTROL[CKGateL2BMiscDisable](D0F2xF4_x90[2]) = 1b
*/
-static void __init amd_iommu_erratum_746_workaround(struct amd_iommu *iommu)
+static void amd_iommu_erratum_746_workaround(struct amd_iommu *iommu)
{
u32 value;
#include <linux/cpumask.h>
#include <linux/kernel.h>
#include <linux/string.h>
-#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/msi.h>
#include <linux/irq.h>
config HISAX_NETJET
bool "NETjet card"
- depends on PCI && (BROKEN || !(SPARC || PPC || PARISC || M68K || (MIPS && !CPU_LITTLE_ENDIAN) || FRV || (XTENSA && !CPU_LITTLE_ENDIAN)))
+ depends on PCI && (BROKEN || !(PPC || PARISC || M68K || (MIPS && !CPU_LITTLE_ENDIAN) || FRV || (XTENSA && !CPU_LITTLE_ENDIAN)))
+ depends on VIRT_TO_BUS
help
This enables HiSax support for the NetJet from Traverse
Technologies.
config HISAX_NETJET_U
bool "NETspider U card"
- depends on PCI && (BROKEN || !(SPARC || PPC || PARISC || M68K || (MIPS && !CPU_LITTLE_ENDIAN) || FRV || (XTENSA && !CPU_LITTLE_ENDIAN)))
+ depends on PCI && (BROKEN || !(PPC || PARISC || M68K || (MIPS && !CPU_LITTLE_ENDIAN) || FRV || (XTENSA && !CPU_LITTLE_ENDIAN)))
+ depends on VIRT_TO_BUS
help
This enables HiSax support for the Netspider U interface ISDN card
from Traverse Technologies.
{
struct blk_plug plug;
+ BUG_ON(dm_bufio_in_request());
+
blk_start_plug(&plug);
dm_bufio_lock(c);
__le32 read_misses;
__le32 write_hits;
__le32 write_misses;
+
+ __le32 policy_version[CACHE_POLICY_VERSION_SIZE];
} __packed;
struct dm_cache_metadata {
bool clean_when_opened:1;
char policy_name[CACHE_POLICY_NAME_SIZE];
+ unsigned policy_version[CACHE_POLICY_VERSION_SIZE];
size_t policy_hint_size;
struct dm_cache_statistics stats;
};
memset(disk_super->uuid, 0, sizeof(disk_super->uuid));
disk_super->magic = cpu_to_le64(CACHE_SUPERBLOCK_MAGIC);
disk_super->version = cpu_to_le32(CACHE_VERSION);
- memset(disk_super->policy_name, 0, CACHE_POLICY_NAME_SIZE);
+ memset(disk_super->policy_name, 0, sizeof(disk_super->policy_name));
+ memset(disk_super->policy_version, 0, sizeof(disk_super->policy_version));
disk_super->policy_hint_size = 0;
r = dm_sm_copy_root(cmd->metadata_sm, &disk_super->metadata_space_map_root,
disk_super->metadata_block_size = cpu_to_le32(DM_CACHE_METADATA_BLOCK_SIZE >> SECTOR_SHIFT);
disk_super->data_block_size = cpu_to_le32(cmd->data_block_size);
disk_super->cache_blocks = cpu_to_le32(0);
- memset(disk_super->policy_name, 0, sizeof(disk_super->policy_name));
disk_super->read_hits = cpu_to_le32(0);
disk_super->read_misses = cpu_to_le32(0);
cmd->data_block_size = le32_to_cpu(disk_super->data_block_size);
cmd->cache_blocks = to_cblock(le32_to_cpu(disk_super->cache_blocks));
strncpy(cmd->policy_name, disk_super->policy_name, sizeof(cmd->policy_name));
+ cmd->policy_version[0] = le32_to_cpu(disk_super->policy_version[0]);
+ cmd->policy_version[1] = le32_to_cpu(disk_super->policy_version[1]);
+ cmd->policy_version[2] = le32_to_cpu(disk_super->policy_version[2]);
cmd->policy_hint_size = le32_to_cpu(disk_super->policy_hint_size);
cmd->stats.read_hits = le32_to_cpu(disk_super->read_hits);
disk_super->discard_nr_blocks = cpu_to_le64(from_dblock(cmd->discard_nr_blocks));
disk_super->cache_blocks = cpu_to_le32(from_cblock(cmd->cache_blocks));
strncpy(disk_super->policy_name, cmd->policy_name, sizeof(disk_super->policy_name));
+ disk_super->policy_version[0] = cpu_to_le32(cmd->policy_version[0]);
+ disk_super->policy_version[1] = cpu_to_le32(cmd->policy_version[1]);
+ disk_super->policy_version[2] = cpu_to_le32(cmd->policy_version[2]);
disk_super->read_hits = cpu_to_le32(cmd->stats.read_hits);
disk_super->read_misses = cpu_to_le32(cmd->stats.read_misses);
bool hints_valid;
};
+static bool policy_unchanged(struct dm_cache_metadata *cmd,
+ struct dm_cache_policy *policy)
+{
+ const char *policy_name = dm_cache_policy_get_name(policy);
+ const unsigned *policy_version = dm_cache_policy_get_version(policy);
+ size_t policy_hint_size = dm_cache_policy_get_hint_size(policy);
+
+ /*
+ * Ensure policy names match.
+ */
+ if (strncmp(cmd->policy_name, policy_name, sizeof(cmd->policy_name)))
+ return false;
+
+ /*
+ * Ensure policy major versions match.
+ */
+ if (cmd->policy_version[0] != policy_version[0])
+ return false;
+
+ /*
+ * Ensure policy hint sizes match.
+ */
+ if (cmd->policy_hint_size != policy_hint_size)
+ return false;
+
+ return true;
+}
+
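policy_unchanged() deliberately compares only the major version: on-disk hints survive minor and patch bumps of a policy module, and are thrown away only when the major version (or the name or hint size) changes. A toy model of that rule:

    #include <stdbool.h>
    #include <stdio.h>

    static bool hints_compatible(const unsigned on_disk[3], const unsigned now[3])
    {
    	return on_disk[0] == now[0];	/* major version only */
    }

    int main(void)
    {
    	unsigned disk[3] = {1, 0, 0};
    	unsigned mod1[3] = {1, 2, 0};	/* minor bump: hints kept */
    	unsigned mod2[3] = {2, 0, 0};	/* major bump: hints dropped */

    	printf("%d %d\n", hints_compatible(disk, mod1),
    	       hints_compatible(disk, mod2));	/* prints "1 0" */
    	return 0;
    }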
static bool hints_array_initialized(struct dm_cache_metadata *cmd)
{
return cmd->hint_root && cmd->policy_hint_size;
}
static bool hints_array_available(struct dm_cache_metadata *cmd,
- const char *policy_name)
+ struct dm_cache_policy *policy)
{
- bool policy_names_match = !strncmp(cmd->policy_name, policy_name,
- sizeof(cmd->policy_name));
-
- return cmd->clean_when_opened && policy_names_match &&
+ return cmd->clean_when_opened && policy_unchanged(cmd, policy) &&
hints_array_initialized(cmd);
}
return r;
}
-static int __load_mappings(struct dm_cache_metadata *cmd, const char *policy_name,
+static int __load_mappings(struct dm_cache_metadata *cmd,
+ struct dm_cache_policy *policy,
load_mapping_fn fn, void *context)
{
struct thunk thunk;
thunk.cmd = cmd;
thunk.respect_dirty_flags = cmd->clean_when_opened;
- thunk.hints_valid = hints_array_available(cmd, policy_name);
+ thunk.hints_valid = hints_array_available(cmd, policy);
return dm_array_walk(&cmd->info, cmd->root, __load_mapping, &thunk);
}
-int dm_cache_load_mappings(struct dm_cache_metadata *cmd, const char *policy_name,
+int dm_cache_load_mappings(struct dm_cache_metadata *cmd,
+ struct dm_cache_policy *policy,
load_mapping_fn fn, void *context)
{
int r;
down_read(&cmd->root_lock);
- r = __load_mappings(cmd, policy_name, fn, context);
+ r = __load_mappings(cmd, policy, fn, context);
up_read(&cmd->root_lock);
return r;
/* nothing to be done */
return 0;
- value = pack_value(oblock, flags | (dirty ? M_DIRTY : 0));
+ value = pack_value(oblock, (flags & ~M_DIRTY) | (dirty ? M_DIRTY : 0));
__dm_bless_for_disk(&value);
r = dm_array_set_value(&cmd->info, cmd->root, from_cblock(cblock),
__le32 value;
size_t hint_size;
const char *policy_name = dm_cache_policy_get_name(policy);
+ const unsigned *policy_version = dm_cache_policy_get_version(policy);
if (!policy_name[0] ||
(strlen(policy_name) > sizeof(cmd->policy_name) - 1))
return -EINVAL;
- if (strcmp(cmd->policy_name, policy_name)) {
+ if (!policy_unchanged(cmd, policy)) {
strncpy(cmd->policy_name, policy_name, sizeof(cmd->policy_name));
+ memcpy(cmd->policy_version, policy_version, sizeof(cmd->policy_version));
hint_size = dm_cache_policy_get_hint_size(policy);
if (!hint_size)
dm_cblock_t cblock, bool dirty,
uint32_t hint, bool hint_valid);
int dm_cache_load_mappings(struct dm_cache_metadata *cmd,
- const char *policy_name,
+ struct dm_cache_policy *policy,
load_mapping_fn fn,
void *context);
/*----------------------------------------------------------------*/
#define DM_MSG_PREFIX "cache cleaner"
-#define CLEANER_VERSION "1.0.0"
/* Cache entry struct. */
struct wb_cache_entry {
static struct dm_cache_policy_type wb_policy_type = {
.name = "cleaner",
+ .version = {1, 0, 0},
.hint_size = 0,
.owner = THIS_MODULE,
.create = wb_create
if (r < 0)
DMERR("register failed %d", r);
else
- DMINFO("version " CLEANER_VERSION " loaded");
+ DMINFO("version %u.%u.%u loaded",
+ wb_policy_type.version[0],
+ wb_policy_type.version[1],
+ wb_policy_type.version[2]);
return r;
}
*/
const char *dm_cache_policy_get_name(struct dm_cache_policy *p);
+const unsigned *dm_cache_policy_get_version(struct dm_cache_policy *p);
+
size_t dm_cache_policy_get_hint_size(struct dm_cache_policy *p);
/*----------------------------------------------------------------*/
#include <linux/vmalloc.h>
#define DM_MSG_PREFIX "cache-policy-mq"
-#define MQ_VERSION "1.0.0"
static struct kmem_cache *mq_entry_cache;
static struct dm_cache_policy_type mq_policy_type = {
.name = "mq",
+ .version = {1, 0, 0},
.hint_size = 4,
.owner = THIS_MODULE,
.create = mq_create
static struct dm_cache_policy_type default_policy_type = {
.name = "default",
+ .version = {1, 0, 0},
.hint_size = 4,
.owner = THIS_MODULE,
.create = mq_create
r = dm_cache_policy_register(&default_policy_type);
if (!r) {
- DMINFO("version " MQ_VERSION " loaded");
+ DMINFO("version %u.%u.%u loaded",
+ mq_policy_type.version[0],
+ mq_policy_type.version[1],
+ mq_policy_type.version[2]);
return 0;
}
}
EXPORT_SYMBOL_GPL(dm_cache_policy_get_name);
+const unsigned *dm_cache_policy_get_version(struct dm_cache_policy *p)
+{
+ struct dm_cache_policy_type *t = p->private;
+
+ return t->version;
+}
+EXPORT_SYMBOL_GPL(dm_cache_policy_get_version);
+
size_t dm_cache_policy_get_hint_size(struct dm_cache_policy *p)
{
struct dm_cache_policy_type *t = p->private;
* We maintain a little register of the different policy types.
*/
#define CACHE_POLICY_NAME_SIZE 16
+#define CACHE_POLICY_VERSION_SIZE 3
struct dm_cache_policy_type {
/* For use by the register code only. */
* what gets passed on the target line to select your policy.
*/
char name[CACHE_POLICY_NAME_SIZE];
+ unsigned version[CACHE_POLICY_VERSION_SIZE];
/*
* Policies may store a hint for each cache block.
spinlock_t lock;
struct bio_list deferred_bios;
struct bio_list deferred_flush_bios;
+ struct bio_list deferred_writethrough_bios;
struct list_head quiesced_migrations;
struct list_head completed_migrations;
struct list_head need_commit_migrations;
/*
* origin_blocks entries, discarded if set.
*/
- sector_t discard_block_size; /* a power of 2 times sectors per block */
+ uint32_t discard_block_size; /* a power of 2 times sectors per block */
dm_dblock_t discard_nr_blocks;
unsigned long *discard_bitset;
bool tick:1;
unsigned req_nr:2;
struct dm_deferred_entry *all_io_entry;
+
+ /* writethrough fields */
+ struct cache *cache;
+ dm_cblock_t cblock;
+ bio_end_io_t *saved_bi_end_io;
};
struct dm_cache_migration {
return cache->sectors_per_block_shift >= 0;
}
+static dm_block_t block_div(dm_block_t b, uint32_t n)
+{
+ do_div(b, n);
+
+ return b;
+}
+
static dm_dblock_t oblock_to_dblock(struct cache *cache, dm_oblock_t oblock)
{
- sector_t discard_blocks = cache->discard_block_size;
+ uint32_t discard_blocks = cache->discard_block_size;
dm_block_t b = from_oblock(oblock);
if (!block_size_is_power_of_two(cache))
- (void) sector_div(discard_blocks, cache->sectors_per_block);
+ discard_blocks = discard_blocks / cache->sectors_per_block;
else
discard_blocks >>= cache->sectors_per_block_shift;
- (void) sector_div(b, discard_blocks);
+ b = block_div(b, discard_blocks);
return to_dblock(b);
}
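block_div() exists because the kernel's do_div(b, n) divides a 64-bit value by a 32-bit one in place and hands back the remainder, not the quotient, so a quotient-returning wrapper reads better at call sites like the one above. A userspace model of what it computes:

    #include <stdio.h>
    #include <stdint.h>

    static uint64_t block_div(uint64_t b, uint32_t n)
    {
    	return b / n;	/* stands in for do_div(b, n) in the kernel */
    }

    int main(void)
    {
    	/* origin block 1000, 64 sectors per discard block -> block 15 */
    	printf("%llu\n", (unsigned long long)block_div(1000, 64));
    	return 0;
    }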
spin_unlock_irqrestore(&cache->lock, flags);
}
+static void defer_writethrough_bio(struct cache *cache, struct bio *bio)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&cache->lock, flags);
+ bio_list_add(&cache->deferred_writethrough_bios, bio);
+ spin_unlock_irqrestore(&cache->lock, flags);
+
+ wake_worker(cache);
+}
+
+static void writethrough_endio(struct bio *bio, int err)
+{
+ struct per_bio_data *pb = get_per_bio_data(bio);
+ bio->bi_end_io = pb->saved_bi_end_io;
+
+ if (err) {
+ bio_endio(bio, err);
+ return;
+ }
+
+ remap_to_cache(pb->cache, bio, pb->cblock);
+
+ /*
+ * We can't issue this bio directly, since we're in interrupt
+ * context. So it gets put on a bio list for processing by the
+ * worker thread.
+ */
+ defer_writethrough_bio(pb->cache, bio);
+}
+
+/*
+ * When running in writethrough mode we need to send writes to clean blocks
+ * to both the cache and origin devices. In future we'd like to clone the
+ * bio and send them in parallel, but for now we're doing them in
+ * series as this is easier.
+ */
+static void remap_to_origin_then_cache(struct cache *cache, struct bio *bio,
+ dm_oblock_t oblock, dm_cblock_t cblock)
+{
+ struct per_bio_data *pb = get_per_bio_data(bio);
+
+ pb->cache = cache;
+ pb->cblock = cblock;
+ pb->saved_bi_end_io = bio->bi_end_io;
+ bio->bi_end_io = writethrough_endio;
+
+ remap_to_origin_clear_discard(pb->cache, bio, oblock);
+}
+
/*----------------------------------------------------------------
* Migration processing
*
dm_block_t end_block = bio->bi_sector + bio_sectors(bio);
dm_block_t b;
- (void) sector_div(end_block, cache->discard_block_size);
+ end_block = block_div(end_block, cache->discard_block_size);
for (b = start_block; b < end_block; b++)
set_discard(cache, to_dblock(b));
inc_hit_counter(cache, bio);
pb->all_io_entry = dm_deferred_entry_inc(cache->all_io_ds);
- if (is_writethrough_io(cache, bio, lookup_result.cblock)) {
- /*
- * No need to mark anything dirty in write through mode.
- */
- pb->req_nr == 0 ?
- remap_to_cache(cache, bio, lookup_result.cblock) :
- remap_to_origin_clear_discard(cache, bio, block);
- } else
+ if (is_writethrough_io(cache, bio, lookup_result.cblock))
+ remap_to_origin_then_cache(cache, bio, block, lookup_result.cblock);
+ else
remap_to_cache_dirty(cache, bio, block, lookup_result.cblock);
issue(cache, bio);
case POLICY_MISS:
inc_miss_counter(cache, bio);
pb->all_io_entry = dm_deferred_entry_inc(cache->all_io_ds);
-
- if (pb->req_nr != 0) {
- /*
- * This is a duplicate writethrough io that is no
- * longer needed because the block has been demoted.
- */
- bio_endio(bio, 0);
- } else {
- remap_to_origin_clear_discard(cache, bio, block);
- issue(cache, bio);
- }
+ remap_to_origin_clear_discard(cache, bio, block);
+ issue(cache, bio);
break;
case POLICY_NEW:
submit_bios ? generic_make_request(bio) : bio_io_error(bio);
}
+static void process_deferred_writethrough_bios(struct cache *cache)
+{
+ unsigned long flags;
+ struct bio_list bios;
+ struct bio *bio;
+
+ bio_list_init(&bios);
+
+ spin_lock_irqsave(&cache->lock, flags);
+ bio_list_merge(&bios, &cache->deferred_writethrough_bios);
+ bio_list_init(&cache->deferred_writethrough_bios);
+ spin_unlock_irqrestore(&cache->lock, flags);
+
+ while ((bio = bio_list_pop(&bios)))
+ generic_make_request(bio);
+}
+
static void writeback_some_dirty_blocks(struct cache *cache)
{
int r = 0;
else
return !bio_list_empty(&cache->deferred_bios) ||
!bio_list_empty(&cache->deferred_flush_bios) ||
+ !bio_list_empty(&cache->deferred_writethrough_bios) ||
!list_empty(&cache->quiesced_migrations) ||
!list_empty(&cache->completed_migrations) ||
!list_empty(&cache->need_commit_migrations);
writeback_some_dirty_blocks(cache);
+ process_deferred_writethrough_bios(cache);
+
if (commit_if_needed(cache)) {
process_deferred_flush_bios(cache, false);
}
r = set_config_values(cache->policy, ca->policy_argc, ca->policy_argv);
- if (r)
+ if (r) {
+ *error = "Error setting cache policy's config values";
dm_cache_policy_destroy(cache->policy);
+ cache->policy = NULL;
+ }
return r;
}
#define DEFAULT_MIGRATION_THRESHOLD (2048 * 100)
-static unsigned cache_num_write_bios(struct dm_target *ti, struct bio *bio);
-
static int cache_create(struct cache_args *ca, struct cache **result)
{
int r = 0;
memcpy(&cache->features, &ca->features, sizeof(cache->features));
- if (cache->features.write_through)
- ti->num_write_bios = cache_num_write_bios;
-
cache->callbacks.congested_fn = cache_is_congested;
dm_table_add_target_callbacks(ti->table, &cache->callbacks);
/* FIXME: factor out this whole section */
origin_blocks = cache->origin_sectors = ca->origin_sectors;
- (void) sector_div(origin_blocks, ca->block_size);
+ origin_blocks = block_div(origin_blocks, ca->block_size);
cache->origin_blocks = to_oblock(origin_blocks);
cache->sectors_per_block = ca->block_size;
dm_block_t cache_size = ca->cache_sectors;
cache->sectors_per_block_shift = -1;
- (void) sector_div(cache_size, ca->block_size);
+ cache_size = block_div(cache_size, ca->block_size);
cache->cache_size = to_cblock(cache_size);
} else {
cache->sectors_per_block_shift = __ffs(ca->block_size);
spin_lock_init(&cache->lock);
bio_list_init(&cache->deferred_bios);
bio_list_init(&cache->deferred_flush_bios);
+ bio_list_init(&cache->deferred_writethrough_bios);
INIT_LIST_HEAD(&cache->quiesced_migrations);
INIT_LIST_HEAD(&cache->completed_migrations);
INIT_LIST_HEAD(&cache->need_commit_migrations);
goto out;
r = cache_create(ca, &cache);
+ if (r)
+ goto out;
r = copy_ctr_args(cache, argc - 3, (const char **)argv + 3);
if (r) {
return r;
}
-static unsigned cache_num_write_bios(struct dm_target *ti, struct bio *bio)
-{
- int r;
- struct cache *cache = ti->private;
- dm_oblock_t block = get_bio_block(cache, bio);
- dm_cblock_t cblock;
-
- r = policy_lookup(cache->policy, block, &cblock);
- if (r < 0)
- return 2; /* assume the worst */
-
- return (!r && !is_dirty(cache, cblock)) ? 2 : 1;
-}
-
static int cache_map(struct dm_target *ti, struct bio *bio)
{
struct cache *cache = ti->private;
inc_hit_counter(cache, bio);
pb->all_io_entry = dm_deferred_entry_inc(cache->all_io_ds);
- if (is_writethrough_io(cache, bio, lookup_result.cblock)) {
- /*
- * No need to mark anything dirty in write through mode.
- */
- pb->req_nr == 0 ?
- remap_to_cache(cache, bio, lookup_result.cblock) :
- remap_to_origin_clear_discard(cache, bio, block);
- cell_defer(cache, cell, false);
- } else {
+ if (is_writethrough_io(cache, bio, lookup_result.cblock))
+ remap_to_origin_then_cache(cache, bio, block, lookup_result.cblock);
+ else
remap_to_cache_dirty(cache, bio, block, lookup_result.cblock);
- cell_defer(cache, cell, false);
- }
+
+ cell_defer(cache, cell, false);
break;
case POLICY_MISS:
}
if (!cache->loaded_mappings) {
- r = dm_cache_load_mappings(cache->cmd,
- dm_cache_policy_get_name(cache->policy),
+ r = dm_cache_load_mappings(cache->cmd, cache->policy,
load_mapping, cache);
if (r) {
DMERR("could not load cache mappings");
static struct target_type cache_target = {
.name = "cache",
- .version = {1, 0, 0},
+ .version = {1, 1, 0},
.module = THIS_MODULE,
.ctr = cache_ctr,
.dtr = cache_dtr,
return q && blk_queue_discard(q);
}
+static bool is_factor(sector_t block_size, uint32_t n)
+{
+ return !sector_div(block_size, n);
+}
+
/*
* If discard_passdown was enabled verify that the data device
* supports discards. Disable discard_passdown if not.
else if (data_limits->discard_granularity > block_size)
reason = "discard granularity larger than a block";
- else if (block_size & (data_limits->discard_granularity - 1))
+ else if (!is_factor(block_size, data_limits->discard_granularity))
reason = "discard granularity not a factor of block size";
if (reason) {
.name = "thin-pool",
.features = DM_TARGET_SINGLETON | DM_TARGET_ALWAYS_WRITEABLE |
DM_TARGET_IMMUTABLE,
- .version = {1, 6, 1},
+ .version = {1, 7, 0},
.module = THIS_MODULE,
.ctr = pool_ctr,
.dtr = pool_dtr,
static struct target_type thin_target = {
.name = "thin",
- .version = {1, 7, 1},
+ .version = {1, 8, 0},
.module = THIS_MODULE,
.ctr = thin_ctr,
.dtr = thin_dtr,
*/
};
+struct dm_verity_prefetch_work {
+ struct work_struct work;
+ struct dm_verity *v;
+ sector_t block;
+ unsigned n_blocks;
+};
+
static struct shash_desc *io_hash_desc(struct dm_verity *v, struct dm_verity_io *io)
{
return (struct shash_desc *)(io + 1);
* The root buffer is not prefetched, it is assumed that it will be cached
* all the time.
*/
-static void verity_prefetch_io(struct dm_verity *v, struct dm_verity_io *io)
+static void verity_prefetch_io(struct work_struct *work)
{
+ struct dm_verity_prefetch_work *pw =
+ container_of(work, struct dm_verity_prefetch_work, work);
+ struct dm_verity *v = pw->v;
int i;
for (i = v->levels - 2; i >= 0; i--) {
sector_t hash_block_start;
sector_t hash_block_end;
- verity_hash_at_level(v, io->block, i, &hash_block_start, NULL);
- verity_hash_at_level(v, io->block + io->n_blocks - 1, i, &hash_block_end, NULL);
+ verity_hash_at_level(v, pw->block, i, &hash_block_start, NULL);
+ verity_hash_at_level(v, pw->block + pw->n_blocks - 1, i, &hash_block_end, NULL);
if (!i) {
unsigned cluster = ACCESS_ONCE(dm_verity_prefetch_cluster);
dm_bufio_prefetch(v->bufio, hash_block_start,
hash_block_end - hash_block_start + 1);
}
+
+ kfree(pw);
+}
+
+static void verity_submit_prefetch(struct dm_verity *v, struct dm_verity_io *io)
+{
+ struct dm_verity_prefetch_work *pw;
+
+ pw = kmalloc(sizeof(struct dm_verity_prefetch_work),
+ GFP_NOIO | __GFP_NORETRY | __GFP_NOMEMALLOC | __GFP_NOWARN);
+
+ if (!pw)
+ return;
+
+ INIT_WORK(&pw->work, verity_prefetch_io);
+ pw->v = v;
+ pw->block = io->block;
+ pw->n_blocks = io->n_blocks;
+ queue_work(v->verify_wq, &pw->work);
}
/*
memcpy(io->io_vec, bio_iovec(bio),
io->io_vec_size * sizeof(struct bio_vec));
- verity_prefetch_io(v, io);
+ verity_submit_prefetch(v, io);
generic_make_request(bio);
static struct target_type verity_target = {
.name = "verity",
- .version = {1, 1, 1},
+ .version = {1, 2, 0},
.module = THIS_MODULE,
.ctr = verity_ctr,
.dtr = verity_dtr,
removed++;
}
}
- if (removed)
- sysfs_notify(&mddev->kobj, NULL,
- "degraded");
-
+ if (removed && mddev->kobj.sd)
+ sysfs_notify(&mddev->kobj, NULL, "degraded");
rdev_for_each(rdev, mddev) {
if (rdev->raid_disk >= 0 &&
static inline int sysfs_link_rdev(struct mddev *mddev, struct md_rdev *rdev)
{
char nm[20];
- if (!test_bit(Replacement, &rdev->flags)) {
+ if (!test_bit(Replacement, &rdev->flags) && mddev->kobj.sd) {
sprintf(nm, "rd%d", rdev->raid_disk);
return sysfs_create_link(&mddev->kobj, &rdev->kobj, nm);
} else
static inline void sysfs_unlink_rdev(struct mddev *mddev, struct md_rdev *rdev)
{
char nm[20];
- if (!test_bit(Replacement, &rdev->flags)) {
+ if (!test_bit(Replacement, &rdev->flags) && mddev->kobj.sd) {
sprintf(nm, "rd%d", rdev->raid_disk);
sysfs_remove_link(&mddev->kobj, nm);
}
struct btree_node *n;
};
-static struct dm_btree_value_type le64_type = {
- .context = NULL,
- .size = sizeof(__le64),
- .inc = NULL,
- .dec = NULL,
- .equal = NULL
-};
-
-static int init_child(struct dm_btree_info *info, struct btree_node *parent,
+static int init_child(struct dm_btree_info *info, struct dm_btree_value_type *vt,
+ struct btree_node *parent,
unsigned index, struct child *result)
{
int r, inc;
result->n = dm_block_data(result->block);
if (inc)
- inc_children(info->tm, result->n, &le64_type);
+ inc_children(info->tm, result->n, vt);
*((__le64 *) value_ptr(parent, index)) =
cpu_to_le64(dm_block_location(result->block));
}
static int rebalance2(struct shadow_spine *s, struct dm_btree_info *info,
- unsigned left_index)
+ struct dm_btree_value_type *vt, unsigned left_index)
{
int r;
struct btree_node *parent;
parent = dm_block_data(shadow_current(s));
- r = init_child(info, parent, left_index, &left);
+ r = init_child(info, vt, parent, left_index, &left);
if (r)
return r;
- r = init_child(info, parent, left_index + 1, &right);
+ r = init_child(info, vt, parent, left_index + 1, &right);
if (r) {
exit_child(info, &left);
return r;
}
static int rebalance3(struct shadow_spine *s, struct dm_btree_info *info,
- unsigned left_index)
+ struct dm_btree_value_type *vt, unsigned left_index)
{
int r;
struct btree_node *parent = dm_block_data(shadow_current(s));
/*
* FIXME: fill out an array?
*/
- r = init_child(info, parent, left_index, &left);
+ r = init_child(info, vt, parent, left_index, &left);
if (r)
return r;
- r = init_child(info, parent, left_index + 1, &center);
+ r = init_child(info, vt, parent, left_index + 1, &center);
if (r) {
exit_child(info, &left);
return r;
}
- r = init_child(info, parent, left_index + 2, &right);
+ r = init_child(info, vt, parent, left_index + 2, &right);
if (r) {
exit_child(info, &left);
exit_child(info, &center);
}
static int rebalance_children(struct shadow_spine *s,
- struct dm_btree_info *info, uint64_t key)
+ struct dm_btree_info *info,
+ struct dm_btree_value_type *vt, uint64_t key)
{
int i, r, has_left_sibling, has_right_sibling;
uint32_t child_entries;
has_right_sibling = i < (le32_to_cpu(n->header.nr_entries) - 1);
if (!has_left_sibling)
- r = rebalance2(s, info, i);
+ r = rebalance2(s, info, vt, i);
else if (!has_right_sibling)
- r = rebalance2(s, info, i - 1);
+ r = rebalance2(s, info, vt, i - 1);
else
- r = rebalance3(s, info, i - 1);
+ r = rebalance3(s, info, vt, i - 1);
return r;
}
if (le32_to_cpu(n->header.flags) & LEAF_NODE)
return do_leaf(n, key, index);
- r = rebalance_children(s, info, key);
+ r = rebalance_children(s, info, vt, key);
if (r)
break;
return r;
}
+static struct dm_btree_value_type le64_type = {
+ .context = NULL,
+ .size = sizeof(__le64),
+ .inc = NULL,
+ .dec = NULL,
+ .equal = NULL
+};
+
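The point of threading the value type through rebalance2/rebalance3 is visible in init_child(): inc_children() must use the caller's increment hook, not the hard-coded le64_type whose hooks are all NULL, or reference-counted leaf values are silently skipped while nodes are shadowed. A toy model of the difference:

    #include <stdio.h>

    struct value_type {
    	void (*inc)(void *value);
    };

    static void bump(void *value)
    {
    	++*(int *)value;
    }

    static void inc_children(const struct value_type *vt, int *refs, int n)
    {
    	int i;

    	for (i = 0; i < n; i++)
    		if (vt->inc)
    			vt->inc(&refs[i]);
    }

    int main(void)
    {
    	int refs[2] = {1, 1};
    	const struct value_type plain   = { .inc = NULL };	/* old behaviour */
    	const struct value_type counted = { .inc = bump };	/* caller's type */

    	inc_children(&plain, refs, 2);		/* no effect: refs stay 1 1 */
    	inc_children(&counted, refs, 2);	/* refs become 2 2 */
    	printf("%d %d\n", refs[0], refs[1]);
    	return 0;
    }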
int dm_btree_remove(struct dm_btree_info *info, dm_block_t root,
uint64_t *keys, dm_block_t *new_root)
{
bi->bi_size = STRIPE_SIZE;
if (rrdev)
set_bit(R5_DOUBLE_LOCKED, &sh->dev[i].flags);
- trace_block_bio_remap(bdev_get_queue(bi->bi_bdev),
- bi, disk_devt(conf->mddev->gendisk),
- sh->dev[i].sector);
+
+ if (conf->mddev->gendisk)
+ trace_block_bio_remap(bdev_get_queue(bi->bi_bdev),
+ bi, disk_devt(conf->mddev->gendisk),
+ sh->dev[i].sector);
generic_make_request(bi);
}
if (rrdev) {
rbi->bi_io_vec[0].bv_len = STRIPE_SIZE;
rbi->bi_io_vec[0].bv_offset = 0;
rbi->bi_size = STRIPE_SIZE;
- trace_block_bio_remap(bdev_get_queue(rbi->bi_bdev),
- rbi, disk_devt(conf->mddev->gendisk),
- sh->dev[i].sector);
+ if (conf->mddev->gendisk)
+ trace_block_bio_remap(bdev_get_queue(rbi->bi_bdev),
+ rbi, disk_devt(conf->mddev->gendisk),
+ sh->dev[i].sector);
generic_make_request(rbi);
}
if (!rdev && !rrdev) {
int level = conf->level;
if (rcw) {
- /* if we are not expanding this is a proper write request, and
- * there will be bios with new data to be drained into the
- * stripe cache
- */
- if (!expand) {
- sh->reconstruct_state = reconstruct_state_drain_run;
- set_bit(STRIPE_OP_BIODRAIN, &s->ops_request);
- } else
- sh->reconstruct_state = reconstruct_state_run;
-
- set_bit(STRIPE_OP_RECONSTRUCT, &s->ops_request);
for (i = disks; i--; ) {
struct r5dev *dev = &sh->dev[i];
s->locked++;
}
}
+ /* if we are not expanding this is a proper write request, and
+ * there will be bios with new data to be drained into the
+ * stripe cache
+ */
+ if (!expand) {
+ if (!s->locked)
+ /* False alarm, nothing to do */
+ return;
+ sh->reconstruct_state = reconstruct_state_drain_run;
+ set_bit(STRIPE_OP_BIODRAIN, &s->ops_request);
+ } else
+ sh->reconstruct_state = reconstruct_state_run;
+
+ set_bit(STRIPE_OP_RECONSTRUCT, &s->ops_request);
+
if (s->locked + conf->max_degraded == disks)
if (!test_and_set_bit(STRIPE_FULL_WRITE, &sh->state))
atomic_inc(&conf->pending_full_writes);
BUG_ON(!(test_bit(R5_UPTODATE, &sh->dev[pd_idx].flags) ||
test_bit(R5_Wantcompute, &sh->dev[pd_idx].flags)));
- sh->reconstruct_state = reconstruct_state_prexor_drain_run;
- set_bit(STRIPE_OP_PREXOR, &s->ops_request);
- set_bit(STRIPE_OP_BIODRAIN, &s->ops_request);
- set_bit(STRIPE_OP_RECONSTRUCT, &s->ops_request);
-
for (i = disks; i--; ) {
struct r5dev *dev = &sh->dev[i];
if (i == pd_idx)
s->locked++;
}
}
+ if (!s->locked)
+ /* False alarm - nothing to do */
+ return;
+ sh->reconstruct_state = reconstruct_state_prexor_drain_run;
+ set_bit(STRIPE_OP_PREXOR, &s->ops_request);
+ set_bit(STRIPE_OP_BIODRAIN, &s->ops_request);
+ set_bit(STRIPE_OP_RECONSTRUCT, &s->ops_request);
}
/* keep the parity disk(s) locked while asynchronous operations
int i;
clear_bit(STRIPE_SYNCING, &sh->state);
+ if (test_and_clear_bit(R5_Overlap, &sh->dev[sh->pd_idx].flags))
+ wake_up(&conf->wait_for_overlap);
s->syncing = 0;
s->replacing = 0;
/* There is nothing more to do for sync/check/repair.
{
int i;
struct r5dev *dev;
+ int discard_pending = 0;
for (i = disks; i--; )
if (sh->dev[i].written) {
STRIPE_SECTORS,
!test_bit(STRIPE_DEGRADED, &sh->state),
0);
- }
- } else if (test_bit(R5_Discard, &sh->dev[i].flags))
- clear_bit(R5_Discard, &sh->dev[i].flags);
+ } else if (test_bit(R5_Discard, &dev->flags))
+ discard_pending = 1;
+ }
+ if (!discard_pending &&
+ test_bit(R5_Discard, &sh->dev[sh->pd_idx].flags)) {
+ clear_bit(R5_Discard, &sh->dev[sh->pd_idx].flags);
+ clear_bit(R5_UPTODATE, &sh->dev[sh->pd_idx].flags);
+ if (sh->qd_idx >= 0) {
+ clear_bit(R5_Discard, &sh->dev[sh->qd_idx].flags);
+ clear_bit(R5_UPTODATE, &sh->dev[sh->qd_idx].flags);
+ }
+ /* now that discard is done we can proceed with any sync */
+ clear_bit(STRIPE_DISCARD, &sh->state);
+ if (test_bit(STRIPE_SYNC_REQUESTED, &sh->state))
+ set_bit(STRIPE_HANDLE, &sh->state);
+
+ }
if (test_and_clear_bit(STRIPE_FULL_WRITE, &sh->state))
if (atomic_dec_and_test(&conf->pending_full_writes))
set_bit(STRIPE_HANDLE, &sh->state);
if (rmw < rcw && rmw > 0) {
/* prefer read-modify-write, but need to get some data */
- blk_add_trace_msg(conf->mddev->queue, "raid5 rmw %llu %d",
- (unsigned long long)sh->sector, rmw);
+ if (conf->mddev->queue)
+ blk_add_trace_msg(conf->mddev->queue,
+ "raid5 rmw %llu %d",
+ (unsigned long long)sh->sector, rmw);
for (i = disks; i--; ) {
struct r5dev *dev = &sh->dev[i];
if ((dev->towrite || i == sh->pd_idx) &&
}
}
}
- if (rcw)
+ if (rcw && conf->mddev->queue)
blk_add_trace_msg(conf->mddev->queue, "raid5 rcw %llu %d %d %d",
(unsigned long long)sh->sector,
rcw, qread, test_bit(STRIPE_DELAYED, &sh->state));
return;
}
- if (test_and_clear_bit(STRIPE_SYNC_REQUESTED, &sh->state)) {
- set_bit(STRIPE_SYNCING, &sh->state);
- clear_bit(STRIPE_INSYNC, &sh->state);
+ if (test_bit(STRIPE_SYNC_REQUESTED, &sh->state)) {
+ spin_lock(&sh->stripe_lock);
+ /* Cannot process 'sync' concurrently with 'discard' */
+ if (!test_bit(STRIPE_DISCARD, &sh->state) &&
+ test_and_clear_bit(STRIPE_SYNC_REQUESTED, &sh->state)) {
+ set_bit(STRIPE_SYNCING, &sh->state);
+ clear_bit(STRIPE_INSYNC, &sh->state);
+ }
+ spin_unlock(&sh->stripe_lock);
}
clear_bit(STRIPE_DELAYED, &sh->state);
test_bit(STRIPE_INSYNC, &sh->state)) {
md_done_sync(conf->mddev, STRIPE_SECTORS, 1);
clear_bit(STRIPE_SYNCING, &sh->state);
+ if (test_and_clear_bit(R5_Overlap, &sh->dev[sh->pd_idx].flags))
+ wake_up(&conf->wait_for_overlap);
}
/* If the failed drives are just a ReadError, then we might need
atomic_inc(&conf->active_aligned_reads);
spin_unlock_irq(&conf->device_lock);
- trace_block_bio_remap(bdev_get_queue(align_bi->bi_bdev),
- align_bi, disk_devt(mddev->gendisk),
- raid_bio->bi_sector);
+ if (mddev->gendisk)
+ trace_block_bio_remap(bdev_get_queue(align_bi->bi_bdev),
+ align_bi, disk_devt(mddev->gendisk),
+ raid_bio->bi_sector);
generic_make_request(align_bi);
return 1;
} else {
}
spin_unlock_irq(&conf->device_lock);
}
- trace_block_unplug(mddev->queue, cnt, !from_schedule);
+ if (mddev->queue)
+ trace_block_unplug(mddev->queue, cnt, !from_schedule);
kfree(cb);
}
sh = get_active_stripe(conf, logical_sector, 0, 0, 0);
prepare_to_wait(&conf->wait_for_overlap, &w,
TASK_UNINTERRUPTIBLE);
+ set_bit(R5_Overlap, &sh->dev[sh->pd_idx].flags);
+ if (test_bit(STRIPE_SYNCING, &sh->state)) {
+ release_stripe(sh);
+ schedule();
+ goto again;
+ }
+ clear_bit(R5_Overlap, &sh->dev[sh->pd_idx].flags);
spin_lock_irq(&sh->stripe_lock);
for (d = 0; d < conf->raid_disks; d++) {
if (d == sh->pd_idx || d == sh->qd_idx)
goto again;
}
}
+ set_bit(STRIPE_DISCARD, &sh->state);
finish_wait(&conf->wait_for_overlap, &w);
for (d = 0; d < conf->raid_disks; d++) {
if (d == sh->pd_idx || d == sh->qd_idx)
struct stripe_operations {
int target, target2;
enum sum_check_flags zero_sum_result;
- #ifdef CONFIG_MULTICORE_RAID456
- unsigned long request;
- wait_queue_head_t wait_for_ops;
- #endif
} ops;
struct r5dev {
/* rreq and rvec are used for the replacement device when
STRIPE_COMPUTE_RUN,
STRIPE_OPS_REQ_PENDING,
STRIPE_ON_UNPLUG_LIST,
+ STRIPE_DISCARD,
};
/*
if (enable) {
if (is_code(code, M5MOLS_RESTYPE_MONITOR))
ret = m5mols_start_monitor(info);
- if (is_code(code, M5MOLS_RESTYPE_CAPTURE))
+ else if (is_code(code, M5MOLS_RESTYPE_CAPTURE))
ret = m5mols_start_capture(info);
else
ret = -EINVAL;
vdelay start of active video in 2 * field lines relative to
trailing edge of /VRESET pulse (VDELAY register).
sheight height of active video in 2 * field lines.
+ extraheight Added to sheight for cropcap.bounds.height only
videostart0 ITU-R frame line number of the line corresponding
to vdelay in the first field. */
#define CROPCAP(minhdelayx1, hdelayx1, swidth, totalwidth, sqwidth, \
- vdelay, sheight, videostart0) \
+ vdelay, sheight, extraheight, videostart0) \
.cropcap.bounds.left = minhdelayx1, \
/* * 2 because vertically we count field lines times two, */ \
/* e.g. 23 * 2 to 23 * 2 + 576 in PAL-BGHI defrect. */ \
.cropcap.bounds.top = (videostart0) * 2 - (vdelay) + MIN_VDELAY, \
/* 4 is a safety margin at the end of the line. */ \
.cropcap.bounds.width = (totalwidth) - (minhdelayx1) - 4, \
- .cropcap.bounds.height = (sheight) + (vdelay) - MIN_VDELAY, \
+ .cropcap.bounds.height = (sheight) + (extraheight) + (vdelay) - \
+ MIN_VDELAY, \
.cropcap.defrect.left = hdelayx1, \
.cropcap.defrect.top = (videostart0) * 2, \
.cropcap.defrect.width = swidth, \
/* totalwidth */ 1135,
/* sqwidth */ 944,
/* vdelay */ 0x20,
- /* bt878 (and bt848?) can capture another
- line below active video. */
- /* sheight */ (576 + 2) + 0x20 - 2,
+ /* sheight */ 576,
+ /* bt878 (and bt848?) can capture another
+ line below active video. */
+ /* extraheight */ 2,
/* videostart0 */ 23)
},{
.v4l2_id = V4L2_STD_NTSC_M | V4L2_STD_NTSC_M_KR,
/* sqwidth */ 780,
/* vdelay */ 0x1a,
/* sheight */ 480,
+ /* extraheight */ 0,
/* videostart0 */ 23)
},{
.v4l2_id = V4L2_STD_SECAM,
/* sqwidth */ 944,
/* vdelay */ 0x20,
/* sheight */ 576,
+ /* extraheight */ 0,
/* videostart0 */ 23)
},{
.v4l2_id = V4L2_STD_PAL_Nc,
/* sqwidth */ 780,
/* vdelay */ 0x1a,
/* sheight */ 576,
+ /* extraheight */ 0,
/* videostart0 */ 23)
},{
.v4l2_id = V4L2_STD_PAL_M,
/* sqwidth */ 780,
/* vdelay */ 0x1a,
/* sheight */ 480,
+ /* extraheight */ 0,
/* videostart0 */ 23)
},{
.v4l2_id = V4L2_STD_PAL_N,
/* sqwidth */ 944,
/* vdelay */ 0x20,
/* sheight */ 576,
+ /* extraheight */ 0,
/* videostart0 */ 23)
},{
.v4l2_id = V4L2_STD_NTSC_M_JP,
/* sqwidth */ 780,
/* vdelay */ 0x16,
/* sheight */ 480,
+ /* extraheight */ 0,
/* videostart0 */ 23)
},{
/* that one hopefully works with the strange timing
/* sqwidth */ 944,
/* vdelay */ 0x1a,
/* sheight */ 480,
+ /* extraheight */ 0,
/* videostart0 */ 23)
}
};
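With the macro change, cropcap.bounds.height for the PAL-BGHI entry works out as sheight + extraheight + vdelay - MIN_VDELAY; the two extra lines the bt878 can capture now live in their own parameter instead of being folded into sheight. A worked instance (MIN_VDELAY is assumed to be 2 here purely for illustration; see the driver header for the real value):

    #include <stdio.h>

    #define MIN_VDELAY 2	/* assumption for this example */

    int main(void)
    {
    	int sheight = 576, extraheight = 2, vdelay = 0x20;

    	printf("bounds.height = %d\n",
    	       sheight + extraheight + vdelay - MIN_VDELAY);	/* 608 */
    	return 0;
    }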
static int gsc_m2m_resume(struct gsc_dev *gsc)
{
+ struct gsc_ctx *ctx;
unsigned long flags;
spin_lock_irqsave(&gsc->slock, flags);
/* Clear for full H/W setup in first run after resume */
+ ctx = gsc->m2m.ctx;
gsc->m2m.ctx = NULL;
spin_unlock_irqrestore(&gsc->slock, flags);
if (test_and_clear_bit(ST_M2M_SUSPENDED, &gsc->state))
- gsc_m2m_job_finish(gsc->m2m.ctx,
- VB2_BUF_STATE_ERROR);
+ gsc_m2m_job_finish(ctx, VB2_BUF_STATE_ERROR);
+
return 0;
}
/* Do not resume if the device was idle before system suspend */
spin_lock_irqsave(&gsc->slock, flags);
if (!test_and_clear_bit(ST_SUSPEND, &gsc->state) ||
- !gsc_m2m_active(gsc)) {
+ !gsc_m2m_opened(gsc)) {
spin_unlock_irqrestore(&gsc->slock, flags);
return 0;
}
static int fimc_m2m_resume(struct fimc_dev *fimc)
{
+ struct fimc_ctx *ctx;
unsigned long flags;
spin_lock_irqsave(&fimc->slock, flags);
/* Clear for full H/W setup in first run after resume */
+ ctx = fimc->m2m.ctx;
fimc->m2m.ctx = NULL;
spin_unlock_irqrestore(&fimc->slock, flags);
if (test_and_clear_bit(ST_M2M_SUSPENDED, &fimc->state))
- fimc_m2m_job_finish(fimc->m2m.ctx,
- VB2_BUF_STATE_ERROR);
+ fimc_m2m_job_finish(ctx, VB2_BUF_STATE_ERROR);
+
return 0;
}
void flite_hw_set_source_format(struct fimc_lite *dev, struct flite_frame *f)
{
enum v4l2_mbus_pixelcode pixelcode = dev->fmt->mbus_code;
- unsigned int i = ARRAY_SIZE(src_pixfmt_map);
+ int i = ARRAY_SIZE(src_pixfmt_map);
u32 cfg;
- while (i-- >= 0) {
+ while (--i >= 0) {
if (src_pixfmt_map[i][0] == pixelcode)
break;
}
{ V4L2_MBUS_FMT_VYUY8_2X8, FLITE_REG_CIODMAFMT_CRYCBY },
};
u32 cfg = readl(dev->regs + FLITE_REG_CIODMAFMT);
- unsigned int i = ARRAY_SIZE(pixcode);
+ int i = ARRAY_SIZE(pixcode);
- while (i-- >= 0)
+ while (--i >= 0)
if (pixcode[i][0] == dev->fmt->mbus_code)
break;
cfg &= ~FLITE_REG_CIODMAFMT_YCBCR_ORDER_MASK;
.id = V4L2_CTRL_CLASS_USER | 0x1001,
.type = V4L2_CTRL_TYPE_BOOLEAN,
.name = "Test Pattern 640x480",
+ .step = 1,
};
static int fimc_lite_create_capture_subdev(struct fimc_lite *fimc)
struct fimc_pipeline *pipeline;
struct v4l2_subdev *sd;
struct mutex *lock;
- int ret = 0;
+ int i, ret = 0;
int ref_count;
if (media_entity_type(sink->entity) != MEDIA_ENT_T_V4L2_SUBDEV)
return 0;
}
+ mutex_lock(lock);
+ ref_count = fimc ? fimc->vid_cap.refcnt : fimc_lite->ref_count;
+
if (!(flags & MEDIA_LNK_FL_ENABLED)) {
- int i;
- mutex_lock(lock);
- ret = __fimc_pipeline_close(pipeline);
+ if (ref_count > 0) {
+ ret = __fimc_pipeline_close(pipeline);
+ if (!ret && fimc)
+ fimc_ctrls_delete(fimc->vid_cap.ctx);
+ }
for (i = 0; i < IDX_MAX; i++)
pipeline->subdevs[i] = NULL;
- if (fimc)
- fimc_ctrls_delete(fimc->vid_cap.ctx);
- mutex_unlock(lock);
- return ret;
+ } else if (ref_count > 0) {
+ /*
+ * Link activation. Enable power of pipeline elements only if
+ * the pipeline is already in use, i.e. its video node is open.
+ * Recreate the controls destroyed during the link deactivation.
+ */
+ ret = __fimc_pipeline_open(pipeline,
+ source->entity, true);
+ if (!ret && fimc)
+ ret = fimc_capture_ctrls_create(fimc);
}
- /*
- * Link activation. Enable power of pipeline elements only if the
- * pipeline is already in use, i.e. its video node is opened.
- * Recreate the controls destroyed during the link deactivation.
- */
- mutex_lock(lock);
-
- ref_count = fimc ? fimc->vid_cap.refcnt : fimc_lite->ref_count;
- if (ref_count > 0)
- ret = __fimc_pipeline_open(pipeline, source->entity, true);
- if (!ret && fimc)
- ret = fimc_capture_ctrls_create(fimc);
mutex_unlock(lock);
return ret ? -EPIPE : ret;
unsigned int frame_type;
dspl_y_addr = s5p_mfc_hw_call(dev->mfc_ops, get_dspl_y_adr, dev);
- frame_type = s5p_mfc_hw_call(dev->mfc_ops, get_dec_frame_type, dev);
+ frame_type = s5p_mfc_hw_call(dev->mfc_ops, get_disp_frame_type, ctx);
/* If frame is same as previous then skip and do not dequeue */
if (frame_type == S5P_FIMV_DECODE_FRAME_SKIPPED) {
.minimum = 0,
.maximum = 1,
.default_value = 0,
+ .step = 1,
.menu_skip_mask = 0,
},
{
config IR_RX51
tristate "Nokia N900 IR transmitter diode"
- depends on OMAP_DM_TIMER && LIRC && !ARCH_MULTIPLATFORM
+ depends on OMAP_DM_TIMER && ARCH_OMAP2PLUS && LIRC && !ARCH_MULTIPLATFORM
---help---
Say Y or M here if you want to enable support for the IR
transmitter diode built in the Nokia N900 (RX51) device.
videodev-objs += v4l2-compat-ioctl32.o
endif
-obj-$(CONFIG_VIDEO_DEV) += videodev.o
+obj-$(CONFIG_VIDEO_V4L2) += videodev.o
obj-$(CONFIG_VIDEO_V4L2_INT_DEVICE) += v4l2-int-device.o
obj-$(CONFIG_VIDEO_V4L2) += v4l2-common.o
mei_hcsr_set(hw, hcsr);
}
+/**
+ * mei_me_hw_reset_release - release device from the reset
+ *
+ * @dev: the device structure
+ */
+static void mei_me_hw_reset_release(struct mei_device *dev)
+{
+ struct mei_me_hw *hw = to_me_hw(dev);
+ u32 hcsr = mei_hcsr_read(hw);
+
+ hcsr |= H_IG;
+ hcsr &= ~H_RST;
+ mei_hcsr_set(hw, hcsr);
+}
/**
* mei_me_hw_reset - resets fw via mei csr register.
*
if (intr_enable)
hcsr |= H_IE;
else
- hcsr &= ~H_IE;
-
- mei_hcsr_set(hw, hcsr);
-
- hcsr = mei_hcsr_read(hw) | H_IG;
- hcsr &= ~H_RST;
+ hcsr &= ~H_IE;
mei_hcsr_set(hw, hcsr);
- hcsr = mei_hcsr_read(hw);
+ if (dev->dev_state == MEI_DEV_POWER_DOWN)
+ mei_me_hw_reset_release(dev);
- dev_dbg(&dev->pdev->dev, "current HCSR = 0x%08x.\n", hcsr);
+ dev_dbg(&dev->pdev->dev, "current HCSR = 0x%08x.\n", mei_hcsr_read(hw));
}
/**
mutex_unlock(&dev->device_lock);
return IRQ_HANDLED;
} else {
- dev_dbg(&dev->pdev->dev, "FW not ready.\n");
+ dev_dbg(&dev->pdev->dev, "Reset Completed.\n");
+ mei_me_hw_reset_release(dev);
mutex_unlock(&dev->device_lock);
return IRQ_HANDLED;
}
mei_cl_all_write_clear(dev);
}
+void mei_stop(struct mei_device *dev)
+{
+ dev_dbg(&dev->pdev->dev, "stopping the device.\n");
+
+ mutex_lock(&dev->device_lock);
+
+ cancel_delayed_work(&dev->timer_work);
+
+ mei_wd_stop(dev);
+
+ dev->dev_state = MEI_DEV_POWER_DOWN;
+ mei_reset(dev, 0);
+
+ mutex_unlock(&dev->device_lock);
+
+ flush_scheduled_work();
+}
+
void mei_device_init(struct mei_device *dev);
void mei_reset(struct mei_device *dev, int interrupts);
int mei_hw_init(struct mei_device *dev);
+void mei_stop(struct mei_device *dev);
/*
* MEI interrupt functions prototype
hw = to_me_hw(dev);
- mutex_lock(&dev->device_lock);
-
- cancel_delayed_work(&dev->timer_work);
- mei_wd_stop(dev);
+ dev_err(&pdev->dev, "stop\n");
+ mei_stop(dev);
mei_pdev = NULL;
- if (dev->iamthif_cl.state == MEI_FILE_CONNECTED) {
- dev->iamthif_cl.state = MEI_FILE_DISCONNECTING;
- mei_cl_disconnect(&dev->iamthif_cl);
- }
- if (dev->wd_cl.state == MEI_FILE_CONNECTED) {
- dev->wd_cl.state = MEI_FILE_DISCONNECTING;
- mei_cl_disconnect(&dev->wd_cl);
- }
-
- /* Unregistering watchdog device */
mei_watchdog_unregister(dev);
- /* remove entry if already in list */
- dev_dbg(&pdev->dev, "list del iamthif and wd file list.\n");
-
- if (dev->open_handle_count > 0)
- dev->open_handle_count--;
- mei_cl_unlink(&dev->wd_cl);
-
- if (dev->open_handle_count > 0)
- dev->open_handle_count--;
- mei_cl_unlink(&dev->iamthif_cl);
-
- dev->iamthif_current_cb = NULL;
- dev->me_clients_num = 0;
-
- mutex_unlock(&dev->device_lock);
-
- flush_scheduled_work();
-
/* disable interrupts */
mei_disable_interrupts(dev);
{
struct pci_dev *pdev = to_pci_dev(device);
struct mei_device *dev = pci_get_drvdata(pdev);
- int err;
if (!dev)
return -ENODEV;
- mutex_lock(&dev->device_lock);
- cancel_delayed_work(&dev->timer_work);
+ dev_err(&pdev->dev, "suspend\n");
- /* Stop watchdog if exists */
- err = mei_wd_stop(dev);
- /* Set new mei state */
- if (dev->dev_state == MEI_DEV_ENABLED ||
- dev->dev_state == MEI_DEV_RECOVERING_FROM_RESET) {
- dev->dev_state = MEI_DEV_POWER_DOWN;
- mei_reset(dev, 0);
- }
- mutex_unlock(&dev->device_lock);
+ mei_stop(dev);
+
+ mei_disable_interrupts(dev);
free_irq(pdev->irq, dev);
pci_disable_msi(pdev);
- return err;
+ return 0;
}
static int mei_pci_resume(struct device *device)
struct delayed_datagram_info {
struct datagram_entry *entry;
- struct vmci_datagram msg;
struct work_struct work;
bool in_dg_host_queue;
+ /* msg and msg_payload must be together. */
+ struct vmci_datagram msg;
+ u8 msg_payload[];
};
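Placing msg last and adding the flexible msg_payload[] array guarantees the datagram payload lands contiguously after its header, which the copy below depends on. A sketch of the usual allocation pattern for such a structure (the source datagram dg is assumed):

struct delayed_datagram_info *dg_info;

/* one allocation covers the struct plus the trailing payload */
dg_info = kmalloc(sizeof(*dg_info) + (size_t)dg->payload_size, GFP_ATOMIC);
if (dg_info)
	/* header and payload copy into msg/msg_payload in one go */
	memcpy(&dg_info->msg, dg, VMCI_DG_SIZE(dg));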
/* Number of in-flight host->host datagrams */
/* 10 parts were found on sflash on Netgear WNDR4500 */
#define BCM47XXPART_MAX_PARTS 12
+/*
+ * Number of bytes we read when analyzing each block of flash memory.
+ * Set it big enough to allow detecting a partition and reading the important data.
+ */
+#define BCM47XXPART_BYTES_TO_READ 0x404
+
/* Magics */
#define BOARD_DATA_MAGIC 0x5246504D /* MPFR */
#define POT_MAGIC1 0x54544f50 /* POTT */
struct trx_header *trx;
int trx_part = -1;
int last_trx_part = -1;
- int max_bytes_to_read = 0x8004;
+ int possible_nvram_sizes[] = { 0x8000, 0xF000, 0x10000, };
if (blocksize <= 0x10000)
blocksize = 0x10000;
- if (blocksize == 0x20000)
- max_bytes_to_read = 0x18004;
/* Alloc */
parts = kzalloc(sizeof(struct mtd_partition) * BCM47XXPART_MAX_PARTS,
GFP_KERNEL);
- buf = kzalloc(max_bytes_to_read, GFP_KERNEL);
+ buf = kzalloc(BCM47XXPART_BYTES_TO_READ, GFP_KERNEL);
/* Parse block by block looking for magics */
for (offset = 0; offset <= master->size - blocksize;
}
/* Read beginning of the block */
- if (mtd_read(master, offset, max_bytes_to_read,
+ if (mtd_read(master, offset, BCM47XXPART_BYTES_TO_READ,
&bytes_read, (uint8_t *)buf) < 0) {
pr_err("mtd_read error while parsing (offset: 0x%X)!\n",
offset);
continue;
}
- /* Standard NVRAM */
- if (buf[0x000 / 4] == NVRAM_HEADER ||
- buf[0x1000 / 4] == NVRAM_HEADER ||
- buf[0x8000 / 4] == NVRAM_HEADER ||
- (blocksize == 0x20000 && (
- buf[0x10000 / 4] == NVRAM_HEADER ||
- buf[0x11000 / 4] == NVRAM_HEADER ||
- buf[0x18000 / 4] == NVRAM_HEADER))) {
- bcm47xxpart_add_part(&parts[curr_part++], "nvram",
- offset, 0);
- offset = rounddown(offset, blocksize);
- continue;
- }
-
/*
* board_data starts with board_id which differs across boards,
* but we can use 'MPFR' (hopefully) magic at 0x100
continue;
}
}
+
+ /* Look for NVRAM at the end of the last block. */
+ for (i = 0; i < ARRAY_SIZE(possible_nvram_sizes); i++) {
+ if (curr_part >= BCM47XXPART_MAX_PARTS) {
+ pr_warn("Reached maximum number of partitions, scanning stopped!\n");
+ break;
+ }
+
+ offset = master->size - possible_nvram_sizes[i];
+ if (mtd_read(master, offset, 0x4, &bytes_read,
+ (uint8_t *)buf) < 0) {
+ pr_err("mtd_read error while reading at offset 0x%X!\n",
+ offset);
+ continue;
+ }
+
+ /* Standard NVRAM */
+ if (buf[0] == NVRAM_HEADER) {
+ bcm47xxpart_add_part(&parts[curr_part++], "nvram",
+ master->size - blocksize, 0);
+ break;
+ }
+ }
+
kfree(buf);
/*
oobreadlen -= toread;
}
}
+
+ if (chip->options & NAND_NEED_READRDY) {
+ /* Apply delay or wait for ready/busy pin */
+ if (!chip->dev_ready)
+ udelay(chip->chip_delay);
+ else
+ nand_wait_ready(mtd);
+ }
} else {
memcpy(buf, chip->buffers->databuf + col, bytes);
buf += bytes;
len = min(len, readlen);
buf = nand_transfer_oob(chip, buf, ops, len);
+ if (chip->options & NAND_NEED_READRDY) {
+ /* Apply delay or wait for ready/busy pin */
+ if (!chip->dev_ready)
+ udelay(chip->chip_delay);
+ else
+ nand_wait_ready(mtd);
+ }
+
readlen -= len;
if (!readlen)
break;
* 512 512 Byte page size
*/
struct nand_flash_dev nand_flash_ids[] = {
+#define SP_OPTIONS NAND_NEED_READRDY
+#define SP_OPTIONS16 (SP_OPTIONS | NAND_BUSWIDTH_16)
#ifdef CONFIG_MTD_NAND_MUSEUM_IDS
- {"NAND 1MiB 5V 8-bit", 0x6e, 256, 1, 0x1000, 0},
- {"NAND 2MiB 5V 8-bit", 0x64, 256, 2, 0x1000, 0},
- {"NAND 4MiB 5V 8-bit", 0x6b, 512, 4, 0x2000, 0},
- {"NAND 1MiB 3,3V 8-bit", 0xe8, 256, 1, 0x1000, 0},
- {"NAND 1MiB 3,3V 8-bit", 0xec, 256, 1, 0x1000, 0},
- {"NAND 2MiB 3,3V 8-bit", 0xea, 256, 2, 0x1000, 0},
- {"NAND 4MiB 3,3V 8-bit", 0xd5, 512, 4, 0x2000, 0},
- {"NAND 4MiB 3,3V 8-bit", 0xe3, 512, 4, 0x2000, 0},
- {"NAND 4MiB 3,3V 8-bit", 0xe5, 512, 4, 0x2000, 0},
- {"NAND 8MiB 3,3V 8-bit", 0xd6, 512, 8, 0x2000, 0},
-
- {"NAND 8MiB 1,8V 8-bit", 0x39, 512, 8, 0x2000, 0},
- {"NAND 8MiB 3,3V 8-bit", 0xe6, 512, 8, 0x2000, 0},
- {"NAND 8MiB 1,8V 16-bit", 0x49, 512, 8, 0x2000, NAND_BUSWIDTH_16},
- {"NAND 8MiB 3,3V 16-bit", 0x59, 512, 8, 0x2000, NAND_BUSWIDTH_16},
+ {"NAND 1MiB 5V 8-bit", 0x6e, 256, 1, 0x1000, SP_OPTIONS},
+ {"NAND 2MiB 5V 8-bit", 0x64, 256, 2, 0x1000, SP_OPTIONS},
+ {"NAND 4MiB 5V 8-bit", 0x6b, 512, 4, 0x2000, SP_OPTIONS},
+ {"NAND 1MiB 3,3V 8-bit", 0xe8, 256, 1, 0x1000, SP_OPTIONS},
+ {"NAND 1MiB 3,3V 8-bit", 0xec, 256, 1, 0x1000, SP_OPTIONS},
+ {"NAND 2MiB 3,3V 8-bit", 0xea, 256, 2, 0x1000, SP_OPTIONS},
+ {"NAND 4MiB 3,3V 8-bit", 0xd5, 512, 4, 0x2000, SP_OPTIONS},
+ {"NAND 4MiB 3,3V 8-bit", 0xe3, 512, 4, 0x2000, SP_OPTIONS},
+ {"NAND 4MiB 3,3V 8-bit", 0xe5, 512, 4, 0x2000, SP_OPTIONS},
+ {"NAND 8MiB 3,3V 8-bit", 0xd6, 512, 8, 0x2000, SP_OPTIONS},
+
+ {"NAND 8MiB 1,8V 8-bit", 0x39, 512, 8, 0x2000, SP_OPTIONS},
+ {"NAND 8MiB 3,3V 8-bit", 0xe6, 512, 8, 0x2000, SP_OPTIONS},
+ {"NAND 8MiB 1,8V 16-bit", 0x49, 512, 8, 0x2000, SP_OPTIONS16},
+ {"NAND 8MiB 3,3V 16-bit", 0x59, 512, 8, 0x2000, SP_OPTIONS16},
#endif
- {"NAND 16MiB 1,8V 8-bit", 0x33, 512, 16, 0x4000, 0},
- {"NAND 16MiB 3,3V 8-bit", 0x73, 512, 16, 0x4000, 0},
- {"NAND 16MiB 1,8V 16-bit", 0x43, 512, 16, 0x4000, NAND_BUSWIDTH_16},
- {"NAND 16MiB 3,3V 16-bit", 0x53, 512, 16, 0x4000, NAND_BUSWIDTH_16},
-
- {"NAND 32MiB 1,8V 8-bit", 0x35, 512, 32, 0x4000, 0},
- {"NAND 32MiB 3,3V 8-bit", 0x75, 512, 32, 0x4000, 0},
- {"NAND 32MiB 1,8V 16-bit", 0x45, 512, 32, 0x4000, NAND_BUSWIDTH_16},
- {"NAND 32MiB 3,3V 16-bit", 0x55, 512, 32, 0x4000, NAND_BUSWIDTH_16},
-
- {"NAND 64MiB 1,8V 8-bit", 0x36, 512, 64, 0x4000, 0},
- {"NAND 64MiB 3,3V 8-bit", 0x76, 512, 64, 0x4000, 0},
- {"NAND 64MiB 1,8V 16-bit", 0x46, 512, 64, 0x4000, NAND_BUSWIDTH_16},
- {"NAND 64MiB 3,3V 16-bit", 0x56, 512, 64, 0x4000, NAND_BUSWIDTH_16},
-
- {"NAND 128MiB 1,8V 8-bit", 0x78, 512, 128, 0x4000, 0},
- {"NAND 128MiB 1,8V 8-bit", 0x39, 512, 128, 0x4000, 0},
- {"NAND 128MiB 3,3V 8-bit", 0x79, 512, 128, 0x4000, 0},
- {"NAND 128MiB 1,8V 16-bit", 0x72, 512, 128, 0x4000, NAND_BUSWIDTH_16},
- {"NAND 128MiB 1,8V 16-bit", 0x49, 512, 128, 0x4000, NAND_BUSWIDTH_16},
- {"NAND 128MiB 3,3V 16-bit", 0x74, 512, 128, 0x4000, NAND_BUSWIDTH_16},
- {"NAND 128MiB 3,3V 16-bit", 0x59, 512, 128, 0x4000, NAND_BUSWIDTH_16},
-
- {"NAND 256MiB 3,3V 8-bit", 0x71, 512, 256, 0x4000, 0},
+ {"NAND 16MiB 1,8V 8-bit", 0x33, 512, 16, 0x4000, SP_OPTIONS},
+ {"NAND 16MiB 3,3V 8-bit", 0x73, 512, 16, 0x4000, SP_OPTIONS},
+ {"NAND 16MiB 1,8V 16-bit", 0x43, 512, 16, 0x4000, SP_OPTIONS16},
+ {"NAND 16MiB 3,3V 16-bit", 0x53, 512, 16, 0x4000, SP_OPTIONS16},
+
+ {"NAND 32MiB 1,8V 8-bit", 0x35, 512, 32, 0x4000, SP_OPTIONS},
+ {"NAND 32MiB 3,3V 8-bit", 0x75, 512, 32, 0x4000, SP_OPTIONS},
+ {"NAND 32MiB 1,8V 16-bit", 0x45, 512, 32, 0x4000, SP_OPTIONS16},
+ {"NAND 32MiB 3,3V 16-bit", 0x55, 512, 32, 0x4000, SP_OPTIONS16},
+
+ {"NAND 64MiB 1,8V 8-bit", 0x36, 512, 64, 0x4000, SP_OPTIONS},
+ {"NAND 64MiB 3,3V 8-bit", 0x76, 512, 64, 0x4000, SP_OPTIONS},
+ {"NAND 64MiB 1,8V 16-bit", 0x46, 512, 64, 0x4000, SP_OPTIONS16},
+ {"NAND 64MiB 3,3V 16-bit", 0x56, 512, 64, 0x4000, SP_OPTIONS16},
+
+ {"NAND 128MiB 1,8V 8-bit", 0x78, 512, 128, 0x4000, SP_OPTIONS},
+ {"NAND 128MiB 1,8V 8-bit", 0x39, 512, 128, 0x4000, SP_OPTIONS},
+ {"NAND 128MiB 3,3V 8-bit", 0x79, 512, 128, 0x4000, SP_OPTIONS},
+ {"NAND 128MiB 1,8V 16-bit", 0x72, 512, 128, 0x4000, SP_OPTIONS16},
+ {"NAND 128MiB 1,8V 16-bit", 0x49, 512, 128, 0x4000, SP_OPTIONS16},
+ {"NAND 128MiB 3,3V 16-bit", 0x74, 512, 128, 0x4000, SP_OPTIONS16},
+ {"NAND 128MiB 3,3V 16-bit", 0x59, 512, 128, 0x4000, SP_OPTIONS16},
+
+ {"NAND 256MiB 3,3V 8-bit", 0x71, 512, 256, 0x4000, SP_OPTIONS},
/*
* These are the new chips with large page size. The pagesize and the
bond_compute_features(bond);
+ bond_update_speed_duplex(new_slave);
+
read_lock(&bond->lock);
new_slave->last_arp_rx = jiffies -
new_slave->link == BOND_LINK_DOWN ? "DOWN" :
(new_slave->link == BOND_LINK_UP ? "UP" : "BACK"));
- bond_update_speed_duplex(new_slave);
-
if (USES_PRIMARY(bond->params.mode) && bond->params.primary[0]) {
/* if there is a primary slave, remember it */
if (strcmp(bond->params.primary, new_slave->dev->name) == 0) {
bond_set_backup_slave(slave);
}
- bond_update_speed_duplex(slave);
-
pr_info("%s: link status definitely up for interface %s, %u Mbps %s duplex.\n",
bond->dev->name, slave->dev->name,
slave->speed, slave->duplex ? "full" : "half");
sprintf(linkname, "slave_%s", slave->name);
ret = sysfs_create_link(&(master->dev.kobj), &(slave->dev.kobj),
linkname);
+
+ /* free the master link created earlier in case of error */
+ if (ret)
+ sysfs_remove_link(&(slave->dev.kobj), "master");
+
return ret;
}
bp->port.pmf = 0;
load_error1:
bnx2x_napi_disable(bp);
+ bnx2x_del_all_napi(bp);
/* clear pf_load status, as it was already set */
if (IS_PF(bp))
break;
default:
BNX2X_ERR("Non valid capability ID\n");
- rval = -EINVAL;
+ rval = 1;
break;
}
} else {
DP(BNX2X_MSG_DCB, "DCB disabled\n");
- rval = -EINVAL;
+ rval = 1;
}
DP(BNX2X_MSG_DCB, "capid %d:%x\n", capid, *cap);
break;
default:
BNX2X_ERR("Non valid TC-ID\n");
- rval = -EINVAL;
+ rval = 1;
break;
}
} else {
DP(BNX2X_MSG_DCB, "DCB disabled\n");
- rval = -EINVAL;
+ rval = 1;
}
return rval;
return -EINVAL;
}
-static u8 bnx2x_dcbnl_get_pfc_state(struct net_device *netdev)
+static u8 bnx2x_dcbnl_get_pfc_state(struct net_device *netdev)
{
struct bnx2x *bp = netdev_priv(netdev);
DP(BNX2X_MSG_DCB, "state = %d\n", bp->dcbx_local_feat.pfc.enabled);
break;
default:
BNX2X_ERR("Non valid featrue-ID\n");
- rval = -EINVAL;
+ rval = 1;
break;
}
} else {
DP(BNX2X_MSG_DCB, "DCB disabled\n");
- rval = -EINVAL;
+ rval = 1;
}
return rval;
break;
default:
BNX2X_ERR("Non valid featrue-ID\n");
- rval = -EINVAL;
+ rval = 1;
break;
}
} else {
DP(BNX2X_MSG_DCB, "dcbnl call not valid\n");
- rval = -EINVAL;
+ rval = 1;
}
return rval;
#define UPDATE_QSTAT(s, t) \
do { \
- qstats->t##_hi = qstats_old->t##_hi + le32_to_cpu(s.hi); \
qstats->t##_lo = qstats_old->t##_lo + le32_to_cpu(s.lo); \
+ qstats->t##_hi = qstats_old->t##_hi + le32_to_cpu(s.hi) \
+ + ((qstats->t##_lo < qstats_old->t##_lo) ? 1 : 0); \
} while (0)
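The reordering is deliberate: the low word must be summed first so that the carry into the high word can be detected as a wraparound (the new low word comes out smaller than the old one). The same logic in standalone form, with illustrative names:

static void accumulate64(u32 *hi, u32 *lo, u32 add_hi, u32 add_lo)
{
	u32 old_lo = *lo;

	*lo = old_lo + add_lo;
	/* if the new low word is smaller, the addition wrapped */
	*hi += add_hi + (*lo < old_lo ? 1 : 0);
}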
#define UPDATE_QSTAT_OLD(f) \
tp->link_config.active_speed = tp->link_config.speed;
tp->link_config.active_duplex = tp->link_config.duplex;
+ if (tg3_asic_rev(tp) == ASIC_REV_5714) {
+ /* With autoneg disabled, 5715 only links up when the
+ * advertisement register has the configured speed
+ * enabled.
+ */
+ tg3_writephy(tp, MII_ADVERTISE, ADVERTISE_ALL);
+ }
+
bmcr = 0;
switch (tp->link_config.speed) {
default:
}
#define EEPROM_STAT_ADDR 0x7bfc
-#define VPD_BASE 0
#define VPD_LEN 512
+#define VPD_BASE 0x400
+#define VPD_BASE_OLD 0
/**
* t4_seeprom_wp - enable/disable EEPROM write protection
int get_vpd_params(struct adapter *adapter, struct vpd_params *p)
{
u32 cclk_param, cclk_val;
- int i, ret;
+ int i, ret, addr;
int ec, sn;
u8 *vpd, csum;
unsigned int vpdr_len, kw_offset, id_len;
if (!vpd)
return -ENOMEM;
- ret = pci_read_vpd(adapter->pdev, VPD_BASE, VPD_LEN, vpd);
+ ret = pci_read_vpd(adapter->pdev, VPD_BASE, sizeof(u32), vpd);
+ if (ret < 0)
+ goto out;
+ addr = *vpd == 0x82 ? VPD_BASE : VPD_BASE_OLD;
+
+ ret = pci_read_vpd(adapter->pdev, addr, VPD_LEN, vpd);
if (ret < 0)
goto out;
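The detection works because a well-formed VPD image starts with an Identifier String large-resource tag (0x82); reading a single dword at the new offset is enough to tell the layouts apart. The probe step reduced to a sketch (t4_vpd_base is illustrative, not a driver function):

static int t4_vpd_base(struct pci_dev *pdev)
{
	u8 tag;
	int ret = pci_read_vpd(pdev, VPD_BASE, 1, &tag);

	if (ret < 0)
		return ret;
	/* 0x82 is the mandatory first tag of a VPD image */
	return tag == 0x82 ? VPD_BASE : VPD_BASE_OLD;
}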
config DE4X5
tristate "Generic DECchip & DIGITAL EtherWORKS PCI/EISA"
depends on (PCI || EISA)
+ depends on VIRT_TO_BUS || ALPHA || PPC || SPARC
select CRC32
---help---
This is support for the DIGITAL series of PCI/EISA Ethernet cards.
goto spin_unlock;
}
- /* Duplex link change */
if (phy_dev->link) {
- if (fep->full_duplex != phy_dev->duplex) {
- fec_restart(ndev, phy_dev->duplex);
- /* prevent unnecessary second fec_restart() below */
+ if (!fep->link) {
fep->link = phy_dev->link;
status_change = 1;
}
- }
- /* Link on or off change */
- if (phy_dev->link != fep->link) {
- fep->link = phy_dev->link;
- if (phy_dev->link)
+ if (fep->full_duplex != phy_dev->duplex)
+ status_change = 1;
+
+ if (phy_dev->speed != fep->speed) {
+ fep->speed = phy_dev->speed;
+ status_change = 1;
+ }
+
+ /* if any of the above changed restart the FEC */
+ if (status_change)
fec_restart(ndev, phy_dev->duplex);
- else
+ } else {
+ if (fep->link) {
fec_stop(ndev);
- status_change = 1;
+ status_change = 1;
+ }
}
spin_unlock:
static void fec_enet_free_buffers(struct net_device *ndev)
{
struct fec_enet_private *fep = netdev_priv(ndev);
- int i;
+ unsigned int i;
struct sk_buff *skb;
struct bufdesc *bdp;
static int fec_enet_alloc_buffers(struct net_device *ndev)
{
struct fec_enet_private *fep = netdev_priv(ndev);
- int i;
+ unsigned int i;
struct sk_buff *skb;
struct bufdesc *bdp;
struct fec_enet_private *fep = netdev_priv(ndev);
/* Don't know what to do yet. */
+ napi_disable(&fep->napi);
fep->opened = 0;
netif_stop_queue(ndev);
fec_stop(ndev);
struct fec_enet_private *fep = netdev_priv(ndev);
struct bufdesc *cbd_base;
struct bufdesc *bdp;
- int i;
+ unsigned int i;
/* Allocate memory for buffer descriptors. */
cbd_base = dma_alloc_coherent(NULL, PAGE_SIZE, &fep->bd_dma,
phy_interface_t phy_interface;
int link;
int full_duplex;
+ int speed;
struct completion mdio_done;
int irq[FEC_IRQ_NUM];
int bufdesc_ex;
spin_unlock_irqrestore(&fep->tmreg_lock, flags);
}
+EXPORT_SYMBOL(fec_ptp_start_cyclecounter);
/**
* fec_ptp_adjfreq - adjust ptp cycle frequency
return copy_to_user(ifr->ifr_data, &config, sizeof(config)) ?
-EFAULT : 0;
}
+EXPORT_SYMBOL(fec_ptp_ioctl);
/**
* fec_time_keep - call timecounter_read every second to avoid timer overrun
pr_info("registered PHC device on %s\n", ndev->name);
}
}
+EXPORT_SYMBOL(fec_ptp_init);
**/
void igb_vmdq_set_anti_spoofing_pf(struct e1000_hw *hw, bool enable, int pf)
{
- u32 dtxswc;
+ u32 reg_val, reg_offset;
switch (hw->mac.type) {
case e1000_82576:
+ reg_offset = E1000_DTXSWC;
+ break;
case e1000_i350:
- dtxswc = rd32(E1000_DTXSWC);
- if (enable) {
- dtxswc |= (E1000_DTXSWC_MAC_SPOOF_MASK |
- E1000_DTXSWC_VLAN_SPOOF_MASK);
- /* The PF can spoof - it has to in order to
- * support emulation mode NICs */
- dtxswc ^= (1 << pf | 1 << (pf + MAX_NUM_VFS));
- } else {
- dtxswc &= ~(E1000_DTXSWC_MAC_SPOOF_MASK |
- E1000_DTXSWC_VLAN_SPOOF_MASK);
- }
- wr32(E1000_DTXSWC, dtxswc);
+ reg_offset = E1000_TXSWC;
break;
default:
- break;
+ return;
+ }
+
+ reg_val = rd32(reg_offset);
+ if (enable) {
+ reg_val |= (E1000_DTXSWC_MAC_SPOOF_MASK |
+ E1000_DTXSWC_VLAN_SPOOF_MASK);
+ /* The PF can spoof - it has to in order to
+ * support emulation mode NICs
+ */
+ reg_val ^= (1 << pf | 1 << (pf + MAX_NUM_VFS));
+ } else {
+ reg_val &= ~(E1000_DTXSWC_MAC_SPOOF_MASK |
+ E1000_DTXSWC_VLAN_SPOOF_MASK);
}
+ wr32(reg_offset, reg_val);
}
/**
#include <linux/pci.h>
#ifdef CONFIG_IGB_HWMON
-struct i2c_board_info i350_sensor_info = {
+static struct i2c_board_info i350_sensor_info = {
I2C_BOARD_INFO("i350bb", (0Xf8 >> 1)),
};
if ((hw->mac.type == e1000_i210) || (hw->mac.type == e1000_i211))
return;
- igb_enable_sriov(pdev, max_vfs);
pci_sriov_set_totalvfs(pdev, 7);
+ igb_enable_sriov(pdev, max_vfs);
#endif /* CONFIG_PCI_IOV */
}
if (max_vfs > 7) {
dev_warn(&pdev->dev,
"Maximum of 7 VFs per PF, using max\n");
- adapter->vfs_allocated_count = 7;
+ max_vfs = adapter->vfs_allocated_count = 7;
} else
adapter->vfs_allocated_count = max_vfs;
if (adapter->vfs_allocated_count)
case e1000_82576:
snprintf(adapter->ptp_caps.name, 16, "%pm", netdev->dev_addr);
adapter->ptp_caps.owner = THIS_MODULE;
- adapter->ptp_caps.max_adj = 1000000000;
+ adapter->ptp_caps.max_adj = 999999881;
adapter->ptp_caps.n_ext_ts = 0;
adapter->ptp_caps.pps = 0;
adapter->ptp_caps.adjfreq = igb_ptp_adjfreq_82576;
free_irq(adapter->msix_entries[vector].vector,
adapter->q_vector[vector]);
}
- pci_disable_msix(adapter->pdev);
- kfree(adapter->msix_entries);
- adapter->msix_entries = NULL;
+ /* This failure is non-recoverable - it indicates the system is
+ * out of MSIX vector resources and the VF driver cannot run
+ * without them. Set the number of msix vectors to zero
+ * indicating that not enough can be allocated. The error
+ * will be returned to the user indicating device open failed.
+ * Any further attempts to force the driver to open will also
+ * fail. The only way to recover is to unload the driver and
+ * reload it again. If the system has recovered some MSIX
+ * vectors then it may succeed.
+ */
+ adapter->num_msix_vectors = 0;
return err;
}
struct ixgbe_hw *hw = &adapter->hw;
int err;
+ /* A previous failure to open the device because of a lack of
+ * available MSIX vector resources may have reset the number
+ * of msix vectors variable to zero. The only way to recover
+ * is to unload/reload the driver and hope that the system has
+ * been able to recover some MSIX vector resources.
+ */
+ if (!adapter->num_msix_vectors)
+ return -ENOMEM;
+
/* disallow open during test */
if (test_bit(__IXGBEVF_TESTING, &adapter->state))
return -EBUSY;
err_req_irq:
ixgbevf_down(adapter);
- ixgbevf_free_irq(adapter);
err_setup_rx:
ixgbevf_free_all_rx_resources(adapter);
err_setup_tx:
return 0;
err_free:
- kfree(dev);
+ free_netdev(dev);
err_out:
return err;
}
/* Flush multicast filter */
mlx4_SET_MCAST_FLTR(mdev->dev, priv->port, 0, 1, MLX4_MCAST_CONFIG);
+ /* Remove flow steering rules for the port */
+ if (mdev->dev->caps.steering_mode ==
+ MLX4_STEERING_MODE_DEVICE_MANAGED) {
+ ASSERT_RTNL();
+ list_for_each_entry_safe(flow, tmp_flow,
+ &priv->ethtool_list, list) {
+ mlx4_flow_detach(mdev->dev, flow->id);
+ list_del(&flow->list);
+ }
+ }
+
mlx4_en_destroy_drop_qp(priv);
/* Free TX Rings */
if (!(mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAGS2_REASSIGN_MAC_EN))
mdev->mac_removed[priv->port] = 1;
- /* Remove flow steering rules for the port*/
- if (mdev->dev->caps.steering_mode ==
- MLX4_STEERING_MODE_DEVICE_MANAGED) {
- ASSERT_RTNL();
- list_for_each_entry_safe(flow, tmp_flow,
- &priv->ethtool_list, list) {
- mlx4_flow_detach(mdev->dev, flow->id);
- list_del(&flow->list);
- }
- }
-
/* Free RX Rings */
for (i = 0; i < priv->rx_ring_num; i++) {
mlx4_en_deactivate_rx_ring(priv, &priv->rx_ring[i]);
struct mlx4_slave_event_eq_info *event_eq =
priv->mfunc.master.slave_state[slave].event_eq;
u32 in_modifier = vhcr->in_modifier;
- u32 eqn = in_modifier & 0x1FF;
+ u32 eqn = in_modifier & 0x3FF;
u64 in_param = vhcr->in_param;
int err = 0;
int i;
struct list_head mcg_list;
spinlock_t mcg_spl;
int local_qpn;
+ atomic_t ref_count;
};
enum res_mtt_states {
struct res_fs_rule {
struct res_common com;
+ int qpn;
};
static void *res_tracker_lookup(struct rb_root *root, u64 res_id)
return dev->caps.num_mpts - 1;
}
-static void *find_res(struct mlx4_dev *dev, int res_id,
+static void *find_res(struct mlx4_dev *dev, u64 res_id,
enum mlx4_resource type)
{
struct mlx4_priv *priv = mlx4_priv(dev);
ret->local_qpn = id;
INIT_LIST_HEAD(&ret->mcg_list);
spin_lock_init(&ret->mcg_spl);
+ atomic_set(&ret->ref_count, 0);
return &ret->com;
}
return &ret->com;
}
-static struct res_common *alloc_fs_rule_tr(u64 id)
+static struct res_common *alloc_fs_rule_tr(u64 id, int qpn)
{
struct res_fs_rule *ret;
ret->com.res_id = id;
ret->com.state = RES_FS_RULE_ALLOCATED;
-
+ ret->qpn = qpn;
return &ret->com;
}
ret = alloc_xrcdn_tr(id);
break;
case RES_FS_RULE:
- ret = alloc_fs_rule_tr(id);
+ ret = alloc_fs_rule_tr(id, extra);
break;
default:
return NULL;
static int remove_qp_ok(struct res_qp *res)
{
- if (res->com.state == RES_QP_BUSY)
+ if (res->com.state == RES_QP_BUSY || atomic_read(&res->ref_count) ||
+ !list_empty(&res->mcg_list)) {
+ pr_err("resource tracker: fail to remove qp, state %d, ref_count %d\n",
+ res->com.state, atomic_read(&res->ref_count));
return -EBUSY;
- else if (res->com.state != RES_QP_RESERVED)
+ } else if (res->com.state != RES_QP_RESERVED) {
return -EPERM;
+ }
return 0;
}
struct list_head *rlist = &tracker->slave_list[slave].res_list[RES_MAC];
int err;
int qpn;
+ struct res_qp *rqp;
struct mlx4_net_trans_rule_hw_ctrl *ctrl;
struct _rule_hw *rule_header;
int header_id;
ctrl = (struct mlx4_net_trans_rule_hw_ctrl *)inbox->buf;
qpn = be32_to_cpu(ctrl->qpn) & 0xffffff;
- err = get_res(dev, slave, qpn, RES_QP, NULL);
+ err = get_res(dev, slave, qpn, RES_QP, &rqp);
if (err) {
pr_err("Steering rule with qpn 0x%x rejected.\n", qpn);
return err;
if (err)
goto err_put;
- err = add_res_range(dev, slave, vhcr->out_param, 1, RES_FS_RULE, 0);
+ err = add_res_range(dev, slave, vhcr->out_param, 1, RES_FS_RULE, qpn);
if (err) {
mlx4_err(dev, "Fail to add flow steering resources.\n ");
/* detach rule*/
mlx4_cmd(dev, vhcr->out_param, 0, 0,
MLX4_QP_FLOW_STEERING_DETACH, MLX4_CMD_TIME_CLASS_A,
MLX4_CMD_NATIVE);
+ goto err_put;
}
+ atomic_inc(&rqp->ref_count);
err_put:
put_res(dev, slave, qpn, RES_QP);
return err;
struct mlx4_cmd_info *cmd)
{
int err;
+ struct res_qp *rqp;
+ struct res_fs_rule *rrule;
if (dev->caps.steering_mode !=
MLX4_STEERING_MODE_DEVICE_MANAGED)
return -EOPNOTSUPP;
+ err = get_res(dev, slave, vhcr->in_param, RES_FS_RULE, &rrule);
+ if (err)
+ return err;
+ /* Release the rule from busy state before removal */
+ put_res(dev, slave, vhcr->in_param, RES_FS_RULE);
+ err = get_res(dev, slave, rrule->qpn, RES_QP, &rqp);
+ if (err)
+ return err;
+
err = rem_res_range(dev, slave, vhcr->in_param, 1, RES_FS_RULE, 0);
if (err) {
mlx4_err(dev, "Fail to remove flow steering resources.\n ");
- return err;
+ goto out;
}
err = mlx4_cmd(dev, vhcr->in_param, 0, 0,
MLX4_QP_FLOW_STEERING_DETACH, MLX4_CMD_TIME_CLASS_A,
MLX4_CMD_NATIVE);
+ if (!err)
+ atomic_dec(&rqp->ref_count);
+out:
+ put_res(dev, slave, rrule->qpn, RES_QP);
return err;
}
mutex_lock(&priv->mfunc.master.res_tracker.slave_list[slave].mutex);
/*VLAN*/
rem_slave_macs(dev, slave);
+ rem_slave_fs_rule(dev, slave);
rem_slave_qps(dev, slave);
rem_slave_srqs(dev, slave);
rem_slave_cqs(dev, slave);
rem_slave_mtts(dev, slave);
rem_slave_counters(dev, slave);
rem_slave_xrcdns(dev, slave);
- rem_slave_fs_rule(dev, slave);
mutex_unlock(&priv->mfunc.master.res_tracker.slave_list[slave].mutex);
}
}
platform_set_drvdata(pdev, ndev);
- if (lpc_mii_init(pldat) != 0)
+ ret = lpc_mii_init(pldat);
+ if (ret)
goto err_out_unregister_netdev;
netdev_info(ndev, "LPC mac at 0x%08x irq %d\n",
skb->protocol = eth_type_trans(skb, netdev);
if (tcp_ip_status & PCH_GBE_RXD_ACC_STAT_TCPIPOK)
- skb->ip_summed = CHECKSUM_NONE;
- else
skb->ip_summed = CHECKSUM_UNNECESSARY;
+ else
+ skb->ip_summed = CHECKSUM_NONE;
napi_gro_receive(&adapter->napi, skb);
(*work_done)++;
/* MDIO bus release function */
static int sh_mdio_release(struct net_device *ndev)
{
+ struct sh_eth_private *mdp = netdev_priv(ndev);
struct mii_bus *bus = dev_get_drvdata(&ndev->dev);
/* unregister mdio bus */
/* free bitbang info */
free_mdio_bitbang(bus);
+ /* free bitbang memory */
+ kfree(mdp->bitbang);
+
return 0;
}
bitbang->ctrl.ops = &bb_ops;
/* MII controller setting */
+ mdp->bitbang = bitbang;
mdp->mii_bus = alloc_mdio_bitbang(&bitbang->ctrl);
if (!mdp->mii_bus) {
ret = -ENOMEM;
}
mdp->tsu_addr = ioremap(rtsu->start,
resource_size(rtsu));
+ if (mdp->tsu_addr == NULL) {
+ ret = -ENOMEM;
+ dev_err(&pdev->dev, "TSU ioremap failed.\n");
+ goto out_release;
+ }
mdp->port = devno % 2;
ndev->features = NETIF_F_HW_VLAN_FILTER;
}
const u16 *reg_offset;
void __iomem *addr;
void __iomem *tsu_addr;
+ struct bb_info *bitbang;
u32 num_rx_ring;
u32 num_tx_ring;
dma_addr_t rx_desc_dma;
return false;
tx_queue->empty_read_count = 0;
- return ((empty_read_count ^ write_count) & ~EFX_EMPTY_COUNT_VALID) == 0;
+ return ((empty_read_count ^ write_count) & ~EFX_EMPTY_COUNT_VALID) == 0
+ && tx_queue->write_count - write_count == 1;
}
/* For each entry inserted into the software descriptor ring, create a
/* If there is no more tx desc left free then we need to
* tell the kernel to stop sending us tx frames.
*/
- if (unlikely(cpdma_check_free_tx_desc(priv->txch)))
+ if (unlikely(!cpdma_check_free_tx_desc(priv->txch)))
netif_stop_queue(ndev);
return NETDEV_TX_OK;
struct platform_device *mdio;
parp = of_get_property(slave_node, "phy_id", &lenp);
- if ((parp == NULL) && (lenp != (sizeof(void *) * 2))) {
+ if ((parp == NULL) || (lenp != (sizeof(void *) * 2))) {
pr_err("Missing slave[%d] phy_id property\n", i);
ret = -EINVAL;
goto error_ret;
/* If there is no more tx desc left free then we need to
* tell the kernel to stop sending us tx frames.
*/
- if (unlikely(cpdma_check_free_tx_desc(priv->txchan)))
+ if (unlikely(!cpdma_check_free_tx_desc(priv->txchan)))
netif_stop_queue(ndev);
return NETDEV_TX_OK;
goto done;
spin_lock_irqsave(&target_list_lock, flags);
+restart:
list_for_each_entry(nt, &target_list, list) {
netconsole_target_get(nt);
if (nt->np.dev == dev) {
case NETDEV_UNREGISTER:
/*
* rtnl_lock already held
+ * we might sleep in __netpoll_cleanup()
*/
- if (nt->np.dev) {
- __netpoll_cleanup(&nt->np);
- dev_put(nt->np.dev);
- nt->np.dev = NULL;
- }
+ spin_unlock_irqrestore(&target_list_lock, flags);
+ __netpoll_cleanup(&nt->np);
+ spin_lock_irqsave(&target_list_lock, flags);
+ dev_put(nt->np.dev);
+ nt->np.dev = NULL;
nt->enabled = 0;
stopped = true;
- break;
+ netconsole_target_put(nt);
+ goto restart;
}
}
netconsole_target_put(nt);
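The restart label is the usual answer to blocking work discovered while walking a spinlocked list: __netpoll_cleanup() may sleep, so the lock is dropped around it and the walk restarted from scratch, since the list may have changed in the window. The skeleton of the pattern, with hypothetical helpers:

restart:
	spin_lock_irqsave(&lock, flags);
	list_for_each_entry(e, &head, list) {
		if (needs_blocking_work(e)) {	/* hypothetical test */
			hold_entry(e);		/* keep e alive unlocked */
			spin_unlock_irqrestore(&lock, flags);
			do_blocking_work(e);	/* may sleep */
			release_entry(e);
			goto restart;	/* list may have changed */
		}
	}
	spin_unlock_irqrestore(&lock, flags);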
select CRC16
select CRC32
help
- This option adds support for SMSC LAN95XX based USB 2.0
+ This option adds support for SMSC LAN75XX based USB 2.0
Gigabit Ethernet adapters.
config USB_NET_SMSC95XX
struct cdc_ncm_ctx *ctx;
struct usb_driver *subdriver = ERR_PTR(-ENODEV);
int ret = -ENODEV;
- u8 data_altsetting = CDC_NCM_DATA_ALTSETTING_NCM;
+ u8 data_altsetting = cdc_ncm_select_altsetting(dev, intf);
struct cdc_mbim_state *info = (void *)&dev->data;
- /* see if interface supports MBIM alternate setting */
- if (intf->num_altsetting == 2) {
- if (!cdc_ncm_comm_intf_is_mbim(intf->cur_altsetting))
- usb_set_interface(dev->udev,
- intf->cur_altsetting->desc.bInterfaceNumber,
- CDC_NCM_COMM_ALTSETTING_MBIM);
- data_altsetting = CDC_NCM_DATA_ALTSETTING_MBIM;
- }
-
/* Probably NCM, defer for cdc_ncm_bind */
if (!cdc_ncm_comm_intf_is_mbim(intf->cur_altsetting))
goto err;
#define DRIVER_VERSION "14-Mar-2012"
+#if IS_ENABLED(CONFIG_USB_NET_CDC_MBIM)
+static bool prefer_mbim = true;
+#else
+static bool prefer_mbim;
+#endif
+module_param(prefer_mbim, bool, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(prefer_mbim, "Prefer MBIM setting on dual NCM/MBIM functions");
+
static void cdc_ncm_txpath_bh(unsigned long param);
static void cdc_ncm_tx_timeout_start(struct cdc_ncm_ctx *ctx);
static enum hrtimer_restart cdc_ncm_tx_timer_cb(struct hrtimer *hr_timer);
}
EXPORT_SYMBOL_GPL(cdc_ncm_unbind);
-static int cdc_ncm_bind(struct usbnet *dev, struct usb_interface *intf)
+/* Select the MBIM altsetting iff it is preferred and available,
+ * returning the number of the corresponding data interface altsetting
+ */
+u8 cdc_ncm_select_altsetting(struct usbnet *dev, struct usb_interface *intf)
{
- int ret;
+ struct usb_host_interface *alt;
/* The MBIM spec defines a NCM compatible default altsetting,
* which we may have matched:
* endpoint descriptors, shall be constructed according to
* the rules given in section 6 (USB Device Model) of this
* specification."
- *
- * Do not bind to such interfaces, allowing cdc_mbim to handle
- * them
*/
-#if IS_ENABLED(CONFIG_USB_NET_CDC_MBIM)
- if ((intf->num_altsetting == 2) &&
- !usb_set_interface(dev->udev,
- intf->cur_altsetting->desc.bInterfaceNumber,
- CDC_NCM_COMM_ALTSETTING_MBIM)) {
- if (cdc_ncm_comm_intf_is_mbim(intf->cur_altsetting))
- return -ENODEV;
- else
- usb_set_interface(dev->udev,
- intf->cur_altsetting->desc.bInterfaceNumber,
- CDC_NCM_COMM_ALTSETTING_NCM);
+ if (prefer_mbim && intf->num_altsetting == 2) {
+ alt = usb_altnum_to_altsetting(intf, CDC_NCM_COMM_ALTSETTING_MBIM);
+ if (alt && cdc_ncm_comm_intf_is_mbim(alt) &&
+ !usb_set_interface(dev->udev,
+ intf->cur_altsetting->desc.bInterfaceNumber,
+ CDC_NCM_COMM_ALTSETTING_MBIM))
+ return CDC_NCM_DATA_ALTSETTING_MBIM;
}
-#endif
+ return CDC_NCM_DATA_ALTSETTING_NCM;
+}
+EXPORT_SYMBOL_GPL(cdc_ncm_select_altsetting);
+
+static int cdc_ncm_bind(struct usbnet *dev, struct usb_interface *intf)
+{
+ int ret;
+
+ /* MBIM backwards compatible function? */
+ cdc_ncm_select_altsetting(dev, intf);
+ if (cdc_ncm_comm_intf_is_mbim(intf->cur_altsetting))
+ return -ENODEV;
/* NCM data altsetting is always 1 */
ret = cdc_ncm_bind_common(dev, intf, 1);
BUILD_BUG_ON((sizeof(((struct usbnet *)0)->data) < sizeof(struct qmi_wwan_state)));
- /* control and data is shared? */
- if (intf->cur_altsetting->desc.bNumEndpoints == 3) {
- info->control = intf;
- info->data = intf;
- goto shared;
- }
-
- /* else require a single interrupt status endpoint on control intf */
- if (intf->cur_altsetting->desc.bNumEndpoints != 1)
- goto err;
+ /* set up initial state */
+ info->control = intf;
+ info->data = intf;
/* and a number of CDC descriptors */
while (len > 3) {
buf += h->bLength;
}
- /* did we find all the required ones? */
- if (!(found & (1 << USB_CDC_HEADER_TYPE)) ||
- !(found & (1 << USB_CDC_UNION_TYPE))) {
- dev_err(&intf->dev, "CDC functional descriptors missing\n");
- goto err;
- }
-
- /* verify CDC Union */
- if (desc->bInterfaceNumber != cdc_union->bMasterInterface0) {
- dev_err(&intf->dev, "bogus CDC Union: master=%u\n", cdc_union->bMasterInterface0);
- goto err;
- }
-
- /* need to save these for unbind */
- info->control = intf;
- info->data = usb_ifnum_to_if(dev->udev, cdc_union->bSlaveInterface0);
- if (!info->data) {
- dev_err(&intf->dev, "bogus CDC Union: slave=%u\n", cdc_union->bSlaveInterface0);
- goto err;
+ /* Use separate control and data interfaces if we found a CDC Union */
+ if (cdc_union) {
+ info->data = usb_ifnum_to_if(dev->udev, cdc_union->bSlaveInterface0);
+ if (desc->bInterfaceNumber != cdc_union->bMasterInterface0 || !info->data) {
+ dev_err(&intf->dev, "bogus CDC Union: master=%u, slave=%u\n",
+ cdc_union->bMasterInterface0, cdc_union->bSlaveInterface0);
+ goto err;
+ }
}
/* errors aren't fatal - we can live with the dynamic address */
}
/* claim data interface and set it up */
- status = usb_driver_claim_interface(driver, info->data, dev);
- if (status < 0)
- goto err;
+ if (info->control != info->data) {
+ status = usb_driver_claim_interface(driver, info->data, dev);
+ if (status < 0)
+ goto err;
+ }
-shared:
status = qmi_wwan_register_subdriver(dev);
if (status < 0 && info->control != info->data) {
usb_set_intfdata(info->data, NULL);
AR_PHY_AGC_CONTROL_FLTR_CAL |
AR_PHY_AGC_CONTROL_PKDET_CAL;
+ /* Use chip chainmask only for calibration */
ar9003_hw_set_chain_masks(ah, ah->caps.rx_chainmask, ah->caps.tx_chainmask);
if (rtt) {
ar9003_hw_rtt_disable(ah);
}
+ /* Revert chainmask to runtime parameters */
+ ar9003_hw_set_chain_masks(ah, ah->rxchainmask, ah->txchainmask);
+
/* Initialize list pointers */
ah->cal_list = ah->cal_list_last = ah->cal_list_curr = NULL;
int i;
bool needreset = false;
- for (i = 0; i < ATH9K_NUM_TX_QUEUES; i++)
- if (ATH_TXQ_SETUP(sc, i)) {
- txq = &sc->tx.txq[i];
- ath_txq_lock(sc, txq);
- if (txq->axq_depth) {
- if (txq->axq_tx_inprogress) {
- needreset = true;
- ath_txq_unlock(sc, txq);
- break;
- } else {
- txq->axq_tx_inprogress = true;
- }
+ for (i = 0; i < IEEE80211_NUM_ACS; i++) {
+ txq = sc->tx.txq_map[i];
+
+ ath_txq_lock(sc, txq);
+ if (txq->axq_depth) {
+ if (txq->axq_tx_inprogress) {
+ needreset = true;
+ ath_txq_unlock(sc, txq);
+ break;
+ } else {
+ txq->axq_tx_inprogress = true;
}
- ath_txq_unlock_complete(sc, txq);
}
+ ath_txq_unlock_complete(sc, txq);
+ }
if (needreset) {
ath_dbg(ath9k_hw_common(sc->sc_ah), RESET,
dma_addr_t txcmd_phys;
int txq_id = skb_get_queue_mapping(skb);
u16 len, idx, hdr_len;
+ u16 firstlen, secondlen;
u8 id;
u8 unicast;
u8 sta_id;
len =
sizeof(struct il3945_tx_cmd) + sizeof(struct il_cmd_header) +
hdr_len;
- len = (len + 3) & ~3;
+ firstlen = (len + 3) & ~3;
/* Physical address of this Tx command's header (not MAC header!),
* within command buffer array. */
txcmd_phys =
- pci_map_single(il->pci_dev, &out_cmd->hdr, len, PCI_DMA_TODEVICE);
+ pci_map_single(il->pci_dev, &out_cmd->hdr, firstlen,
+ PCI_DMA_TODEVICE);
if (unlikely(pci_dma_mapping_error(il->pci_dev, txcmd_phys)))
goto drop_unlock;
/* Set up TFD's 2nd entry to point directly to remainder of skb,
* if any (802.11 null frames have no payload). */
- len = skb->len - hdr_len;
- if (len) {
+ secondlen = skb->len - hdr_len;
+ if (secondlen > 0) {
phys_addr =
- pci_map_single(il->pci_dev, skb->data + hdr_len, len,
+ pci_map_single(il->pci_dev, skb->data + hdr_len, secondlen,
PCI_DMA_TODEVICE);
if (unlikely(pci_dma_mapping_error(il->pci_dev, phys_addr)))
goto drop_unlock;
/* Add buffer containing Tx command and MAC(!) header to TFD's
* first entry */
- il->ops->txq_attach_buf_to_tfd(il, txq, txcmd_phys, len, 1, 0);
+ il->ops->txq_attach_buf_to_tfd(il, txq, txcmd_phys, firstlen, 1, 0);
dma_unmap_addr_set(out_meta, mapping, txcmd_phys);
- dma_unmap_len_set(out_meta, len, len);
- if (len)
- il->ops->txq_attach_buf_to_tfd(il, txq, phys_addr, len, 0,
- U32_PAD(len));
+ dma_unmap_len_set(out_meta, len, firstlen);
+ if (secondlen > 0)
+ il->ops->txq_attach_buf_to_tfd(il, txq, phys_addr, secondlen, 0,
+ U32_PAD(secondlen));
if (!ieee80211_has_morefrags(hdr->frame_control)) {
txq->need_update = 1;
return -1;
}
+ cmd_code = le16_to_cpu(host_cmd->command);
+ cmd_size = le16_to_cpu(host_cmd->size);
+
+ if (adapter->hw_status == MWIFIEX_HW_STATUS_RESET &&
+ cmd_code != HostCmd_CMD_FUNC_SHUTDOWN &&
+ cmd_code != HostCmd_CMD_FUNC_INIT) {
+ dev_err(adapter->dev,
+ "DNLD_CMD: FW in reset state, ignore cmd %#x\n",
+ cmd_code);
+ mwifiex_complete_cmd(adapter, cmd_node);
+ mwifiex_insert_cmd_to_free_q(adapter, cmd_node);
+ return -1;
+ }
+
/* Set command sequence number */
adapter->seq_num++;
host_cmd->seq_num = cpu_to_le16(HostCmd_SET_SEQ_NO_BSS_INFO
adapter->curr_cmd = cmd_node;
spin_unlock_irqrestore(&adapter->mwifiex_cmd_lock, flags);
- cmd_code = le16_to_cpu(host_cmd->command);
- cmd_size = le16_to_cpu(host_cmd->size);
-
/* Adjust skb length */
if (cmd_node->cmd_skb->len > cmd_size)
/*
ret = mwifiex_send_cmd_async(priv, cmd_no, cmd_action, cmd_oid,
data_buf);
- if (!ret)
- ret = mwifiex_wait_queue_complete(adapter);
return ret;
}
if (cmd_no == HostCmd_CMD_802_11_SCAN) {
mwifiex_queue_scan_cmd(priv, cmd_node);
} else {
- adapter->cmd_queued = cmd_node;
mwifiex_insert_cmd_to_pending_q(adapter, cmd_node, true);
queue_work(adapter->workqueue, &adapter->main_work);
+ if (cmd_node->wait_q_enabled)
+ ret = mwifiex_wait_queue_complete(adapter, cmd_node);
}
return ret;
return ret;
}
+ /* cancel current command */
+ if (adapter->curr_cmd) {
+ dev_warn(adapter->dev, "curr_cmd is still being processed\n");
+ del_timer(&adapter->cmd_timer);
+ mwifiex_insert_cmd_to_free_q(adapter, adapter->curr_cmd);
+ adapter->curr_cmd = NULL;
+ }
+
/* shut down mwifiex */
dev_dbg(adapter->dev, "info: shutdown mwifiex...\n");
adhoc_join->bss_descriptor.bssid,
adhoc_join->bss_descriptor.ssid);
- for (i = 0; bss_desc->supported_rates[i] &&
- i < MWIFIEX_SUPPORTED_RATES;
- i++)
- ;
+ for (i = 0; i < MWIFIEX_SUPPORTED_RATES &&
+ bss_desc->supported_rates[i]; i++)
+ ;
rates_size = i;
/* Copy Data Rates from the Rates recorded in scan response */
u16 cmd_wait_q_required;
struct mwifiex_wait_queue cmd_wait_q;
u8 scan_wait_q_woken;
- struct cmd_ctrl_node *cmd_queued;
spinlock_t queue_lock; /* lock for tx queues */
struct completion fw_load;
u8 country_code[IEEE80211_COUNTRY_STRING_LEN];
struct mwifiex_multicast_list *mcast_list);
int mwifiex_copy_mcast_addr(struct mwifiex_multicast_list *mlist,
struct net_device *dev);
-int mwifiex_wait_queue_complete(struct mwifiex_adapter *adapter);
+int mwifiex_wait_queue_complete(struct mwifiex_adapter *adapter,
+ struct cmd_ctrl_node *cmd_queued);
int mwifiex_bss_start(struct mwifiex_private *priv, struct cfg80211_bss *bss,
struct cfg80211_ssid *req_ssid);
int mwifiex_cancel_hs(struct mwifiex_private *priv, int cmd_type);
list_del(&cmd_node->list);
spin_unlock_irqrestore(&adapter->scan_pending_q_lock,
flags);
- adapter->cmd_queued = cmd_node;
mwifiex_insert_cmd_to_pending_q(adapter, cmd_node,
true);
queue_work(adapter->workqueue, &adapter->main_work);
+
+ /* Perform internal scan synchronously */
+ if (!priv->scan_request)
+ mwifiex_wait_queue_complete(adapter, cmd_node);
} else {
spin_unlock_irqrestore(&adapter->scan_pending_q_lock,
flags);
/* Normal scan */
ret = mwifiex_scan_networks(priv, NULL);
- if (!ret)
- ret = mwifiex_wait_queue_complete(priv->adapter);
-
up(&priv->async_sem);
return ret;
* This function waits on a cmd wait queue. It also cancels the pending
* request after waking up, in case of errors.
*/
-int mwifiex_wait_queue_complete(struct mwifiex_adapter *adapter)
+int mwifiex_wait_queue_complete(struct mwifiex_adapter *adapter,
+ struct cmd_ctrl_node *cmd_queued)
{
int status;
- struct cmd_ctrl_node *cmd_queued;
-
- if (!adapter->cmd_queued)
- return 0;
-
- cmd_queued = adapter->cmd_queued;
- adapter->cmd_queued = NULL;
dev_dbg(adapter->dev, "cmd pending\n");
atomic_inc(&adapter->cmd_pending);
config RT2800PCI
tristate "Ralink rt27xx/rt28xx/rt30xx (PCI/PCIe/PCMCIA) support"
- depends on PCI || RALINK_RT288X || RALINK_RT305X
+ depends on PCI || SOC_RT288X || SOC_RT305X
select RT2800_LIB
select RT2X00_LIB_PCI if PCI
- select RT2X00_LIB_SOC if RALINK_RT288X || RALINK_RT305X
+ select RT2X00_LIB_SOC if SOC_RT288X || SOC_RT305X
select RT2X00_LIB_FIRMWARE
select RT2X00_LIB_CRYPTO
select CRC_CCITT
rt2x00pci_register_write(rt2x00dev, H2M_MAILBOX_CID, ~0);
}
-#if defined(CONFIG_RALINK_RT288X) || defined(CONFIG_RALINK_RT305X)
+#if defined(CONFIG_SOC_RT288X) || defined(CONFIG_SOC_RT305X)
static int rt2800pci_read_eeprom_soc(struct rt2x00_dev *rt2x00dev)
{
void __iomem *base_addr = ioremap(0x1F040000, EEPROM_SIZE);
{
return -ENOMEM;
}
-#endif /* CONFIG_RALINK_RT288X || CONFIG_RALINK_RT305X */
+#endif /* CONFIG_SOC_RT288X || CONFIG_SOC_RT305X */
#ifdef CONFIG_PCI
static void rt2800pci_eepromregister_read(struct eeprom_93cx6 *eeprom)
#endif /* CONFIG_PCI */
MODULE_LICENSE("GPL");
-#if defined(CONFIG_RALINK_RT288X) || defined(CONFIG_RALINK_RT305X)
+#if defined(CONFIG_SOC_RT288X) || defined(CONFIG_SOC_RT305X)
static int rt2800soc_probe(struct platform_device *pdev)
{
return rt2x00soc_probe(pdev, &rt2800pci_ops);
.suspend = rt2x00soc_suspend,
.resume = rt2x00soc_resume,
};
-#endif /* CONFIG_RALINK_RT288X || CONFIG_RALINK_RT305X */
+#endif /* CONFIG_SOC_RT288X || CONFIG_SOC_RT305X */
#ifdef CONFIG_PCI
static int rt2800pci_probe(struct pci_dev *pci_dev,
{
int ret = 0;
-#if defined(CONFIG_RALINK_RT288X) || defined(CONFIG_RALINK_RT305X)
+#if defined(CONFIG_SOC_RT288X) || defined(CONFIG_SOC_RT305X)
ret = platform_driver_register(&rt2800soc_driver);
if (ret)
return ret;
#ifdef CONFIG_PCI
ret = pci_register_driver(&rt2800pci_driver);
if (ret) {
-#if defined(CONFIG_RALINK_RT288X) || defined(CONFIG_RALINK_RT305X)
+#if defined(CONFIG_SOC_RT288X) || defined(CONFIG_SOC_RT305X)
platform_driver_unregister(&rt2800soc_driver);
#endif
return ret;
#ifdef CONFIG_PCI
pci_unregister_driver(&rt2800pci_driver);
#endif
-#if defined(CONFIG_RALINK_RT288X) || defined(CONFIG_RALINK_RT305X)
+#if defined(CONFIG_SOC_RT288X) || defined(CONFIG_SOC_RT305X)
platform_driver_unregister(&rt2800soc_driver);
#endif
}
}
void rtl92cu_set_check_bssid(struct ieee80211_hw *hw, bool check_bssid)
-{
- /* dummy routine needed for callback from rtl_op_configure_filter() */
-}
-
-/*========================================================================== */
-
-static void _rtl92cu_set_check_bssid(struct ieee80211_hw *hw,
- enum nl80211_iftype type)
{
struct rtl_priv *rtlpriv = rtl_priv(hw);
- u32 reg_rcr = rtl_read_dword(rtlpriv, REG_RCR);
struct rtl_hal *rtlhal = rtl_hal(rtlpriv);
- struct rtl_phy *rtlphy = &(rtlpriv->phy);
- u8 filterout_non_associated_bssid = false;
+ u32 reg_rcr = rtl_read_dword(rtlpriv, REG_RCR);
- switch (type) {
- case NL80211_IFTYPE_ADHOC:
- case NL80211_IFTYPE_STATION:
- filterout_non_associated_bssid = true;
- break;
- case NL80211_IFTYPE_UNSPECIFIED:
- case NL80211_IFTYPE_AP:
- default:
- break;
- }
- if (filterout_non_associated_bssid) {
+ if (rtlpriv->psc.rfpwr_state != ERFON)
+ return;
+
+ if (check_bssid) {
+ u8 tmp;
if (IS_NORMAL_CHIP(rtlhal->version)) {
- switch (rtlphy->current_io_type) {
- case IO_CMD_RESUME_DM_BY_SCAN:
- reg_rcr |= (RCR_CBSSID_DATA | RCR_CBSSID_BCN);
- rtlpriv->cfg->ops->set_hw_reg(hw,
- HW_VAR_RCR, (u8 *)(&reg_rcr));
- /* enable update TSF */
- _rtl92cu_set_bcn_ctrl_reg(hw, 0, BIT(4));
- break;
- case IO_CMD_PAUSE_DM_BY_SCAN:
- reg_rcr &= ~(RCR_CBSSID_DATA | RCR_CBSSID_BCN);
- rtlpriv->cfg->ops->set_hw_reg(hw,
- HW_VAR_RCR, (u8 *)(&reg_rcr));
- /* disable update TSF */
- _rtl92cu_set_bcn_ctrl_reg(hw, BIT(4), 0);
- break;
- }
+ reg_rcr |= (RCR_CBSSID_DATA | RCR_CBSSID_BCN);
+ tmp = BIT(4);
} else {
- reg_rcr |= (RCR_CBSSID);
- rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_RCR,
- (u8 *)(&reg_rcr));
- _rtl92cu_set_bcn_ctrl_reg(hw, 0, (BIT(4)|BIT(5)));
+ reg_rcr |= RCR_CBSSID;
+ tmp = BIT(4) | BIT(5);
}
- } else if (filterout_non_associated_bssid == false) {
+ rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_RCR,
+ (u8 *) (&reg_rcr));
+ _rtl92cu_set_bcn_ctrl_reg(hw, 0, tmp);
+ } else {
+ u8 tmp;
if (IS_NORMAL_CHIP(rtlhal->version)) {
- reg_rcr &= (~(RCR_CBSSID_DATA | RCR_CBSSID_BCN));
- rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_RCR,
- (u8 *)(&reg_rcr));
- _rtl92cu_set_bcn_ctrl_reg(hw, BIT(4), 0);
+ reg_rcr &= ~(RCR_CBSSID_DATA | RCR_CBSSID_BCN);
+ tmp = BIT(4);
} else {
- reg_rcr &= (~RCR_CBSSID);
- rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_RCR,
- (u8 *)(&reg_rcr));
- _rtl92cu_set_bcn_ctrl_reg(hw, (BIT(4)|BIT(5)), 0);
+ reg_rcr &= ~RCR_CBSSID;
+ tmp = BIT(4) | BIT(5);
}
+ reg_rcr &= (~(RCR_CBSSID_DATA | RCR_CBSSID_BCN));
+ rtlpriv->cfg->ops->set_hw_reg(hw,
+ HW_VAR_RCR, (u8 *) (&reg_rcr));
+ _rtl92cu_set_bcn_ctrl_reg(hw, tmp, 0);
}
}
+/*========================================================================== */
+
int rtl92cu_set_network_type(struct ieee80211_hw *hw, enum nl80211_iftype type)
{
+ struct rtl_priv *rtlpriv = rtl_priv(hw);
+
if (_rtl92cu_set_media_status(hw, type))
return -EOPNOTSUPP;
- _rtl92cu_set_check_bssid(hw, type);
+
+ if (rtlpriv->mac80211.link_state == MAC80211_LINKED) {
+ if (type != NL80211_IFTYPE_AP)
+ rtl92cu_set_check_bssid(hw, true);
+ } else {
+ rtl92cu_set_check_bssid(hw, false);
+ }
+
return 0;
}
(shortgi_rate << 4) | (shortgi_rate);
}
rtl_write_dword(rtlpriv, REG_ARFR0 + ratr_index * 4, ratr_value);
- RT_TRACE(rtlpriv, COMP_RATR, DBG_DMESG, "%x\n",
- rtl_read_dword(rtlpriv, REG_ARFR0));
}
void rtl92cu_update_hal_rate_mask(struct ieee80211_hw *hw, u8 rssi_level)
if (unlikely(!_urb)) {
RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG,
"Can't allocate urb. Drop skb!\n");
+ kfree_skb(skb);
return;
}
_rtl_submit_tx_urb(hw, _urb);
return min((size_t)(image - rom), size);
}
+static loff_t pci_find_rom(struct pci_dev *pdev, size_t *size)
+{
+ struct resource *res = &pdev->resource[PCI_ROM_RESOURCE];
+ loff_t start;
+
+ /* assign the ROM an address if it doesn't have one */
+ if (res->parent == NULL && pci_assign_resource(pdev, PCI_ROM_RESOURCE))
+ return 0;
+ start = pci_resource_start(pdev, PCI_ROM_RESOURCE);
+ *size = pci_resource_len(pdev, PCI_ROM_RESOURCE);
+
+ if (*size == 0)
+ return 0;
+
+ /* Enable ROM space decodes */
+ if (pci_enable_rom(pdev))
+ return 0;
+
+ return start;
+}
+
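With the BAR bookkeeping factored into pci_find_rom(), pci_map_rom() below keeps a single ioremap and error path for every ROM source. A usage sketch of the public API (buf and len belong to the caller):

void __iomem *rom;
size_t size;

rom = pci_map_rom(pdev, &size);
if (rom) {
	/* copy out at most len bytes of the ROM image */
	memcpy_fromio(buf, rom, min_t(size_t, size, len));
	pci_unmap_rom(pdev, rom);
}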
/**
* pci_map_rom - map a PCI ROM to kernel space
* @pdev: pointer to pci device struct
void __iomem *pci_map_rom(struct pci_dev *pdev, size_t *size)
{
struct resource *res = &pdev->resource[PCI_ROM_RESOURCE];
- loff_t start;
+ loff_t start = 0;
void __iomem *rom;
- /*
- * Some devices may provide ROMs via a source other than the BAR
- */
- if (pdev->rom && pdev->romlen) {
- *size = pdev->romlen;
- return phys_to_virt(pdev->rom);
/*
* IORESOURCE_ROM_SHADOW set on x86, x86_64 and IA64 supports legacy
* memory map if the VGA enable bit of the Bridge Control register is
* set for embedded VGA.
*/
- } else if (res->flags & IORESOURCE_ROM_SHADOW) {
+ if (res->flags & IORESOURCE_ROM_SHADOW) {
/* primary video rom always starts here */
start = (loff_t)0xC0000;
*size = 0x20000; /* cover C000:0 through E000:0 */
return (void __iomem *)(unsigned long)
pci_resource_start(pdev, PCI_ROM_RESOURCE);
} else {
- /* assign the ROM an address if it doesn't have one */
- if (res->parent == NULL &&
- pci_assign_resource(pdev,PCI_ROM_RESOURCE))
- return NULL;
- start = pci_resource_start(pdev, PCI_ROM_RESOURCE);
- *size = pci_resource_len(pdev, PCI_ROM_RESOURCE);
- if (*size == 0)
- return NULL;
-
- /* Enable ROM space decodes */
- if (pci_enable_rom(pdev))
- return NULL;
+ start = pci_find_rom(pdev, size);
}
}
+ /*
+ * Some devices may provide ROMs via a source other than the BAR
+ */
+ if (!start && pdev->rom && pdev->romlen) {
+ *size = pdev->romlen;
+ return phys_to_virt(pdev->rom);
+ }
+
+ if (!start)
+ return NULL;
+
rom = ioremap(start, *size);
if (!rom) {
/* restore enable if ioremap fails */
/* special soc specific control */
if (ctrl->mpp_get || ctrl->mpp_set) {
- if (!ctrl->name || !ctrl->mpp_set || !ctrl->mpp_set) {
+ if (!ctrl->name || !ctrl->mpp_get || !ctrl->mpp_set) {
dev_err(&pdev->dev, "wrong soc control info\n");
return -EINVAL;
}
static int pinconf_dbg_state_print(struct seq_file *s, void *d)
{
if (strlen(dbg_state_name))
- seq_printf(s, "%s\n", dbg_pinname);
+ seq_printf(s, "%s\n", dbg_state_name);
else
seq_printf(s, "No pin state set\n");
return 0;
* pin config.
*/
-#ifdef CONFIG_GENERIC_PINCONF
+#if defined(CONFIG_GENERIC_PINCONF) && defined(CONFIG_DEBUG_FS)
void pinconf_generic_dump_pin(struct pinctrl_dev *pctldev,
struct seq_file *s, unsigned pin);
}
/* check if pin use AlternateFunction register */
- if ((af.alt_bit1 == UNUSED) && (af.alt_bit1 == UNUSED))
+ if ((af.alt_bit1 == UNUSED) && (af.alt_bit2 == UNUSED))
return mode;
/*
* if pin GPIOSEL bit is set and pin supports alternate function,
}
#ifdef CONFIG_PM
+
+static u32 wakeups[MAX_GPIO_BANKS];
+static u32 backups[MAX_GPIO_BANKS];
+
static int gpio_irq_set_wake(struct irq_data *d, unsigned state)
{
struct at91_gpio_chip *at91_gpio = irq_data_get_irq_chip_data(d);
unsigned bank = at91_gpio->pioc_idx;
+ unsigned mask = 1 << d->hwirq;
if (unlikely(bank >= MAX_GPIO_BANKS))
return -EINVAL;
+ if (state)
+ wakeups[bank] |= mask;
+ else
+ wakeups[bank] &= ~mask;
+
irq_set_irq_wake(at91_gpio->pioc_virq, state);
return 0;
}
+
+void at91_pinctrl_gpio_suspend(void)
+{
+ int i;
+
+ for (i = 0; i < gpio_banks; i++) {
+ void __iomem *pio;
+
+ if (!gpio_chips[i])
+ continue;
+
+ pio = gpio_chips[i]->regbase;
+
+ backups[i] = __raw_readl(pio + PIO_IMR);
+ __raw_writel(backups[i], pio + PIO_IDR);
+ __raw_writel(wakeups[i], pio + PIO_IER);
+
+ if (!wakeups[i]) {
+ clk_unprepare(gpio_chips[i]->clock);
+ clk_disable(gpio_chips[i]->clock);
+ } else {
+ printk(KERN_DEBUG "GPIO-%c may wake for %08x\n",
+ 'A'+i, wakeups[i]);
+ }
+ }
+}
+
+void at91_pinctrl_gpio_resume(void)
+{
+ int i;
+
+ for (i = 0; i < gpio_banks; i++) {
+ void __iomem *pio;
+
+ if (!gpio_chips[i])
+ continue;
+
+ pio = gpio_chips[i]->regbase;
+
+ if (!wakeups[i]) {
+ if (clk_prepare(gpio_chips[i]->clock) == 0)
+ clk_enable(gpio_chips[i]->clock);
+ }
+
+ __raw_writel(wakeups[i], pio + PIO_IDR);
+ __raw_writel(backups[i], pio + PIO_IER);
+ }
+}
+
#else
#define gpio_irq_set_wake NULL
-#endif
+#endif /* CONFIG_PM */
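The suspend hook masks everything except the recorded wakeup sources and gates a bank's clock when none are armed; resume reverses the swap. Both are meant to be called from the platform suspend path, sketched here with an assumed mach-level entry point:

/* assumed platform PM glue, not part of this driver */
static int at91_pm_enter(suspend_state_t state)
{
	at91_pinctrl_gpio_suspend();
	/* ... program and enter the low-power state ... */
	at91_pinctrl_gpio_resume();
	return 0;
}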
static struct irq_chip gpio_irqchip = {
.name = "GPIO",
}
if (!gpio_range) {
+ /*
+ * A pin should not be freed more times than allocated.
+ */
+ if (WARN_ON(!desc->mux_usecount))
+ return NULL;
desc->mux_usecount--;
if (desc->mux_usecount)
return NULL;
static unsigned int at91_alarm_year = AT91_RTC_EPOCH;
static void __iomem *at91_rtc_regs;
static int irq;
+static u32 at91_rtc_imr;
/*
* Decode time/date into rtc_time structure
cr = at91_rtc_read(AT91_RTC_CR);
at91_rtc_write(AT91_RTC_CR, cr | AT91_RTC_UPDCAL | AT91_RTC_UPDTIM);
+ at91_rtc_imr |= AT91_RTC_ACKUPD;
at91_rtc_write(AT91_RTC_IER, AT91_RTC_ACKUPD);
wait_for_completion(&at91_rtc_updated); /* wait for ACKUPD interrupt */
at91_rtc_write(AT91_RTC_IDR, AT91_RTC_ACKUPD);
+ at91_rtc_imr &= ~AT91_RTC_ACKUPD;
at91_rtc_write(AT91_RTC_TIMR,
bin2bcd(tm->tm_sec) << 0
tm->tm_yday = rtc_year_days(tm->tm_mday, tm->tm_mon, tm->tm_year);
tm->tm_year = at91_alarm_year - 1900;
- alrm->enabled = (at91_rtc_read(AT91_RTC_IMR) & AT91_RTC_ALARM)
+ alrm->enabled = (at91_rtc_imr & AT91_RTC_ALARM)
? 1 : 0;
dev_dbg(dev, "%s(): %4d-%02d-%02d %02d:%02d:%02d\n", __func__,
tm.tm_sec = alrm->time.tm_sec;
at91_rtc_write(AT91_RTC_IDR, AT91_RTC_ALARM);
+ at91_rtc_imr &= ~AT91_RTC_ALARM;
at91_rtc_write(AT91_RTC_TIMALR,
bin2bcd(tm.tm_sec) << 0
| bin2bcd(tm.tm_min) << 8
if (alrm->enabled) {
at91_rtc_write(AT91_RTC_SCCR, AT91_RTC_ALARM);
+ at91_rtc_imr |= AT91_RTC_ALARM;
at91_rtc_write(AT91_RTC_IER, AT91_RTC_ALARM);
}
if (enabled) {
at91_rtc_write(AT91_RTC_SCCR, AT91_RTC_ALARM);
+ at91_rtc_imr |= AT91_RTC_ALARM;
at91_rtc_write(AT91_RTC_IER, AT91_RTC_ALARM);
- } else
+ } else {
at91_rtc_write(AT91_RTC_IDR, AT91_RTC_ALARM);
+ at91_rtc_imr &= ~AT91_RTC_ALARM;
+ }
return 0;
}
*/
static int at91_rtc_proc(struct device *dev, struct seq_file *seq)
{
- unsigned long imr = at91_rtc_read(AT91_RTC_IMR);
-
seq_printf(seq, "update_IRQ\t: %s\n",
- (imr & AT91_RTC_ACKUPD) ? "yes" : "no");
+ (at91_rtc_imr & AT91_RTC_ACKUPD) ? "yes" : "no");
seq_printf(seq, "periodic_IRQ\t: %s\n",
- (imr & AT91_RTC_SECEV) ? "yes" : "no");
+ (at91_rtc_imr & AT91_RTC_SECEV) ? "yes" : "no");
return 0;
}
unsigned int rtsr;
unsigned long events = 0;
- rtsr = at91_rtc_read(AT91_RTC_SR) & at91_rtc_read(AT91_RTC_IMR);
+ rtsr = at91_rtc_read(AT91_RTC_SR) & at91_rtc_imr;
if (rtsr) { /* this interrupt is shared! Is it ours? */
if (rtsr & AT91_RTC_ALARM)
events |= (RTC_AF | RTC_IRQF);
at91_rtc_write(AT91_RTC_IDR, AT91_RTC_ACKUPD | AT91_RTC_ALARM |
AT91_RTC_SECEV | AT91_RTC_TIMEV |
AT91_RTC_CALEV);
+ at91_rtc_imr = 0;
ret = request_irq(irq, at91_rtc_interrupt,
IRQF_SHARED,
at91_rtc_write(AT91_RTC_IDR, AT91_RTC_ACKUPD | AT91_RTC_ALARM |
AT91_RTC_SECEV | AT91_RTC_TIMEV |
AT91_RTC_CALEV);
+ at91_rtc_imr = 0;
free_irq(irq, pdev);
rtc_device_unregister(rtc);
/* AT91RM9200 RTC Power management control */
-static u32 at91_rtc_imr;
+static u32 at91_rtc_bkpimr;
+
static int at91_rtc_suspend(struct device *dev)
{
/* this IRQ is shared with DBGU and other hardware which isn't
* necessarily doing PM like we are...
*/
- at91_rtc_imr = at91_rtc_read(AT91_RTC_IMR)
- & (AT91_RTC_ALARM|AT91_RTC_SECEV);
- if (at91_rtc_imr) {
- if (device_may_wakeup(dev))
+ at91_rtc_bkpimr = at91_rtc_imr & (AT91_RTC_ALARM|AT91_RTC_SECEV);
+ if (at91_rtc_bkpimr) {
+ if (device_may_wakeup(dev)) {
enable_irq_wake(irq);
- else
- at91_rtc_write(AT91_RTC_IDR, at91_rtc_imr);
- }
+ } else {
+ at91_rtc_write(AT91_RTC_IDR, at91_rtc_bkpimr);
+ at91_rtc_imr &= ~at91_rtc_bkpimr;
+ }
+}
return 0;
}
static int at91_rtc_resume(struct device *dev)
{
- if (at91_rtc_imr) {
- if (device_may_wakeup(dev))
+ if (at91_rtc_bkpimr) {
+ if (device_may_wakeup(dev)) {
disable_irq_wake(irq);
- else
- at91_rtc_write(AT91_RTC_IER, at91_rtc_imr);
+ } else {
+ at91_rtc_imr |= at91_rtc_bkpimr;
+ at91_rtc_write(AT91_RTC_IER, at91_rtc_bkpimr);
+ }
}
return 0;
}
#define AT91_RTC_SCCR 0x1c /* Status Clear Command Register */
#define AT91_RTC_IER 0x20 /* Interrupt Enable Register */
#define AT91_RTC_IDR 0x24 /* Interrupt Disable Register */
-#define AT91_RTC_IMR 0x28 /* Interrupt Mask Register */
#define AT91_RTC_VER 0x2c /* Valid Entry Register */
#define AT91_RTC_NVTIM (1 << 0) /* Non valid Time */
rtc->da9052 = dev_get_drvdata(pdev->dev.parent);
platform_set_drvdata(pdev, rtc);
- rtc->irq = platform_get_irq_byname(pdev, "ALM");
- ret = devm_request_threaded_irq(&pdev->dev, rtc->irq, NULL,
- da9052_rtc_irq,
- IRQF_TRIGGER_LOW | IRQF_ONESHOT,
- "ALM", rtc);
+ rtc->irq = DA9052_IRQ_ALARM;
+ ret = da9052_request_irq(rtc->da9052, rtc->irq, "ALM",
+ da9052_rtc_irq, rtc);
if (ret != 0) {
rtc_err(rtc->da9052, "irq registration failed: %d\n", ret);
return ret;
.release = scm_release,
};
+static bool scm_permit_request(struct scm_blk_dev *bdev, struct request *req)
+{
+ return rq_data_dir(req) != WRITE || bdev->state != SCM_WR_PROHIBIT;
+}
+
static void scm_request_prepare(struct scm_request *scmrq)
{
struct scm_blk_dev *bdev = scmrq->bdev;
scm_release_cluster(scmrq);
blk_requeue_request(bdev->rq, scmrq->request);
+ atomic_dec(&bdev->queued_reqs);
scm_request_done(scmrq);
scm_ensure_queue_restart(bdev);
}
void scm_request_finish(struct scm_request *scmrq)
{
+ struct scm_blk_dev *bdev = scmrq->bdev;
+
scm_release_cluster(scmrq);
blk_end_request_all(scmrq->request, scmrq->error);
+ atomic_dec(&bdev->queued_reqs);
scm_request_done(scmrq);
}
if (req->cmd_type != REQ_TYPE_FS)
continue;
+ if (!scm_permit_request(bdev, req)) {
+ scm_ensure_queue_restart(bdev);
+ return;
+ }
scmrq = scm_request_fetch();
if (!scmrq) {
SCM_LOG(5, "no request");
return;
}
if (scm_need_cluster_request(scmrq)) {
+ atomic_inc(&bdev->queued_reqs);
blk_start_request(req);
scm_initiate_cluster_request(scmrq);
return;
}
scm_request_prepare(scmrq);
+ atomic_inc(&bdev->queued_reqs);
blk_start_request(req);
ret = scm_start_aob(scmrq->aob);
scm_request_requeue(scmrq);
return;
}
- atomic_inc(&bdev->queued_reqs);
}
}
tasklet_hi_schedule(&bdev->tasklet);
}
+static void scm_blk_handle_error(struct scm_request *scmrq)
+{
+ struct scm_blk_dev *bdev = scmrq->bdev;
+ unsigned long flags;
+
+ if (scmrq->error != -EIO)
+ goto restart;
+
+ /* For -EIO the response block is valid. */
+ switch (scmrq->aob->response.eqc) {
+ case EQC_WR_PROHIBIT:
+ spin_lock_irqsave(&bdev->lock, flags);
+ if (bdev->state != SCM_WR_PROHIBIT)
+ pr_info("%lu: Write access to the SCM increment is suspended\n",
+ (unsigned long) bdev->scmdev->address);
+ bdev->state = SCM_WR_PROHIBIT;
+ spin_unlock_irqrestore(&bdev->lock, flags);
+ goto requeue;
+ default:
+ break;
+ }
+
+restart:
+ if (!scm_start_aob(scmrq->aob))
+ return;
+
+requeue:
+ spin_lock_irqsave(&bdev->rq_lock, flags);
+ scm_request_requeue(scmrq);
+ spin_unlock_irqrestore(&bdev->rq_lock, flags);
+}
+
static void scm_blk_tasklet(struct scm_blk_dev *bdev)
{
struct scm_request *scmrq;
spin_unlock_irqrestore(&bdev->lock, flags);
if (scmrq->error && scmrq->retries-- > 0) {
- if (scm_start_aob(scmrq->aob)) {
- spin_lock_irqsave(&bdev->rq_lock, flags);
- scm_request_requeue(scmrq);
- spin_unlock_irqrestore(&bdev->rq_lock, flags);
- }
+ scm_blk_handle_error(scmrq);
+
/* Request restarted or requeued, handle next. */
spin_lock_irqsave(&bdev->lock, flags);
continue;
}
scm_request_finish(scmrq);
- atomic_dec(&bdev->queued_reqs);
spin_lock_irqsave(&bdev->lock, flags);
}
spin_unlock_irqrestore(&bdev->lock, flags);
}
bdev->scmdev = scmdev;
+ bdev->state = SCM_OPER;
spin_lock_init(&bdev->rq_lock);
spin_lock_init(&bdev->lock);
INIT_LIST_HEAD(&bdev->finished_requests);
put_disk(bdev->gendisk);
}
+void scm_blk_set_available(struct scm_blk_dev *bdev)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&bdev->lock, flags);
+ if (bdev->state == SCM_WR_PROHIBIT)
+ pr_info("%lu: Write access to the SCM increment is restored\n",
+ (unsigned long) bdev->scmdev->address);
+ bdev->state = SCM_OPER;
+ spin_unlock_irqrestore(&bdev->lock, flags);
+}
+
static int __init scm_blk_init(void)
{
int ret = -EINVAL;
spinlock_t rq_lock; /* guard the request queue */
spinlock_t lock; /* guard the rest of the blockdev */
atomic_t queued_reqs;
+ enum {SCM_OPER, SCM_WR_PROHIBIT} state;
struct list_head finished_requests;
#ifdef CONFIG_SCM_BLOCK_CLUSTER_WRITE
struct list_head cluster_list;
int scm_blk_dev_setup(struct scm_blk_dev *, struct scm_device *);
void scm_blk_dev_cleanup(struct scm_blk_dev *);
+void scm_blk_set_available(struct scm_blk_dev *);
void scm_blk_irq(struct scm_device *, void *, int);
void scm_request_finish(struct scm_request *);
#include <asm/eadm.h>
#include "scm_blk.h"
-static void notify(struct scm_device *scmdev)
+static void scm_notify(struct scm_device *scmdev, enum scm_event event)
{
- pr_info("%lu: The capabilities of the SCM increment changed\n",
- (unsigned long) scmdev->address);
- SCM_LOG(2, "State changed");
- SCM_LOG_STATE(2, scmdev);
+ struct scm_blk_dev *bdev = dev_get_drvdata(&scmdev->dev);
+
+ switch (event) {
+ case SCM_CHANGE:
+ pr_info("%lu: The capabilities of the SCM increment changed\n",
+ (unsigned long) scmdev->address);
+ SCM_LOG(2, "State changed");
+ SCM_LOG_STATE(2, scmdev);
+ break;
+ case SCM_AVAIL:
+ SCM_LOG(2, "Increment available");
+ SCM_LOG_STATE(2, scmdev);
+ scm_blk_set_available(bdev);
+ break;
+ }
}
static int scm_probe(struct scm_device *scmdev)
.name = "scm_block",
.owner = THIS_MODULE,
},
- .notify = notify,
+ .notify = scm_notify,
.probe = scm_probe,
.remove = scm_remove,
.handler = scm_blk_irq,
struct read_storage_sccb *sccb;
int i, id, assigned, rc;
+ if (OLDMEM_BASE) /* No standby memory in kdump mode */
+ return 0;
if (!early_read_info_sccb_valid)
return 0;
if ((sclp_facilities & 0xe00000000000ULL) != 0xe00000000000ULL)
" failed (rc=%d).\n", ret);
}
+static void chsc_process_sei_scm_avail(struct chsc_sei_nt0_area *sei_area)
+{
+ int ret;
+
+ CIO_CRW_EVENT(4, "chsc: scm available information\n");
+ if (sei_area->rs != 7)
+ return;
+
+ ret = scm_process_availability_information();
+ if (ret)
+ CIO_CRW_EVENT(0, "chsc: process availability information"
+ " failed (rc=%d).\n", ret);
+}
+
static void chsc_process_sei_nt2(struct chsc_sei_nt2_area *sei_area)
{
switch (sei_area->cc) {
case 12: /* scm change notification */
chsc_process_sei_scm_change(sei_area);
break;
+ case 14: /* scm available notification */
+ chsc_process_sei_scm_avail(sei_area);
+ break;
default: /* other stuff */
CIO_CRW_EVENT(2, "chsc: sei nt0 unhandled cc=%d\n",
sei_area->cc);
#ifdef CONFIG_SCM_BUS
int scm_update_information(void);
+int scm_process_availability_information(void);
#else /* CONFIG_SCM_BUS */
static inline int scm_update_information(void) { return 0; }
+static inline int scm_process_availability_information(void) { return 0; }
#endif /* CONFIG_SCM_BUS */
goto out;
scmdrv = to_scm_drv(scmdev->dev.driver);
if (changed && scmdrv->notify)
- scmdrv->notify(scmdev);
+ scmdrv->notify(scmdev, SCM_CHANGE);
out:
device_unlock(&scmdev->dev);
if (changed)
return ret;
}
+static int scm_dev_avail(struct device *dev, void *unused)
+{
+ struct scm_driver *scmdrv = to_scm_drv(dev->driver);
+ struct scm_device *scmdev = to_scm_dev(dev);
+
+ if (dev->driver && scmdrv->notify)
+ scmdrv->notify(scmdev, SCM_AVAIL);
+
+ return 0;
+}
+
+int scm_process_availability_information(void)
+{
+ return bus_for_each_dev(&scm_bus_type, NULL, NULL, scm_dev_avail);
+}
+
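
With the notify callback widened to carry an scm_event, a driver on the SCM bus can tell capability changes apart from availability notifications. A rough sketch of a driver using the new signature (driver and callback names are placeholders; the struct layout follows the hunks above):

    static void my_notify(struct scm_device *scmdev, enum scm_event event)
    {
    	switch (event) {
    	case SCM_CHANGE:
    		/* increment capabilities changed: re-read characteristics */
    		break;
    	case SCM_AVAIL:
    		/* increment usable again: e.g. lift a write-prohibit state */
    		break;
    	}
    }

    static struct scm_driver my_scm_driver = {
    	.drv = {
    		.name	= "my_scm_driver",	/* placeholder */
    		.owner	= THIS_MODULE,
    	},
    	.notify	= my_notify,	/* new two-argument signature */
    };
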
static int __init scm_init(void)
{
int ret;
void *reply_param);
int qeth_get_priority_queue(struct qeth_card *, struct sk_buff *, int, int);
int qeth_get_elements_no(struct qeth_card *, void *, struct sk_buff *, int);
+int qeth_get_elements_for_frags(struct sk_buff *);
int qeth_do_send_packet_fast(struct qeth_card *, struct qeth_qdio_out_q *,
struct sk_buff *, struct qeth_hdr *, int, int, int);
int qeth_do_send_packet(struct qeth_card *, struct qeth_qdio_out_q *,
}
EXPORT_SYMBOL_GPL(qeth_get_priority_queue);
+int qeth_get_elements_for_frags(struct sk_buff *skb)
+{
+ int cnt, length, e, elements = 0;
+ struct skb_frag_struct *frag;
+ char *data;
+
+ for (cnt = 0; cnt < skb_shinfo(skb)->nr_frags; cnt++) {
+ frag = &skb_shinfo(skb)->frags[cnt];
+ data = (char *)page_to_phys(skb_frag_page(frag)) +
+ frag->page_offset;
+ length = frag->size;
+ e = PFN_UP((unsigned long)data + length - 1) -
+ PFN_DOWN((unsigned long)data);
+ elements += e;
+ }
+ return elements;
+}
+EXPORT_SYMBOL_GPL(qeth_get_elements_for_frags);
+
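
qeth_get_elements_for_frags() counts QDIO buffer elements per page actually touched instead of assuming one element per fragment, since a fragment that straddles page boundaries needs one element per page. A worked example of the PFN arithmetic, assuming a 4 KB PAGE_SIZE:

    /* Fragment at physical address 0x1f00 (7936), length 6000 bytes:
     *
     *	PFN_DOWN(7936)			= 7936 >> 12		= 1
     *	PFN_UP(7936 + 6000 - 1)		= (13935 + 4095) >> 12	= 4
     *
     * e = 4 - 1 = 3: the fragment touches pages 1, 2 and 3 and needs
     * three buffer elements, where the old nr_frags count assumed one.
     */
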
int qeth_get_elements_no(struct qeth_card *card, void *hdr,
struct sk_buff *skb, int elems)
{
int elements_needed = PFN_UP((unsigned long)skb->data + dlen - 1) -
PFN_DOWN((unsigned long)skb->data);
- elements_needed += skb_shinfo(skb)->nr_frags;
+ elements_needed += qeth_get_elements_for_frags(skb);
+
if ((elements_needed + elems) > QETH_MAX_BUFFER_ELEMENTS(card)) {
QETH_DBF_MESSAGE(2, "Invalid size of IP packet "
"(Number=%d / Length=%d). Discarded.\n",
for (cnt = 0; cnt < skb_shinfo(skb)->nr_frags; cnt++) {
frag = &skb_shinfo(skb)->frags[cnt];
- buffer->element[element].addr = (char *)
- page_to_phys(skb_frag_page(frag))
- + frag->page_offset;
- buffer->element[element].length = frag->size;
- buffer->element[element].eflags = SBAL_EFLAGS_MIDDLE_FRAG;
- element++;
+ data = (char *)page_to_phys(skb_frag_page(frag)) +
+ frag->page_offset;
+ length = frag->size;
+ while (length > 0) {
+ length_here = PAGE_SIZE -
+ ((unsigned long) data % PAGE_SIZE);
+ if (length < length_here)
+ length_here = length;
+
+ buffer->element[element].addr = data;
+ buffer->element[element].length = length_here;
+ buffer->element[element].eflags =
+ SBAL_EFLAGS_MIDDLE_FRAG;
+ length -= length_here;
+ data += length_here;
+ element++;
+ }
}
if (buffer->element[element - 1].eflags)
return rc;
}
-static void qeth_l3_correct_routing_type(struct qeth_card *card,
+static int qeth_l3_correct_routing_type(struct qeth_card *card,
enum qeth_routing_types *type, enum qeth_prot_versions prot)
{
if (card->info.type == QETH_CARD_TYPE_IQD) {
case PRIMARY_CONNECTOR:
case SECONDARY_CONNECTOR:
case MULTICAST_ROUTER:
- return;
+ return 0;
default:
goto out_inval;
}
case NO_ROUTER:
case PRIMARY_ROUTER:
case SECONDARY_ROUTER:
- return;
+ return 0;
case MULTICAST_ROUTER:
if (qeth_is_ipafunc_supported(card, prot,
IPA_OSA_MC_ROUTER))
- return;
+ return 0;
default:
goto out_inval;
}
}
out_inval:
*type = NO_ROUTER;
+ return -EINVAL;
}
int qeth_l3_setrouting_v4(struct qeth_card *card)
QETH_CARD_TEXT(card, 3, "setrtg4");
- qeth_l3_correct_routing_type(card, &card->options.route4.type,
+ rc = qeth_l3_correct_routing_type(card, &card->options.route4.type,
QETH_PROT_IPV4);
+ if (rc)
+ return rc;
rc = qeth_l3_send_setrouting(card, card->options.route4.type,
QETH_PROT_IPV4);
if (!qeth_is_supported(card, IPA_IPV6))
return 0;
- qeth_l3_correct_routing_type(card, &card->options.route6.type,
+ rc = qeth_l3_correct_routing_type(card, &card->options.route6.type,
QETH_PROT_IPV6);
+ if (rc)
+ return rc;
rc = qeth_l3_send_setrouting(card, card->options.route6.type,
QETH_PROT_IPV6);
tcp_hdr(skb)->doff * 4;
int tcpd_len = skb->len - (tcpd - (unsigned long)skb->data);
int elements = PFN_UP(tcpd + tcpd_len - 1) - PFN_DOWN(tcpd);
- elements += skb_shinfo(skb)->nr_frags;
+
+ elements += qeth_get_elements_for_frags(skb);
+
return elements;
}
rc = -ENODEV;
goto out_remove;
}
- qeth_trace_features(card);
if (!card->dev && qeth_l3_setup_netdev(card)) {
rc = -ENODEV;
qeth_l3_set_multicast_list(card->dev);
rtnl_unlock();
}
+ qeth_trace_features(card);
/* let user_space know that device is online */
kobject_uevent(&gdev->dev.kobj, KOBJ_CHANGE);
mutex_unlock(&card->conf_mutex);
rc = qeth_l3_setrouting_v6(card);
}
out:
+ if (rc)
+ route->type = old_route_type;
mutex_unlock(&card->conf_mutex);
return rc ? rc : count;
}
case TRIG_NONE:
/* continuous acquisition */
devpriv->ai_continous = 1;
- devpriv->ai_sample_count = 0;
+ devpriv->ai_sample_count = 1;
break;
}
depends on CONFIGFS_FS=y && SYSFS=y && !HIGHMEM && ZCACHE=y
depends on NET
# must ensure struct page is 8-byte aligned
- select HAVE_ALIGNED_STRUCT_PAGE if !64_BIT
+ select HAVE_ALIGNED_STRUCT_PAGE if !64BIT
default n
help
RAMster allows RAM on other machines in a cluster to be utilized
{
char *endptr;
unsigned long id;
+ unsigned char id_as_uchar;
unsigned char digest[MD5_SIGNATURE_SIZE];
unsigned char type, response[MD5_SIGNATURE_SIZE * 2 + 2];
unsigned char identifier[10], *challenge = NULL;
goto out;
}
- sg_init_one(&sg, &id, 1);
+ /* To handle both endiannesses */
+ id_as_uchar = id;
+ sg_init_one(&sg, &id_as_uchar, 1);
ret = crypto_hash_update(&desc, &sg, 1);
if (ret < 0) {
pr_err("crypto_hash_update() failed for id\n");
#define FD_DEVICE_QUEUE_DEPTH 32
#define FD_MAX_DEVICE_QUEUE_DEPTH 128
#define FD_BLOCKSIZE 512
-#define FD_MAX_SECTORS 1024
+#define FD_MAX_SECTORS 2048
#define RRF_EMULATE_CDB 0x01
#define RRF_GOT_LBA 0x02
pr_debug("PSCSI: i: %d page: %p len: %d off: %d\n", i,
page, len, off);
- while (len > 0 && data_len > 0) {
+ /*
+ * We only have one page of data in each sg element,
+ * we cannot cross a page boundary.
+ */
+ if (off + len > PAGE_SIZE)
+ goto fail;
+
+ if (len > 0 && data_len > 0) {
bytes = min_t(unsigned int, len, PAGE_SIZE - off);
bytes = min(bytes, data_len);
bio = NULL;
}
- len -= bytes;
data_len -= bytes;
- off = 0;
}
}
break;
case SYNCHRONIZE_CACHE:
case SYNCHRONIZE_CACHE_16:
- if (!ops->execute_sync_cache)
- return TCM_UNSUPPORTED_SCSI_OPCODE;
+ if (!ops->execute_sync_cache) {
+ size = 0;
+ cmd->execute_cmd = sbc_emulate_noop;
+ break;
+ }
/*
* Extract LBA and range to be flushed for emulated SYNCHRONIZE_CACHE
if (se_tpg->se_tpg_type == TRANSPORT_TPG_TYPE_NORMAL) {
if (core_tpg_setup_virtual_lun0(se_tpg) < 0) {
- kfree(se_tpg);
+ array_free(se_tpg->tpg_lun_list,
+ TRANSPORT_MAX_LUNS_PER_TPG);
return -ENOMEM;
}
}
return ret;
ret = target_check_reservation(cmd);
- if (ret)
+ if (ret) {
+ cmd->scsi_status = SAM_STAT_RESERVATION_CONFLICT;
return ret;
+ }
ret = dev->transport->parse_cdb(cmd);
if (ret)
if (!priv)
return -ENOMEM;
- priv->sensor = devm_request_and_ioremap(&pdev->dev, res);
- if (!priv->sensor) {
- dev_err(&pdev->dev, "Failed to request_ioremap memory\n");
- return -EADDRNOTAVAIL;
- }
+ priv->sensor = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(priv->sensor))
+ return PTR_ERR(priv->sensor);
res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
if (!res) {
dev_err(&pdev->dev, "Failed to get platform resource\n");
return -ENODEV;
}
- priv->control = devm_request_and_ioremap(&pdev->dev, res);
- if (!priv->control) {
- dev_err(&pdev->dev, "Failed to request_ioremap memory\n");
- return -EADDRNOTAVAIL;
- }
+ priv->control = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(priv->control))
+ return PTR_ERR(priv->control);
ret = dove_init_sensor(priv);
if (ret) {
if (IS_ERR(th_zone->therm_dev)) {
pr_err("Failed to register thermal zone device\n");
- ret = -EINVAL;
+ ret = PTR_ERR(th_zone->therm_dev);
goto err_unregister;
}
th_zone->mode = THERMAL_DEVICE_ENABLED;
if (!priv)
return -ENOMEM;
- priv->sensor = devm_request_and_ioremap(&pdev->dev, res);
- if (!priv->sensor) {
- dev_err(&pdev->dev, "Failed to request_ioremap memory\n");
- return -EADDRNOTAVAIL;
- }
+ priv->sensor = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(priv->sensor))
+ return PTR_ERR(priv->sensor);
thermal = thermal_zone_device_register("kirkwood_thermal", 0, 0,
priv, &ops, NULL, 0, 0);
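
The dove and kirkwood hunks above, like the rcar ones below, switch from devm_request_and_ioremap() to devm_ioremap_resource(), which requests and maps the region, logs its own error message, and reports failure via ERR_PTR() instead of NULL. The resulting probe idiom, sketched with generic names:

    /* assumes <linux/err.h>, <linux/io.h>, <linux/platform_device.h> */
    struct resource *res;
    void __iomem *base;

    res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
    base = devm_ioremap_resource(&pdev->dev, res);	/* also rejects res == NULL */
    if (IS_ERR(base))
    	return PTR_ERR(base);	/* -EINVAL, -EBUSY or -ENOMEM */
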
struct device *dev = rcar_priv_to_dev(priv);
int i;
int ctemp, old, new;
+ int ret = -EINVAL;
mutex_lock(&priv->lock);
if (!ctemp) {
dev_err(dev, "thermal sensor was broken\n");
- return -EINVAL;
+ goto err_out_unlock;
}
/*
dev_dbg(dev, "thermal%d %d -> %d\n", priv->id, priv->ctemp, ctemp);
priv->ctemp = ctemp;
-
+ ret = 0;
+err_out_unlock:
mutex_unlock(&priv->lock);
-
- return 0;
+ return ret;
}
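
The rcar fix above converts an early return that leaked priv->lock into the single-exit goto-unlock idiom, so the mutex is released on every path. The shape of the pattern, with placeholder names:

    static int update_under_lock(struct my_priv *priv)	/* placeholder type */
    {
    	int ret = -EINVAL;	/* pessimistic default */

    	mutex_lock(&priv->lock);
    	if (!sensor_ok(priv))	/* hypothetical check */
    		goto out_unlock;	/* error path unlocks too */

    	do_update(priv);	/* hypothetical work */
    	ret = 0;
    out_unlock:
    	mutex_unlock(&priv->lock);
    	return ret;
    }
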
static int rcar_thermal_get_temp(struct thermal_zone_device *zone,
struct resource *res, *irq;
int mres = 0;
int i;
+ int ret = -ENODEV;
int idle = IDLE_INTERVAL;
common = devm_kzalloc(dev, sizeof(*common), GFP_KERNEL);
/*
* rcar_has_irq_support() will be enabled
*/
- common->base = devm_request_and_ioremap(dev, res);
- if (!common->base) {
- dev_err(dev, "Unable to ioremap thermal register\n");
- return -ENOMEM;
- }
+ common->base = devm_ioremap_resource(dev, res);
+ if (IS_ERR(common->base))
+ return PTR_ERR(common->base);
/* enable temperature comparison */
rcar_thermal_common_write(common, ENR, 0x00030303);
return -ENOMEM;
}
- priv->base = devm_request_and_ioremap(dev, res);
- if (!priv->base) {
- dev_err(dev, "Unable to ioremap priv register\n");
- return -ENOMEM;
- }
+ priv->base = devm_ioremap_resource(dev, res);
+ if (IS_ERR(priv->base))
+ return PTR_ERR(priv->base);
priv->common = common;
priv->id = i;
idle);
if (IS_ERR(priv->zone)) {
dev_err(dev, "can't register thermal zone\n");
+ ret = PTR_ERR(priv->zone);
goto error_unregister;
}
rcar_thermal_for_each_priv(priv, common)
thermal_zone_device_unregister(priv->zone);
- return -ENODEV;
+ return ret;
}
static int rcar_thermal_remove(struct platform_device *pdev)
+++ /dev/null
-/*
- * Driver for 8250/16550-type serial ports
- *
- * Based on drivers/char/serial.c, by Linus Torvalds, Theodore Ts'o.
- *
- * Copyright (C) 2001 Russell King.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * A note about mapbase / membase
- *
- * mapbase is the physical address of the IO port.
- * membase is an 'ioremapped' cookie.
- */
-
-#if defined(CONFIG_SERIAL_8250_CONSOLE) && defined(CONFIG_MAGIC_SYSRQ)
-#define SUPPORT_SYSRQ
-#endif
-
-#include <linux/module.h>
-#include <linux/moduleparam.h>
-#include <linux/ioport.h>
-#include <linux/init.h>
-#include <linux/console.h>
-#include <linux/sysrq.h>
-#include <linux/delay.h>
-#include <linux/platform_device.h>
-#include <linux/tty.h>
-#include <linux/ratelimit.h>
-#include <linux/tty_flip.h>
-#include <linux/serial_reg.h>
-#include <linux/serial_core.h>
-#include <linux/serial.h>
-#include <linux/serial_8250.h>
-#include <linux/nmi.h>
-#include <linux/mutex.h>
-#include <linux/slab.h>
-#ifdef CONFIG_SPARC
-#include <linux/sunserialcore.h>
-#endif
-
-#include <asm/io.h>
-#include <asm/irq.h>
-
-#include "8250.h"
-
-/*
- * Configuration:
- * share_irqs - whether we pass IRQF_SHARED to request_irq(). This option
- * is unsafe when used on edge-triggered interrupts.
- */
-static unsigned int share_irqs = SERIAL8250_SHARE_IRQS;
-
-static unsigned int nr_uarts = CONFIG_SERIAL_8250_RUNTIME_UARTS;
-
-static struct uart_driver serial8250_reg;
-
-static int serial_index(struct uart_port *port)
-{
- return (serial8250_reg.minor - 64) + port->line;
-}
-
-static unsigned int skip_txen_test; /* force skip of txen test at init time */
-
-/*
- * Debugging.
- */
-#if 0
-#define DEBUG_AUTOCONF(fmt...) printk(fmt)
-#else
-#define DEBUG_AUTOCONF(fmt...) do { } while (0)
-#endif
-
-#if 0
-#define DEBUG_INTR(fmt...) printk(fmt)
-#else
-#define DEBUG_INTR(fmt...) do { } while (0)
-#endif
-
-#define PASS_LIMIT 512
-
-#define BOTH_EMPTY (UART_LSR_TEMT | UART_LSR_THRE)
-
-
-#ifdef CONFIG_SERIAL_8250_DETECT_IRQ
-#define CONFIG_SERIAL_DETECT_IRQ 1
-#endif
-#ifdef CONFIG_SERIAL_8250_MANY_PORTS
-#define CONFIG_SERIAL_MANY_PORTS 1
-#endif
-
-/*
- * HUB6 is always on. This will be removed once the header
- * files have been cleaned.
- */
-#define CONFIG_HUB6 1
-
-#include <asm/serial.h>
-/*
- * SERIAL_PORT_DFNS tells us about built-in ports that have no
- * standard enumeration mechanism. Platforms that can find all
- * serial ports via mechanisms like ACPI or PCI need not supply it.
- */
-#ifndef SERIAL_PORT_DFNS
-#define SERIAL_PORT_DFNS
-#endif
-
-static const struct old_serial_port old_serial_port[] = {
- SERIAL_PORT_DFNS /* defined in asm/serial.h */
-};
-
-#define UART_NR CONFIG_SERIAL_8250_NR_UARTS
-
-#ifdef CONFIG_SERIAL_8250_RSA
-
-#define PORT_RSA_MAX 4
-static unsigned long probe_rsa[PORT_RSA_MAX];
-static unsigned int probe_rsa_count;
-#endif /* CONFIG_SERIAL_8250_RSA */
-
-struct irq_info {
- struct hlist_node node;
- int irq;
- spinlock_t lock; /* Protects list not the hash */
- struct list_head *head;
-};
-
-#define NR_IRQ_HASH 32 /* Can be adjusted later */
-static struct hlist_head irq_lists[NR_IRQ_HASH];
-static DEFINE_MUTEX(hash_mutex); /* Used to walk the hash */
-
-/*
- * Here we define the default xmit fifo size used for each type of UART.
- */
-static const struct serial8250_config uart_config[] = {
- [PORT_UNKNOWN] = {
- .name = "unknown",
- .fifo_size = 1,
- .tx_loadsz = 1,
- },
- [PORT_8250] = {
- .name = "8250",
- .fifo_size = 1,
- .tx_loadsz = 1,
- },
- [PORT_16450] = {
- .name = "16450",
- .fifo_size = 1,
- .tx_loadsz = 1,
- },
- [PORT_16550] = {
- .name = "16550",
- .fifo_size = 1,
- .tx_loadsz = 1,
- },
- [PORT_16550A] = {
- .name = "16550A",
- .fifo_size = 16,
- .tx_loadsz = 16,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
- .flags = UART_CAP_FIFO,
- },
- [PORT_CIRRUS] = {
- .name = "Cirrus",
- .fifo_size = 1,
- .tx_loadsz = 1,
- },
- [PORT_16650] = {
- .name = "ST16650",
- .fifo_size = 1,
- .tx_loadsz = 1,
- .flags = UART_CAP_FIFO | UART_CAP_EFR | UART_CAP_SLEEP,
- },
- [PORT_16650V2] = {
- .name = "ST16650V2",
- .fifo_size = 32,
- .tx_loadsz = 16,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_01 |
- UART_FCR_T_TRIG_00,
- .flags = UART_CAP_FIFO | UART_CAP_EFR | UART_CAP_SLEEP,
- },
- [PORT_16750] = {
- .name = "TI16750",
- .fifo_size = 64,
- .tx_loadsz = 64,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10 |
- UART_FCR7_64BYTE,
- .flags = UART_CAP_FIFO | UART_CAP_SLEEP | UART_CAP_AFE,
- },
- [PORT_STARTECH] = {
- .name = "Startech",
- .fifo_size = 1,
- .tx_loadsz = 1,
- },
- [PORT_16C950] = {
- .name = "16C950/954",
- .fifo_size = 128,
- .tx_loadsz = 128,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
- /* UART_CAP_EFR breaks billionon CF bluetooth card. */
- .flags = UART_CAP_FIFO | UART_CAP_SLEEP,
- },
- [PORT_16654] = {
- .name = "ST16654",
- .fifo_size = 64,
- .tx_loadsz = 32,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_01 |
- UART_FCR_T_TRIG_10,
- .flags = UART_CAP_FIFO | UART_CAP_EFR | UART_CAP_SLEEP,
- },
- [PORT_16850] = {
- .name = "XR16850",
- .fifo_size = 128,
- .tx_loadsz = 128,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
- .flags = UART_CAP_FIFO | UART_CAP_EFR | UART_CAP_SLEEP,
- },
- [PORT_RSA] = {
- .name = "RSA",
- .fifo_size = 2048,
- .tx_loadsz = 2048,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_11,
- .flags = UART_CAP_FIFO,
- },
- [PORT_NS16550A] = {
- .name = "NS16550A",
- .fifo_size = 16,
- .tx_loadsz = 16,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
- .flags = UART_CAP_FIFO | UART_NATSEMI,
- },
- [PORT_XSCALE] = {
- .name = "XScale",
- .fifo_size = 32,
- .tx_loadsz = 32,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
- .flags = UART_CAP_FIFO | UART_CAP_UUE | UART_CAP_RTOIE,
- },
- [PORT_OCTEON] = {
- .name = "OCTEON",
- .fifo_size = 64,
- .tx_loadsz = 64,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
- .flags = UART_CAP_FIFO,
- },
- [PORT_AR7] = {
- .name = "AR7",
- .fifo_size = 16,
- .tx_loadsz = 16,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_00,
- .flags = UART_CAP_FIFO | UART_CAP_AFE,
- },
- [PORT_U6_16550A] = {
- .name = "U6_16550A",
- .fifo_size = 64,
- .tx_loadsz = 64,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
- .flags = UART_CAP_FIFO | UART_CAP_AFE,
- },
- [PORT_TEGRA] = {
- .name = "Tegra",
- .fifo_size = 32,
- .tx_loadsz = 8,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_01 |
- UART_FCR_T_TRIG_01,
- .flags = UART_CAP_FIFO | UART_CAP_RTOIE,
- },
- [PORT_XR17D15X] = {
- .name = "XR17D15X",
- .fifo_size = 64,
- .tx_loadsz = 64,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
- .flags = UART_CAP_FIFO | UART_CAP_AFE | UART_CAP_EFR |
- UART_CAP_SLEEP,
- },
- [PORT_XR17V35X] = {
- .name = "XR17V35X",
- .fifo_size = 256,
- .tx_loadsz = 256,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_11 |
- UART_FCR_T_TRIG_11,
- .flags = UART_CAP_FIFO | UART_CAP_AFE | UART_CAP_EFR |
- UART_CAP_SLEEP,
- },
- [PORT_LPC3220] = {
- .name = "LPC3220",
- .fifo_size = 64,
- .tx_loadsz = 32,
- .fcr = UART_FCR_DMA_SELECT | UART_FCR_ENABLE_FIFO |
- UART_FCR_R_TRIG_00 | UART_FCR_T_TRIG_00,
- .flags = UART_CAP_FIFO,
- },
- [PORT_BRCM_TRUMANAGE] = {
- .name = "TruManage",
- .fifo_size = 1,
- .tx_loadsz = 1024,
- .flags = UART_CAP_HFIFO,
- },
- [PORT_8250_CIR] = {
- .name = "CIR port"
- },
- [PORT_ALTR_16550_F32] = {
- .name = "Altera 16550 FIFO32",
- .fifo_size = 32,
- .tx_loadsz = 32,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
- .flags = UART_CAP_FIFO | UART_CAP_AFE,
- },
- [PORT_ALTR_16550_F64] = {
- .name = "Altera 16550 FIFO64",
- .fifo_size = 64,
- .tx_loadsz = 64,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
- .flags = UART_CAP_FIFO | UART_CAP_AFE,
- },
- [PORT_ALTR_16550_F128] = {
- .name = "Altera 16550 FIFO128",
- .fifo_size = 128,
- .tx_loadsz = 128,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
- .flags = UART_CAP_FIFO | UART_CAP_AFE,
- },
-};
-
-/* Uart divisor latch read */
-static int default_serial_dl_read(struct uart_8250_port *up)
-{
- return serial_in(up, UART_DLL) | serial_in(up, UART_DLM) << 8;
-}
-
-/* Uart divisor latch write */
-static void default_serial_dl_write(struct uart_8250_port *up, int value)
-{
- serial_out(up, UART_DLL, value & 0xff);
- serial_out(up, UART_DLM, value >> 8 & 0xff);
-}
-
-#if defined(CONFIG_MIPS_ALCHEMY) || defined(CONFIG_SERIAL_8250_RT288X)
-
-/* Au1x00/RT288x UART hardware has a weird register layout */
-static const u8 au_io_in_map[] = {
- [UART_RX] = 0,
- [UART_IER] = 2,
- [UART_IIR] = 3,
- [UART_LCR] = 5,
- [UART_MCR] = 6,
- [UART_LSR] = 7,
- [UART_MSR] = 8,
-};
-
-static const u8 au_io_out_map[] = {
- [UART_TX] = 1,
- [UART_IER] = 2,
- [UART_FCR] = 4,
- [UART_LCR] = 5,
- [UART_MCR] = 6,
-};
-
-static unsigned int au_serial_in(struct uart_port *p, int offset)
-{
- offset = au_io_in_map[offset] << p->regshift;
- return __raw_readl(p->membase + offset);
-}
-
-static void au_serial_out(struct uart_port *p, int offset, int value)
-{
- offset = au_io_out_map[offset] << p->regshift;
- __raw_writel(value, p->membase + offset);
-}
-
-/* Au1x00 doesn't have a standard divisor latch */
-static int au_serial_dl_read(struct uart_8250_port *up)
-{
- return __raw_readl(up->port.membase + 0x28);
-}
-
-static void au_serial_dl_write(struct uart_8250_port *up, int value)
-{
- __raw_writel(value, up->port.membase + 0x28);
-}
-
-#endif
-
-static unsigned int hub6_serial_in(struct uart_port *p, int offset)
-{
- offset = offset << p->regshift;
- outb(p->hub6 - 1 + offset, p->iobase);
- return inb(p->iobase + 1);
-}
-
-static void hub6_serial_out(struct uart_port *p, int offset, int value)
-{
- offset = offset << p->regshift;
- outb(p->hub6 - 1 + offset, p->iobase);
- outb(value, p->iobase + 1);
-}
-
-static unsigned int mem_serial_in(struct uart_port *p, int offset)
-{
- offset = offset << p->regshift;
- return readb(p->membase + offset);
-}
-
-static void mem_serial_out(struct uart_port *p, int offset, int value)
-{
- offset = offset << p->regshift;
- writeb(value, p->membase + offset);
-}
-
-static void mem32_serial_out(struct uart_port *p, int offset, int value)
-{
- offset = offset << p->regshift;
- writel(value, p->membase + offset);
-}
-
-static unsigned int mem32_serial_in(struct uart_port *p, int offset)
-{
- offset = offset << p->regshift;
- return readl(p->membase + offset);
-}
-
-static unsigned int io_serial_in(struct uart_port *p, int offset)
-{
- offset = offset << p->regshift;
- return inb(p->iobase + offset);
-}
-
-static void io_serial_out(struct uart_port *p, int offset, int value)
-{
- offset = offset << p->regshift;
- outb(value, p->iobase + offset);
-}
-
-static int serial8250_default_handle_irq(struct uart_port *port);
-static int exar_handle_irq(struct uart_port *port);
-
-static void set_io_from_upio(struct uart_port *p)
-{
- struct uart_8250_port *up =
- container_of(p, struct uart_8250_port, port);
-
- up->dl_read = default_serial_dl_read;
- up->dl_write = default_serial_dl_write;
-
- switch (p->iotype) {
- case UPIO_HUB6:
- p->serial_in = hub6_serial_in;
- p->serial_out = hub6_serial_out;
- break;
-
- case UPIO_MEM:
- p->serial_in = mem_serial_in;
- p->serial_out = mem_serial_out;
- break;
-
- case UPIO_MEM32:
- p->serial_in = mem32_serial_in;
- p->serial_out = mem32_serial_out;
- break;
-
-#if defined(CONFIG_MIPS_ALCHEMY) || defined(CONFIG_SERIAL_8250_RT288X)
- case UPIO_AU:
- p->serial_in = au_serial_in;
- p->serial_out = au_serial_out;
- up->dl_read = au_serial_dl_read;
- up->dl_write = au_serial_dl_write;
- break;
-#endif
-
- default:
- p->serial_in = io_serial_in;
- p->serial_out = io_serial_out;
- break;
- }
- /* Remember loaded iotype */
- up->cur_iotype = p->iotype;
- p->handle_irq = serial8250_default_handle_irq;
-}
-
-static void
-serial_port_out_sync(struct uart_port *p, int offset, int value)
-{
- switch (p->iotype) {
- case UPIO_MEM:
- case UPIO_MEM32:
- case UPIO_AU:
- p->serial_out(p, offset, value);
- p->serial_in(p, UART_LCR); /* safe, no side-effects */
- break;
- default:
- p->serial_out(p, offset, value);
- }
-}
-
-/*
- * For the 16C950
- */
-static void serial_icr_write(struct uart_8250_port *up, int offset, int value)
-{
- serial_out(up, UART_SCR, offset);
- serial_out(up, UART_ICR, value);
-}
-
-static unsigned int serial_icr_read(struct uart_8250_port *up, int offset)
-{
- unsigned int value;
-
- serial_icr_write(up, UART_ACR, up->acr | UART_ACR_ICRRD);
- serial_out(up, UART_SCR, offset);
- value = serial_in(up, UART_ICR);
- serial_icr_write(up, UART_ACR, up->acr);
-
- return value;
-}
-
-/*
- * FIFO support.
- */
-static void serial8250_clear_fifos(struct uart_8250_port *p)
-{
- if (p->capabilities & UART_CAP_FIFO) {
- serial_out(p, UART_FCR, UART_FCR_ENABLE_FIFO);
- serial_out(p, UART_FCR, UART_FCR_ENABLE_FIFO |
- UART_FCR_CLEAR_RCVR | UART_FCR_CLEAR_XMIT);
- serial_out(p, UART_FCR, 0);
- }
-}
-
-void serial8250_clear_and_reinit_fifos(struct uart_8250_port *p)
-{
- unsigned char fcr;
-
- serial8250_clear_fifos(p);
- fcr = uart_config[p->port.type].fcr;
- serial_out(p, UART_FCR, fcr);
-}
-EXPORT_SYMBOL_GPL(serial8250_clear_and_reinit_fifos);
-
-/*
- * IER sleep support. UARTs which have EFRs need the "extended
- * capability" bit enabled. Note that on XR16C850s, we need to
- * reset LCR to write to IER.
- */
-static void serial8250_set_sleep(struct uart_8250_port *p, int sleep)
-{
- /*
- * Exar UARTs have a SLEEP register that enables or disables
- * each UART to enter sleep mode separately. On the XR17V35x the
- * register is accessible to each UART at the UART_EXAR_SLEEP
- * offset but the UART channel may only write to the corresponding
- * bit.
- */
- if ((p->port.type == PORT_XR17V35X) ||
- (p->port.type == PORT_XR17D15X)) {
- serial_out(p, UART_EXAR_SLEEP, 0xff);
- return;
- }
-
- if (p->capabilities & UART_CAP_SLEEP) {
- if (p->capabilities & UART_CAP_EFR) {
- serial_out(p, UART_LCR, UART_LCR_CONF_MODE_B);
- serial_out(p, UART_EFR, UART_EFR_ECB);
- serial_out(p, UART_LCR, 0);
- }
- serial_out(p, UART_IER, sleep ? UART_IERX_SLEEP : 0);
- if (p->capabilities & UART_CAP_EFR) {
- serial_out(p, UART_LCR, UART_LCR_CONF_MODE_B);
- serial_out(p, UART_EFR, 0);
- serial_out(p, UART_LCR, 0);
- }
- }
-}
-
-#ifdef CONFIG_SERIAL_8250_RSA
-/*
- * Attempts to turn on the RSA FIFO. Returns zero on failure.
- * We set the port uart clock rate if we succeed.
- */
-static int __enable_rsa(struct uart_8250_port *up)
-{
- unsigned char mode;
- int result;
-
- mode = serial_in(up, UART_RSA_MSR);
- result = mode & UART_RSA_MSR_FIFO;
-
- if (!result) {
- serial_out(up, UART_RSA_MSR, mode | UART_RSA_MSR_FIFO);
- mode = serial_in(up, UART_RSA_MSR);
- result = mode & UART_RSA_MSR_FIFO;
- }
-
- if (result)
- up->port.uartclk = SERIAL_RSA_BAUD_BASE * 16;
-
- return result;
-}
-
-static void enable_rsa(struct uart_8250_port *up)
-{
- if (up->port.type == PORT_RSA) {
- if (up->port.uartclk != SERIAL_RSA_BAUD_BASE * 16) {
- spin_lock_irq(&up->port.lock);
- __enable_rsa(up);
- spin_unlock_irq(&up->port.lock);
- }
- if (up->port.uartclk == SERIAL_RSA_BAUD_BASE * 16)
- serial_out(up, UART_RSA_FRR, 0);
- }
-}
-
-/*
- * Attempts to turn off the RSA FIFO. Returns zero on failure.
- * It is unknown why interrupts were disabled in here. However,
- * the caller is expected to preserve this behaviour by grabbing
- * the spinlock before calling this function.
- */
-static void disable_rsa(struct uart_8250_port *up)
-{
- unsigned char mode;
- int result;
-
- if (up->port.type == PORT_RSA &&
- up->port.uartclk == SERIAL_RSA_BAUD_BASE * 16) {
- spin_lock_irq(&up->port.lock);
-
- mode = serial_in(up, UART_RSA_MSR);
- result = !(mode & UART_RSA_MSR_FIFO);
-
- if (!result) {
- serial_out(up, UART_RSA_MSR, mode & ~UART_RSA_MSR_FIFO);
- mode = serial_in(up, UART_RSA_MSR);
- result = !(mode & UART_RSA_MSR_FIFO);
- }
-
- if (result)
- up->port.uartclk = SERIAL_RSA_BAUD_BASE_LO * 16;
- spin_unlock_irq(&up->port.lock);
- }
-}
-#endif /* CONFIG_SERIAL_8250_RSA */
-
-/*
- * This is a quickie test to see how big the FIFO is.
- * It doesn't work all the time, more's the pity.
- */
-static int size_fifo(struct uart_8250_port *up)
-{
- unsigned char old_fcr, old_mcr, old_lcr;
- unsigned short old_dl;
- int count;
-
- old_lcr = serial_in(up, UART_LCR);
- serial_out(up, UART_LCR, 0);
- old_fcr = serial_in(up, UART_FCR);
- old_mcr = serial_in(up, UART_MCR);
- serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO |
- UART_FCR_CLEAR_RCVR | UART_FCR_CLEAR_XMIT);
- serial_out(up, UART_MCR, UART_MCR_LOOP);
- serial_out(up, UART_LCR, UART_LCR_CONF_MODE_A);
- old_dl = serial_dl_read(up);
- serial_dl_write(up, 0x0001);
- serial_out(up, UART_LCR, 0x03);
- for (count = 0; count < 256; count++)
- serial_out(up, UART_TX, count);
- mdelay(20);/* FIXME - schedule_timeout */
- for (count = 0; (serial_in(up, UART_LSR) & UART_LSR_DR) &&
- (count < 256); count++)
- serial_in(up, UART_RX);
- serial_out(up, UART_FCR, old_fcr);
- serial_out(up, UART_MCR, old_mcr);
- serial_out(up, UART_LCR, UART_LCR_CONF_MODE_A);
- serial_dl_write(up, old_dl);
- serial_out(up, UART_LCR, old_lcr);
-
- return count;
-}
-
-/*
- * Read UART ID using the divisor method - set DLL and DLM to zero
- * and the revision will be in DLL and device type in DLM. We
- * preserve the device state across this.
- */
-static unsigned int autoconfig_read_divisor_id(struct uart_8250_port *p)
-{
- unsigned char old_dll, old_dlm, old_lcr;
- unsigned int id;
-
- old_lcr = serial_in(p, UART_LCR);
- serial_out(p, UART_LCR, UART_LCR_CONF_MODE_A);
-
- old_dll = serial_in(p, UART_DLL);
- old_dlm = serial_in(p, UART_DLM);
-
- serial_out(p, UART_DLL, 0);
- serial_out(p, UART_DLM, 0);
-
- id = serial_in(p, UART_DLL) | serial_in(p, UART_DLM) << 8;
-
- serial_out(p, UART_DLL, old_dll);
- serial_out(p, UART_DLM, old_dlm);
- serial_out(p, UART_LCR, old_lcr);
-
- return id;
-}
-
-/*
- * This is a helper routine to autodetect StarTech/Exar/Oxsemi UART's.
- * When this function is called we know it is at least a StarTech
- * 16650 V2, but it might be one of several StarTech UARTs, or one of
- * its clones. (We treat the broken original StarTech 16650 V1 as a
- * 16550, and why not? Startech doesn't seem to even acknowledge its
- * existence.)
- *
- * What evil have men's minds wrought...
- */
-static void autoconfig_has_efr(struct uart_8250_port *up)
-{
- unsigned int id1, id2, id3, rev;
-
- /*
- * Everything with an EFR has SLEEP
- */
- up->capabilities |= UART_CAP_EFR | UART_CAP_SLEEP;
-
- /*
- * First we check to see if it's an Oxford Semiconductor UART.
- *
- * We have to do this here because some non-National
- * Semiconductor clone chips lock up if you try writing to the
- * LSR register (which serial_icr_read does)
- */
-
- /*
- * Check for Oxford Semiconductor 16C950.
- *
- * EFR [4] must be set else this test fails.
- *
- * This shouldn't be necessary, but Mike Hudson (Exoray@isys.ca)
- * claims that it's needed for 952 dual UART's (which are not
- * recommended for new designs).
- */
- up->acr = 0;
- serial_out(up, UART_LCR, UART_LCR_CONF_MODE_B);
- serial_out(up, UART_EFR, UART_EFR_ECB);
- serial_out(up, UART_LCR, 0x00);
- id1 = serial_icr_read(up, UART_ID1);
- id2 = serial_icr_read(up, UART_ID2);
- id3 = serial_icr_read(up, UART_ID3);
- rev = serial_icr_read(up, UART_REV);
-
- DEBUG_AUTOCONF("950id=%02x:%02x:%02x:%02x ", id1, id2, id3, rev);
-
- if (id1 == 0x16 && id2 == 0xC9 &&
- (id3 == 0x50 || id3 == 0x52 || id3 == 0x54)) {
- up->port.type = PORT_16C950;
-
- /*
- * Enable work around for the Oxford Semiconductor 952 rev B
- * chip which causes it to seriously miscalculate baud rates
- * when DLL is 0.
- */
- if (id3 == 0x52 && rev == 0x01)
- up->bugs |= UART_BUG_QUOT;
- return;
- }
-
- /*
- * We check for a XR16C850 by setting DLL and DLM to 0, and then
- * reading back DLL and DLM. The chip type depends on the DLM
- * value read back:
- * 0x10 - XR16C850 and the DLL contains the chip revision.
- * 0x12 - XR16C2850.
- * 0x14 - XR16C854.
- */
- id1 = autoconfig_read_divisor_id(up);
- DEBUG_AUTOCONF("850id=%04x ", id1);
-
- id2 = id1 >> 8;
- if (id2 == 0x10 || id2 == 0x12 || id2 == 0x14) {
- up->port.type = PORT_16850;
- return;
- }
-
- /*
- * It wasn't an XR16C850.
- *
- * We distinguish between the '654 and the '650 by counting
- * how many bytes are in the FIFO. I'm using this for now,
- * since that's the technique that was sent to me in the
- * serial driver update, but I'm not convinced this works.
- * I've had problems doing this in the past. -TYT
- */
- if (size_fifo(up) == 64)
- up->port.type = PORT_16654;
- else
- up->port.type = PORT_16650V2;
-}
-
-/*
- * We detected a chip without a FIFO. Only two fall into
- * this category - the original 8250 and the 16450. The
- * 16450 has a scratch register (accessible with LCR=0)
- */
-static void autoconfig_8250(struct uart_8250_port *up)
-{
- unsigned char scratch, status1, status2;
-
- up->port.type = PORT_8250;
-
- scratch = serial_in(up, UART_SCR);
- serial_out(up, UART_SCR, 0xa5);
- status1 = serial_in(up, UART_SCR);
- serial_out(up, UART_SCR, 0x5a);
- status2 = serial_in(up, UART_SCR);
- serial_out(up, UART_SCR, scratch);
-
- if (status1 == 0xa5 && status2 == 0x5a)
- up->port.type = PORT_16450;
-}
-
-static int broken_efr(struct uart_8250_port *up)
-{
- /*
- * Exar ST16C2550 "A2" devices incorrectly detect as
- * having an EFR, and report an ID of 0x0201. See
- * http://linux.derkeiler.com/Mailing-Lists/Kernel/2004-11/4812.html
- */
- if (autoconfig_read_divisor_id(up) == 0x0201 && size_fifo(up) == 16)
- return 1;
-
- return 0;
-}
-
-static inline int ns16550a_goto_highspeed(struct uart_8250_port *up)
-{
- unsigned char status;
-
- status = serial_in(up, 0x04); /* EXCR2 */
-#define PRESL(x) ((x) & 0x30)
- if (PRESL(status) == 0x10) {
- /* already in high speed mode */
- return 0;
- } else {
- status &= ~0xB0; /* Disable LOCK, mask out PRESL[01] */
- status |= 0x10; /* 1.625 divisor for baud_base --> 921600 */
- serial_out(up, 0x04, status);
- }
- return 1;
-}
-
-/*
- * We know that the chip has FIFOs. Does it have an EFR? The
- * EFR is located in the same register position as the IIR and
- * we know the top two bits of the IIR are currently set. The
- * EFR should contain zero. Try to read the EFR.
- */
-static void autoconfig_16550a(struct uart_8250_port *up)
-{
- unsigned char status1, status2;
- unsigned int iersave;
-
- up->port.type = PORT_16550A;
- up->capabilities |= UART_CAP_FIFO;
-
- /*
- * XR17V35x UARTs have an extra divisor register, DLD
- * that gets enabled when DLAB is set, which will
- * cause the device to incorrectly match and assign
- * port type to PORT_16650. The EFR for this UART is
- * found at offset 0x09. Instead check the Device ID (DVID)
- * register for a 2, 4 or 8 port UART.
- */
- if (up->port.flags & UPF_EXAR_EFR) {
- status1 = serial_in(up, UART_EXAR_DVID);
- if (status1 == 0x82 || status1 == 0x84 || status1 == 0x88) {
- DEBUG_AUTOCONF("Exar XR17V35x ");
- up->port.type = PORT_XR17V35X;
- up->capabilities |= UART_CAP_AFE | UART_CAP_EFR |
- UART_CAP_SLEEP;
-
- return;
- }
-
- }
-
- /*
- * Check for presence of the EFR when DLAB is set.
- * Only ST16C650V1 UARTs pass this test.
- */
- serial_out(up, UART_LCR, UART_LCR_CONF_MODE_A);
- if (serial_in(up, UART_EFR) == 0) {
- serial_out(up, UART_EFR, 0xA8);
- if (serial_in(up, UART_EFR) != 0) {
- DEBUG_AUTOCONF("EFRv1 ");
- up->port.type = PORT_16650;
- up->capabilities |= UART_CAP_EFR | UART_CAP_SLEEP;
- } else {
- DEBUG_AUTOCONF("Motorola 8xxx DUART ");
- }
- serial_out(up, UART_EFR, 0);
- return;
- }
-
- /*
- * Maybe it requires 0xbf to be written to the LCR.
- * (other ST16C650V2 UARTs, TI16C752A, etc)
- */
- serial_out(up, UART_LCR, UART_LCR_CONF_MODE_B);
- if (serial_in(up, UART_EFR) == 0 && !broken_efr(up)) {
- DEBUG_AUTOCONF("EFRv2 ");
- autoconfig_has_efr(up);
- return;
- }
-
- /*
- * Check for a National Semiconductor SuperIO chip.
- * Attempt to switch to bank 2, read the value of the LOOP bit
- * from EXCR1. Switch back to bank 0, change it in MCR. Then
- * switch back to bank 2, read it from EXCR1 again and check
- * it's changed. If so, set baud_base in EXCR2 to 921600. -- dwmw2
- */
- serial_out(up, UART_LCR, 0);
- status1 = serial_in(up, UART_MCR);
- serial_out(up, UART_LCR, 0xE0);
- status2 = serial_in(up, 0x02); /* EXCR1 */
-
- if (!((status2 ^ status1) & UART_MCR_LOOP)) {
- serial_out(up, UART_LCR, 0);
- serial_out(up, UART_MCR, status1 ^ UART_MCR_LOOP);
- serial_out(up, UART_LCR, 0xE0);
- status2 = serial_in(up, 0x02); /* EXCR1 */
- serial_out(up, UART_LCR, 0);
- serial_out(up, UART_MCR, status1);
-
- if ((status2 ^ status1) & UART_MCR_LOOP) {
- unsigned short quot;
-
- serial_out(up, UART_LCR, 0xE0);
-
- quot = serial_dl_read(up);
- quot <<= 3;
-
- if (ns16550a_goto_highspeed(up))
- serial_dl_write(up, quot);
-
- serial_out(up, UART_LCR, 0);
-
- up->port.uartclk = 921600*16;
- up->port.type = PORT_NS16550A;
- up->capabilities |= UART_NATSEMI;
- return;
- }
- }
-
- /*
- * No EFR. Try to detect a TI16750, which only sets bit 5 of
- * the IIR when 64 byte FIFO mode is enabled when DLAB is set.
- * Try setting it with and without DLAB set. Cheap clones
- * set bit 5 without DLAB set.
- */
- serial_out(up, UART_LCR, 0);
- serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO | UART_FCR7_64BYTE);
- status1 = serial_in(up, UART_IIR) >> 5;
- serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO);
- serial_out(up, UART_LCR, UART_LCR_CONF_MODE_A);
- serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO | UART_FCR7_64BYTE);
- status2 = serial_in(up, UART_IIR) >> 5;
- serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO);
- serial_out(up, UART_LCR, 0);
-
- DEBUG_AUTOCONF("iir1=%d iir2=%d ", status1, status2);
-
- if (status1 == 6 && status2 == 7) {
- up->port.type = PORT_16750;
- up->capabilities |= UART_CAP_AFE | UART_CAP_SLEEP;
- return;
- }
-
- /*
- * Try writing and reading the UART_IER_UUE bit (b6).
- * If it works, this is probably one of the Xscale platform's
- * internal UARTs.
- * We're going to explicitly set the UUE bit to 0 before
- * trying to write and read a 1 just to make sure it's not
- * already a 1 and maybe locked there before we even start.
- */
- iersave = serial_in(up, UART_IER);
- serial_out(up, UART_IER, iersave & ~UART_IER_UUE);
- if (!(serial_in(up, UART_IER) & UART_IER_UUE)) {
- /*
- * OK it's in a known zero state, try writing and reading
- * without disturbing the current state of the other bits.
- */
- serial_out(up, UART_IER, iersave | UART_IER_UUE);
- if (serial_in(up, UART_IER) & UART_IER_UUE) {
- /*
- * It's an Xscale.
- * We'll leave the UART_IER_UUE bit set to 1 (enabled).
- */
- DEBUG_AUTOCONF("Xscale ");
- up->port.type = PORT_XSCALE;
- up->capabilities |= UART_CAP_UUE | UART_CAP_RTOIE;
- return;
- }
- } else {
- /*
- * If we got here we couldn't force the IER_UUE bit to 0.
- * Log it and continue.
- */
- DEBUG_AUTOCONF("Couldn't force IER_UUE to 0 ");
- }
- serial_out(up, UART_IER, iersave);
-
- /*
- * Exar uarts have EFR in a weird location
- */
- if (up->port.flags & UPF_EXAR_EFR) {
- DEBUG_AUTOCONF("Exar XR17D15x ");
- up->port.type = PORT_XR17D15X;
- up->capabilities |= UART_CAP_AFE | UART_CAP_EFR |
- UART_CAP_SLEEP;
-
- return;
- }
-
- /*
- * We distinguish between 16550A and U6 16550A by counting
- * how many bytes are in the FIFO.
- */
- if (up->port.type == PORT_16550A && size_fifo(up) == 64) {
- up->port.type = PORT_U6_16550A;
- up->capabilities |= UART_CAP_AFE;
- }
-}
-
-/*
- * This routine is called by rs_init() to initialize a specific serial
- * port. It determines what type of UART chip this serial port is
- * using: 8250, 16450, 16550, 16550A. The important question is
- * whether or not this UART is a 16550A, since this will
- * determine whether we can use its FIFO features.
- */
-static void autoconfig(struct uart_8250_port *up, unsigned int probeflags)
-{
- unsigned char status1, scratch, scratch2, scratch3;
- unsigned char save_lcr, save_mcr;
- struct uart_port *port = &up->port;
- unsigned long flags;
- unsigned int old_capabilities;
-
- if (!port->iobase && !port->mapbase && !port->membase)
- return;
-
- DEBUG_AUTOCONF("ttyS%d: autoconf (0x%04lx, 0x%p): ",
- serial_index(port), port->iobase, port->membase);
-
- /*
- * We really do need global IRQs disabled here - we're going to
- * be frobbing the chips IRQ enable register to see if it exists.
- */
- spin_lock_irqsave(&port->lock, flags);
-
- up->capabilities = 0;
- up->bugs = 0;
-
- if (!(port->flags & UPF_BUGGY_UART)) {
- /*
- * Do a simple existence test first; if we fail this,
- * there's no point trying anything else.
- *
- * 0x80 is used as a nonsense port to prevent against
- * false positives due to ISA bus float. The
- * assumption is that 0x80 is a non-existent port;
- * which should be safe since include/asm/io.h also
- * makes this assumption.
- *
- * Note: this is safe as long as MCR bit 4 is clear
- * and the device is in "PC" mode.
- */
- scratch = serial_in(up, UART_IER);
- serial_out(up, UART_IER, 0);
-#ifdef __i386__
- outb(0xff, 0x080);
-#endif
- /*
- * Mask out IER[7:4] bits for test as some UARTs (e.g. TL
- * 16C754B) allow only to modify them if an EFR bit is set.
- */
- scratch2 = serial_in(up, UART_IER) & 0x0f;
- serial_out(up, UART_IER, 0x0F);
-#ifdef __i386__
- outb(0, 0x080);
-#endif
- scratch3 = serial_in(up, UART_IER) & 0x0f;
- serial_out(up, UART_IER, scratch);
- if (scratch2 != 0 || scratch3 != 0x0F) {
- /*
- * We failed; there's nothing here
- */
- spin_unlock_irqrestore(&port->lock, flags);
- DEBUG_AUTOCONF("IER test failed (%02x, %02x) ",
- scratch2, scratch3);
- goto out;
- }
- }
-
- save_mcr = serial_in(up, UART_MCR);
- save_lcr = serial_in(up, UART_LCR);
-
- /*
- * Check to see if a UART is really there. Certain broken
- * internal modems based on the Rockwell chipset fail this
- * test, because they apparently don't implement the loopback
- * test mode. So this test is skipped on the COM 1 through
- * COM 4 ports. This *should* be safe, since no board
- * manufacturer would be stupid enough to design a board
- * that conflicts with COM 1-4 --- we hope!
- */
- if (!(port->flags & UPF_SKIP_TEST)) {
- serial_out(up, UART_MCR, UART_MCR_LOOP | 0x0A);
- status1 = serial_in(up, UART_MSR) & 0xF0;
- serial_out(up, UART_MCR, save_mcr);
- if (status1 != 0x90) {
- spin_unlock_irqrestore(&port->lock, flags);
- DEBUG_AUTOCONF("LOOP test failed (%02x) ",
- status1);
- goto out;
- }
- }
-
- /*
- * We're pretty sure there's a port here. Let's find out what
- * type of port it is. The IIR top two bits allows us to find
- * out if it's 8250 or 16450, 16550, 16550A or later. This
- * determines what we test for next.
- *
- * We also initialise the EFR (if any) to zero for later. The
- * EFR occupies the same register location as the FCR and IIR.
- */
- serial_out(up, UART_LCR, UART_LCR_CONF_MODE_B);
- serial_out(up, UART_EFR, 0);
- serial_out(up, UART_LCR, 0);
-
- serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO);
- scratch = serial_in(up, UART_IIR) >> 6;
-
- switch (scratch) {
- case 0:
- autoconfig_8250(up);
- break;
- case 1:
- port->type = PORT_UNKNOWN;
- break;
- case 2:
- port->type = PORT_16550;
- break;
- case 3:
- autoconfig_16550a(up);
- break;
- }
-
-#ifdef CONFIG_SERIAL_8250_RSA
- /*
- * Only probe for RSA ports if we got the region.
- */
- if (port->type == PORT_16550A && probeflags & PROBE_RSA) {
- int i;
-
- for (i = 0 ; i < probe_rsa_count; ++i) {
- if (probe_rsa[i] == port->iobase && __enable_rsa(up)) {
- port->type = PORT_RSA;
- break;
- }
- }
- }
-#endif
-
- serial_out(up, UART_LCR, save_lcr);
-
- port->fifosize = uart_config[up->port.type].fifo_size;
- old_capabilities = up->capabilities;
- up->capabilities = uart_config[port->type].flags;
- up->tx_loadsz = uart_config[port->type].tx_loadsz;
-
- if (port->type == PORT_UNKNOWN)
- goto out_lock;
-
- /*
- * Reset the UART.
- */
-#ifdef CONFIG_SERIAL_8250_RSA
- if (port->type == PORT_RSA)
- serial_out(up, UART_RSA_FRR, 0);
-#endif
- serial_out(up, UART_MCR, save_mcr);
- serial8250_clear_fifos(up);
- serial_in(up, UART_RX);
- if (up->capabilities & UART_CAP_UUE)
- serial_out(up, UART_IER, UART_IER_UUE);
- else
- serial_out(up, UART_IER, 0);
-
-out_lock:
- spin_unlock_irqrestore(&port->lock, flags);
- if (up->capabilities != old_capabilities) {
- printk(KERN_WARNING
- "ttyS%d: detected caps %08x should be %08x\n",
- serial_index(port), old_capabilities,
- up->capabilities);
- }
-out:
- DEBUG_AUTOCONF("iir=%d ", scratch);
- DEBUG_AUTOCONF("type=%s\n", uart_config[port->type].name);
-}
-
-static void autoconfig_irq(struct uart_8250_port *up)
-{
- struct uart_port *port = &up->port;
- unsigned char save_mcr, save_ier;
- unsigned char save_ICP = 0;
- unsigned int ICP = 0;
- unsigned long irqs;
- int irq;
-
- if (port->flags & UPF_FOURPORT) {
- ICP = (port->iobase & 0xfe0) | 0x1f;
- save_ICP = inb_p(ICP);
- outb_p(0x80, ICP);
- inb_p(ICP);
- }
-
- /* forget possible initially masked and pending IRQ */
- probe_irq_off(probe_irq_on());
- save_mcr = serial_in(up, UART_MCR);
- save_ier = serial_in(up, UART_IER);
- serial_out(up, UART_MCR, UART_MCR_OUT1 | UART_MCR_OUT2);
-
- irqs = probe_irq_on();
- serial_out(up, UART_MCR, 0);
- udelay(10);
- if (port->flags & UPF_FOURPORT) {
- serial_out(up, UART_MCR,
- UART_MCR_DTR | UART_MCR_RTS);
- } else {
- serial_out(up, UART_MCR,
- UART_MCR_DTR | UART_MCR_RTS | UART_MCR_OUT2);
- }
- serial_out(up, UART_IER, 0x0f); /* enable all intrs */
- serial_in(up, UART_LSR);
- serial_in(up, UART_RX);
- serial_in(up, UART_IIR);
- serial_in(up, UART_MSR);
- serial_out(up, UART_TX, 0xFF);
- udelay(20);
- irq = probe_irq_off(irqs);
-
- serial_out(up, UART_MCR, save_mcr);
- serial_out(up, UART_IER, save_ier);
-
- if (port->flags & UPF_FOURPORT)
- outb_p(save_ICP, ICP);
-
- port->irq = (irq > 0) ? irq : 0;
-}
-
-static inline void __stop_tx(struct uart_8250_port *p)
-{
- if (p->ier & UART_IER_THRI) {
- p->ier &= ~UART_IER_THRI;
- serial_out(p, UART_IER, p->ier);
- }
-}
-
-static void serial8250_stop_tx(struct uart_port *port)
-{
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
-
- __stop_tx(up);
-
- /*
- * We really want to stop the transmitter from sending.
- */
- if (port->type == PORT_16C950) {
- up->acr |= UART_ACR_TXDIS;
- serial_icr_write(up, UART_ACR, up->acr);
- }
-}
-
-static void serial8250_start_tx(struct uart_port *port)
-{
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
-
- if (up->dma && !serial8250_tx_dma(up)) {
- return;
- } else if (!(up->ier & UART_IER_THRI)) {
- up->ier |= UART_IER_THRI;
- serial_port_out(port, UART_IER, up->ier);
-
- if (up->bugs & UART_BUG_TXEN) {
- unsigned char lsr;
- lsr = serial_in(up, UART_LSR);
- up->lsr_saved_flags |= lsr & LSR_SAVE_FLAGS;
- if (lsr & UART_LSR_TEMT)
- serial8250_tx_chars(up);
- }
- }
-
- /*
- * Re-enable the transmitter if we disabled it.
- */
- if (port->type == PORT_16C950 && up->acr & UART_ACR_TXDIS) {
- up->acr &= ~UART_ACR_TXDIS;
- serial_icr_write(up, UART_ACR, up->acr);
- }
-}
-
-static void serial8250_stop_rx(struct uart_port *port)
-{
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
-
- up->ier &= ~UART_IER_RLSI;
- up->port.read_status_mask &= ~UART_LSR_DR;
- serial_port_out(port, UART_IER, up->ier);
-}
-
-static void serial8250_enable_ms(struct uart_port *port)
-{
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
-
- /* no MSR capabilities */
- if (up->bugs & UART_BUG_NOMSR)
- return;
-
- up->ier |= UART_IER_MSI;
- serial_port_out(port, UART_IER, up->ier);
-}
-
-/*
- * serial8250_rx_chars: processes according to the passed in LSR
- * value, and returns the remaining LSR bits not handled
- * by this Rx routine.
- */
-unsigned char
-serial8250_rx_chars(struct uart_8250_port *up, unsigned char lsr)
-{
- struct uart_port *port = &up->port;
- unsigned char ch;
- int max_count = 256;
- char flag;
-
- do {
- if (likely(lsr & UART_LSR_DR))
- ch = serial_in(up, UART_RX);
- else
- /*
- * Intel 82571 has a Serial Over Lan device that will
- * set UART_LSR_BI without setting UART_LSR_DR when
- * it receives a break. To avoid reading from the
- * receive buffer without UART_LSR_DR bit set, we
- * just force the read character to be 0
- */
- ch = 0;
-
- flag = TTY_NORMAL;
- port->icount.rx++;
-
- lsr |= up->lsr_saved_flags;
- up->lsr_saved_flags = 0;
-
- if (unlikely(lsr & UART_LSR_BRK_ERROR_BITS)) {
- if (lsr & UART_LSR_BI) {
- lsr &= ~(UART_LSR_FE | UART_LSR_PE);
- port->icount.brk++;
- /*
- * We do the SysRQ and SAK checking
- * here because otherwise the break
- * may get masked by ignore_status_mask
- * or read_status_mask.
- */
- if (uart_handle_break(port))
- goto ignore_char;
- } else if (lsr & UART_LSR_PE)
- port->icount.parity++;
- else if (lsr & UART_LSR_FE)
- port->icount.frame++;
- if (lsr & UART_LSR_OE)
- port->icount.overrun++;
-
- /*
- * Mask off conditions which should be ignored.
- */
- lsr &= port->read_status_mask;
-
- if (lsr & UART_LSR_BI) {
- DEBUG_INTR("handling break....");
- flag = TTY_BREAK;
- } else if (lsr & UART_LSR_PE)
- flag = TTY_PARITY;
- else if (lsr & UART_LSR_FE)
- flag = TTY_FRAME;
- }
- if (uart_handle_sysrq_char(port, ch))
- goto ignore_char;
-
- uart_insert_char(port, lsr, UART_LSR_OE, ch, flag);
-
-ignore_char:
- lsr = serial_in(up, UART_LSR);
- } while ((lsr & (UART_LSR_DR | UART_LSR_BI)) && (max_count-- > 0));
- spin_unlock(&port->lock);
- tty_flip_buffer_push(&port->state->port);
- spin_lock(&port->lock);
- return lsr;
-}
-EXPORT_SYMBOL_GPL(serial8250_rx_chars);
-
-void serial8250_tx_chars(struct uart_8250_port *up)
-{
- struct uart_port *port = &up->port;
- struct circ_buf *xmit = &port->state->xmit;
- int count;
-
- if (port->x_char) {
- serial_out(up, UART_TX, port->x_char);
- port->icount.tx++;
- port->x_char = 0;
- return;
- }
- if (uart_tx_stopped(port)) {
- serial8250_stop_tx(port);
- return;
- }
- if (uart_circ_empty(xmit)) {
- __stop_tx(up);
- return;
- }
-
- count = up->tx_loadsz;
- do {
- serial_out(up, UART_TX, xmit->buf[xmit->tail]);
- xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1);
- port->icount.tx++;
- if (uart_circ_empty(xmit))
- break;
- if (up->capabilities & UART_CAP_HFIFO) {
- if ((serial_port_in(port, UART_LSR) & BOTH_EMPTY) !=
- BOTH_EMPTY)
- break;
- }
- } while (--count > 0);
-
- if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
- uart_write_wakeup(port);
-
- DEBUG_INTR("THRE...");
-
- if (uart_circ_empty(xmit))
- __stop_tx(up);
-}
-EXPORT_SYMBOL_GPL(serial8250_tx_chars);
-
-unsigned int serial8250_modem_status(struct uart_8250_port *up)
-{
- struct uart_port *port = &up->port;
- unsigned int status = serial_in(up, UART_MSR);
-
- status |= up->msr_saved_flags;
- up->msr_saved_flags = 0;
- if (status & UART_MSR_ANY_DELTA && up->ier & UART_IER_MSI &&
- port->state != NULL) {
- if (status & UART_MSR_TERI)
- port->icount.rng++;
- if (status & UART_MSR_DDSR)
- port->icount.dsr++;
- if (status & UART_MSR_DDCD)
- uart_handle_dcd_change(port, status & UART_MSR_DCD);
- if (status & UART_MSR_DCTS)
- uart_handle_cts_change(port, status & UART_MSR_CTS);
-
- wake_up_interruptible(&port->state->port.delta_msr_wait);
- }
-
- return status;
-}
-EXPORT_SYMBOL_GPL(serial8250_modem_status);
-
-/*
- * This handles the interrupt from one port.
- */
-int serial8250_handle_irq(struct uart_port *port, unsigned int iir)
-{
- unsigned char status;
- unsigned long flags;
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
- int dma_err = 0;
-
- if (iir & UART_IIR_NO_INT)
- return 0;
-
- spin_lock_irqsave(&port->lock, flags);
-
- status = serial_port_in(port, UART_LSR);
-
- DEBUG_INTR("status = %x...", status);
-
- if (status & (UART_LSR_DR | UART_LSR_BI)) {
- if (up->dma)
- dma_err = serial8250_rx_dma(up, iir);
-
- if (!up->dma || dma_err)
- status = serial8250_rx_chars(up, status);
- }
- serial8250_modem_status(up);
- if (status & UART_LSR_THRE)
- serial8250_tx_chars(up);
-
- spin_unlock_irqrestore(&port->lock, flags);
- return 1;
-}
-EXPORT_SYMBOL_GPL(serial8250_handle_irq);
-
-static int serial8250_default_handle_irq(struct uart_port *port)
-{
- unsigned int iir = serial_port_in(port, UART_IIR);
-
- return serial8250_handle_irq(port, iir);
-}
-
-/*
- * These Exar UARTs have an extra interrupt indicator that could
- * fire for a few unimplemented interrupts. One of which is a
- * wakeup event when coming out of sleep. Put this here just
- * to be on the safe side that these interrupts don't go unhandled.
- */
-static int exar_handle_irq(struct uart_port *port)
-{
- unsigned char int0, int1, int2, int3;
- unsigned int iir = serial_port_in(port, UART_IIR);
- int ret;
-
- ret = serial8250_handle_irq(port, iir);
-
- if ((port->type == PORT_XR17V35X) ||
- (port->type == PORT_XR17D15X)) {
- int0 = serial_port_in(port, 0x80);
- int1 = serial_port_in(port, 0x81);
- int2 = serial_port_in(port, 0x82);
- int3 = serial_port_in(port, 0x83);
- }
-
- return ret;
-}
-
-/*
- * This is the serial driver's interrupt routine.
- *
- * Arjan thinks the old way was overly complex, so it got simplified.
- * Alan disagrees, saying that we need the complexity to handle the weird
- * nature of ISA shared interrupts. (This is a special exception.)
- *
- * In order to handle ISA shared interrupts properly, we need to check
- * that all ports have been serviced, and therefore the ISA interrupt
- * line has been de-asserted.
- *
- * This means we need to loop through all ports, checking that they
- * don't have an interrupt pending.
- */
-static irqreturn_t serial8250_interrupt(int irq, void *dev_id)
-{
- struct irq_info *i = dev_id;
- struct list_head *l, *end = NULL;
- int pass_counter = 0, handled = 0;
-
- DEBUG_INTR("serial8250_interrupt(%d)...", irq);
-
- spin_lock(&i->lock);
-
- l = i->head;
- do {
- struct uart_8250_port *up;
- struct uart_port *port;
-
- up = list_entry(l, struct uart_8250_port, list);
- port = &up->port;
-
- if (port->handle_irq(port)) {
- handled = 1;
- end = NULL;
- } else if (end == NULL)
- end = l;
-
- l = l->next;
-
- if (l == i->head && pass_counter++ > PASS_LIMIT) {
- /* If we hit this, we're dead. */
- printk_ratelimited(KERN_ERR
- "serial8250: too much work for irq%d\n", irq);
- break;
- }
- } while (l != end);
-
- spin_unlock(&i->lock);
-
- DEBUG_INTR("end.\n");
-
- return IRQ_RETVAL(handled);
-}
-
-/*
- * To support ISA shared interrupts, we need to have one interrupt
- * handler that ensures that the IRQ line has been deasserted
- * before returning. Failing to do this will result in the IRQ
- * line being stuck active, and, since ISA irqs are edge triggered,
- * no more IRQs will be seen.
- */
-static void serial_do_unlink(struct irq_info *i, struct uart_8250_port *up)
-{
- spin_lock_irq(&i->lock);
-
- if (!list_empty(i->head)) {
- if (i->head == &up->list)
- i->head = i->head->next;
- list_del(&up->list);
- } else {
- BUG_ON(i->head != &up->list);
- i->head = NULL;
- }
- spin_unlock_irq(&i->lock);
- /* List empty so throw away the hash node */
- if (i->head == NULL) {
- hlist_del(&i->node);
- kfree(i);
- }
-}
-
-static int serial_link_irq_chain(struct uart_8250_port *up)
-{
- struct hlist_head *h;
- struct hlist_node *n;
- struct irq_info *i;
- int ret, irq_flags = up->port.flags & UPF_SHARE_IRQ ? IRQF_SHARED : 0;
-
- mutex_lock(&hash_mutex);
-
- h = &irq_lists[up->port.irq % NR_IRQ_HASH];
-
- hlist_for_each(n, h) {
- i = hlist_entry(n, struct irq_info, node);
- if (i->irq == up->port.irq)
- break;
- }
-
- if (n == NULL) {
- i = kzalloc(sizeof(struct irq_info), GFP_KERNEL);
- if (i == NULL) {
- mutex_unlock(&hash_mutex);
- return -ENOMEM;
- }
- spin_lock_init(&i->lock);
- i->irq = up->port.irq;
- hlist_add_head(&i->node, h);
- }
- mutex_unlock(&hash_mutex);
-
- spin_lock_irq(&i->lock);
-
- if (i->head) {
- list_add(&up->list, i->head);
- spin_unlock_irq(&i->lock);
-
- ret = 0;
- } else {
- INIT_LIST_HEAD(&up->list);
- i->head = &up->list;
- spin_unlock_irq(&i->lock);
- irq_flags |= up->port.irqflags;
- ret = request_irq(up->port.irq, serial8250_interrupt,
- irq_flags, "serial", i);
- if (ret < 0)
- serial_do_unlink(i, up);
- }
-
- return ret;
-}
-
-static void serial_unlink_irq_chain(struct uart_8250_port *up)
-{
- struct irq_info *i;
- struct hlist_node *n;
- struct hlist_head *h;
-
- mutex_lock(&hash_mutex);
-
- h = &irq_lists[up->port.irq % NR_IRQ_HASH];
-
- hlist_for_each(n, h) {
- i = hlist_entry(n, struct irq_info, node);
- if (i->irq == up->port.irq)
- break;
- }
-
- BUG_ON(n == NULL);
- BUG_ON(i->head == NULL);
-
- if (list_empty(i->head))
- free_irq(up->port.irq, i);
-
- serial_do_unlink(i, up);
- mutex_unlock(&hash_mutex);
-}
-
-/*
- * This function is used to handle ports that do not have an
- * interrupt. This doesn't work very well for 16450s, but gives
- * barely passable results for a 16550A (although at the expense
- * of much CPU overhead).
- */
-static void serial8250_timeout(unsigned long data)
-{
- struct uart_8250_port *up = (struct uart_8250_port *)data;
-
- up->port.handle_irq(&up->port);
- mod_timer(&up->timer, jiffies + uart_poll_timeout(&up->port));
-}
-
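-/*
- * Backup timer for UARTs with broken THRE interrupts: with the port's
- * interrupts masked, peek at IIR/LSR and fake a THRE indication when
- * the transmitter is empty but data is still queued, so the normal
- * TX path keeps running.
- */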
-static void serial8250_backup_timeout(unsigned long data)
-{
- struct uart_8250_port *up = (struct uart_8250_port *)data;
- unsigned int iir, ier = 0, lsr;
- unsigned long flags;
-
- spin_lock_irqsave(&up->port.lock, flags);
-
- /*
- * Must disable interrupts or else we risk racing with the
- * interrupt-based handler.
- */
- if (up->port.irq) {
- ier = serial_in(up, UART_IER);
- serial_out(up, UART_IER, 0);
- }
-
- iir = serial_in(up, UART_IIR);
-
- /*
- * This should be a safe test for anyone who doesn't trust the
- * IIR bits on their UART, but it's specifically designed for
- * the "Diva" UART used on the management processor on many HP
- * ia64 and parisc boxes.
- */
- lsr = serial_in(up, UART_LSR);
- up->lsr_saved_flags |= lsr & LSR_SAVE_FLAGS;
- if ((iir & UART_IIR_NO_INT) && (up->ier & UART_IER_THRI) &&
- (!uart_circ_empty(&up->port.state->xmit) || up->port.x_char) &&
- (lsr & UART_LSR_THRE)) {
- iir &= ~(UART_IIR_ID | UART_IIR_NO_INT);
- iir |= UART_IIR_THRI;
- }
-
- if (!(iir & UART_IIR_NO_INT))
- serial8250_tx_chars(up);
-
- if (up->port.irq)
- serial_out(up, UART_IER, ier);
-
- spin_unlock_irqrestore(&up->port.lock, flags);
-
- /* Standard timer interval plus 0.2s to keep the port running */
- mod_timer(&up->timer,
- jiffies + uart_poll_timeout(&up->port) + HZ / 5);
-}
-
-static unsigned int serial8250_tx_empty(struct uart_port *port)
-{
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
- unsigned long flags;
- unsigned int lsr;
-
- spin_lock_irqsave(&port->lock, flags);
- lsr = serial_port_in(port, UART_LSR);
- up->lsr_saved_flags |= lsr & LSR_SAVE_FLAGS;
- spin_unlock_irqrestore(&port->lock, flags);
-
- return (lsr & BOTH_EMPTY) == BOTH_EMPTY ? TIOCSER_TEMT : 0;
-}
-
-static unsigned int serial8250_get_mctrl(struct uart_port *port)
-{
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
- unsigned int status;
- unsigned int ret;
-
- status = serial8250_modem_status(up);
-
- ret = 0;
- if (status & UART_MSR_DCD)
- ret |= TIOCM_CAR;
- if (status & UART_MSR_RI)
- ret |= TIOCM_RNG;
- if (status & UART_MSR_DSR)
- ret |= TIOCM_DSR;
- if (status & UART_MSR_CTS)
- ret |= TIOCM_CTS;
- return ret;
-}
-
-static void serial8250_set_mctrl(struct uart_port *port, unsigned int mctrl)
-{
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
- unsigned char mcr = 0;
-
- if (mctrl & TIOCM_RTS)
- mcr |= UART_MCR_RTS;
- if (mctrl & TIOCM_DTR)
- mcr |= UART_MCR_DTR;
- if (mctrl & TIOCM_OUT1)
- mcr |= UART_MCR_OUT1;
- if (mctrl & TIOCM_OUT2)
- mcr |= UART_MCR_OUT2;
- if (mctrl & TIOCM_LOOP)
- mcr |= UART_MCR_LOOP;
-
- mcr = (mcr & up->mcr_mask) | up->mcr_force | up->mcr;
-
- serial_port_out(port, UART_MCR, mcr);
-}
-
-static void serial8250_break_ctl(struct uart_port *port, int break_state)
-{
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
- unsigned long flags;
-
- spin_lock_irqsave(&port->lock, flags);
- if (break_state == -1)
- up->lcr |= UART_LCR_SBC;
- else
- up->lcr &= ~UART_LCR_SBC;
- serial_port_out(port, UART_LCR, up->lcr);
- spin_unlock_irqrestore(&port->lock, flags);
-}
-
-/*
- * Wait for transmitter & holding register to empty
- */
-static void wait_for_xmitr(struct uart_8250_port *up, int bits)
-{
- unsigned int status, tmout = 10000;
-
- /* Wait up to 10ms for the character(s) to be sent. */
- for (;;) {
- status = serial_in(up, UART_LSR);
-
- up->lsr_saved_flags |= status & LSR_SAVE_FLAGS;
-
- if ((status & bits) == bits)
- break;
- if (--tmout == 0)
- break;
- udelay(1);
- }
-
- /* Wait up to 1s for flow control if necessary */
- if (up->port.flags & UPF_CONS_FLOW) {
- unsigned int tmout;
- for (tmout = 1000000; tmout; tmout--) {
- unsigned int msr = serial_in(up, UART_MSR);
- up->msr_saved_flags |= msr & MSR_SAVE_FLAGS;
- if (msr & UART_MSR_CTS)
- break;
- udelay(1);
- touch_nmi_watchdog();
- }
- }
-}
-
-#ifdef CONFIG_CONSOLE_POLL
-/*
- * Console polling routines for writing and reading from the uart while
- * in an interrupt or debug context.
- */
-
-static int serial8250_get_poll_char(struct uart_port *port)
-{
- unsigned char lsr = serial_port_in(port, UART_LSR);
-
- if (!(lsr & UART_LSR_DR))
- return NO_POLL_CHAR;
-
- return serial_port_in(port, UART_RX);
-}
-
-
-static void serial8250_put_poll_char(struct uart_port *port,
- unsigned char c)
-{
- unsigned int ier;
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
-
- /*
- * First save the IER, then disable the interrupts
- */
- ier = serial_port_in(port, UART_IER);
- if (up->capabilities & UART_CAP_UUE)
- serial_port_out(port, UART_IER, UART_IER_UUE);
- else
- serial_port_out(port, UART_IER, 0);
-
- wait_for_xmitr(up, BOTH_EMPTY);
- /*
- * Send the character out.
- * If it's an LF, also send a CR...
- */
- serial_port_out(port, UART_TX, c);
- if (c == 10) {
- wait_for_xmitr(up, BOTH_EMPTY);
- serial_port_out(port, UART_TX, 13);
- }
-
- /*
- * Finally, wait for transmitter to become empty
- * and restore the IER
- */
- wait_for_xmitr(up, BOTH_EMPTY);
- serial_port_out(port, UART_IER, ier);
-}
-
-#endif /* CONFIG_CONSOLE_POLL */
-
-static int serial8250_startup(struct uart_port *port)
-{
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
- unsigned long flags;
- unsigned char lsr, iir;
- int retval;
-
- if (port->type == PORT_8250_CIR)
- return -ENODEV;
-
- if (!port->fifosize)
- port->fifosize = uart_config[port->type].fifo_size;
- if (!up->tx_loadsz)
- up->tx_loadsz = uart_config[port->type].tx_loadsz;
- if (!up->capabilities)
- up->capabilities = uart_config[port->type].flags;
- up->mcr = 0;
-
- if (port->iotype != up->cur_iotype)
- set_io_from_upio(port);
-
- if (port->type == PORT_16C950) {
- /* Wake up and initialize UART */
- up->acr = 0;
- serial_port_out(port, UART_LCR, UART_LCR_CONF_MODE_B);
- serial_port_out(port, UART_EFR, UART_EFR_ECB);
- serial_port_out(port, UART_IER, 0);
- serial_port_out(port, UART_LCR, 0);
- serial_icr_write(up, UART_CSR, 0); /* Reset the UART */
- serial_port_out(port, UART_LCR, UART_LCR_CONF_MODE_B);
- serial_port_out(port, UART_EFR, UART_EFR_ECB);
- serial_port_out(port, UART_LCR, 0);
- }
-
-#ifdef CONFIG_SERIAL_8250_RSA
- /*
- * If this is an RSA port, see if we can kick it up to the
- * higher speed clock.
- */
- enable_rsa(up);
-#endif
-
- /*
- * Clear the FIFO buffers and disable them.
- * (they will be reenabled in set_termios())
- */
- serial8250_clear_fifos(up);
-
- /*
- * Clear the interrupt registers.
- */
- serial_port_in(port, UART_LSR);
- serial_port_in(port, UART_RX);
- serial_port_in(port, UART_IIR);
- serial_port_in(port, UART_MSR);
-
- /*
- * At this point, there's no way the LSR could still be 0xff;
- * if it is, then bail out, because there's likely no UART
- * here.
- */
- if (!(port->flags & UPF_BUGGY_UART) &&
- (serial_port_in(port, UART_LSR) == 0xff)) {
- printk_ratelimited(KERN_INFO "ttyS%d: LSR safety check engaged!\n",
- serial_index(port));
- return -ENODEV;
- }
-
- /*
- * For a XR16C850, we need to set the trigger levels
- */
- if (port->type == PORT_16850) {
- unsigned char fctr;
-
- serial_out(up, UART_LCR, UART_LCR_CONF_MODE_B);
-
- fctr = serial_in(up, UART_FCTR) & ~(UART_FCTR_RX|UART_FCTR_TX);
- serial_port_out(port, UART_FCTR,
- fctr | UART_FCTR_TRGD | UART_FCTR_RX);
- serial_port_out(port, UART_TRG, UART_TRG_96);
- serial_port_out(port, UART_FCTR,
- fctr | UART_FCTR_TRGD | UART_FCTR_TX);
- serial_port_out(port, UART_TRG, UART_TRG_96);
-
- serial_port_out(port, UART_LCR, 0);
- }
-
- if (port->irq) {
- unsigned char iir1;
- /*
- * Test for UARTs that do not reassert THRE when the
- * transmitter is idle and the interrupt has already
- * been cleared. Real 16550s should always reassert
- * this interrupt whenever the transmitter is idle and
- * the interrupt is enabled. Delays are necessary to
- * allow register changes to become visible.
- */
- spin_lock_irqsave(&port->lock, flags);
- if (up->port.irqflags & IRQF_SHARED)
- disable_irq_nosync(port->irq);
-
- wait_for_xmitr(up, UART_LSR_THRE);
- serial_port_out_sync(port, UART_IER, UART_IER_THRI);
- udelay(1); /* allow THRE to set */
- iir1 = serial_port_in(port, UART_IIR);
- serial_port_out(port, UART_IER, 0);
- serial_port_out_sync(port, UART_IER, UART_IER_THRI);
- udelay(1); /* allow a working UART time to re-assert THRE */
- iir = serial_port_in(port, UART_IIR);
- serial_port_out(port, UART_IER, 0);
-
- if (port->irqflags & IRQF_SHARED)
- enable_irq(port->irq);
- spin_unlock_irqrestore(&port->lock, flags);
-
- /*
- * If the interrupt is not reasserted, or we otherwise
- * don't trust the iir, set up a timer to kick the UART
- * on a regular basis.
- */
- if ((!(iir1 & UART_IIR_NO_INT) && (iir & UART_IIR_NO_INT)) ||
- up->port.flags & UPF_BUG_THRE) {
- up->bugs |= UART_BUG_THRE;
- pr_debug("ttyS%d - using backup timer\n",
- serial_index(port));
- }
- }
-
- /*
- * The above check will only give an accurate result the first time
- * the port is opened, so this value needs to be preserved.
- */
- if (up->bugs & UART_BUG_THRE) {
- up->timer.function = serial8250_backup_timeout;
- up->timer.data = (unsigned long)up;
- mod_timer(&up->timer, jiffies +
- uart_poll_timeout(port) + HZ / 5);
- }
-
- /*
- * If the "interrupt" for this port doesn't correspond with any
- * hardware interrupt, we use a timer-based system. The original
- * driver used to do this with IRQ0.
- */
- if (!port->irq) {
- up->timer.data = (unsigned long)up;
- mod_timer(&up->timer, jiffies + uart_poll_timeout(port));
- } else {
- retval = serial_link_irq_chain(up);
- if (retval)
- return retval;
- }
-
- /*
- * Now, initialize the UART
- */
- serial_port_out(port, UART_LCR, UART_LCR_WLEN8);
-
- spin_lock_irqsave(&port->lock, flags);
- if (up->port.flags & UPF_FOURPORT) {
- if (!up->port.irq)
- up->port.mctrl |= TIOCM_OUT1;
- } else
- /*
- * Most PC uarts need OUT2 raised to enable interrupts.
- */
- if (port->irq)
- up->port.mctrl |= TIOCM_OUT2;
-
- serial8250_set_mctrl(port, port->mctrl);
-
- /* Serial over LAN (SoL) hack:
-    Intel 8257x Gigabit Ethernet chips have a
-    16550 emulation, to be used for Serial over LAN.
-    Those chips take longer than a normal serial
-    device to signal that transmit data has been
-    queued. Because of that, the test below generally
-    fails. One solution would be to delay reading the
-    IIR; however, that is not reliable, since the
-    timeout is variable. So let's just not test
-    whether we receive a TX irq; this way, we never
-    enable UART_BUG_TXEN.
- */
- if (skip_txen_test || up->port.flags & UPF_NO_TXEN_TEST)
- goto dont_test_tx_en;
-
- /*
- * Do a quick test to see if we receive an
- * interrupt when we enable the TX irq.
- */
- serial_port_out(port, UART_IER, UART_IER_THRI);
- lsr = serial_port_in(port, UART_LSR);
- iir = serial_port_in(port, UART_IIR);
- serial_port_out(port, UART_IER, 0);
-
- if (lsr & UART_LSR_TEMT && iir & UART_IIR_NO_INT) {
- if (!(up->bugs & UART_BUG_TXEN)) {
- up->bugs |= UART_BUG_TXEN;
- pr_debug("ttyS%d - enabling bad tx status workarounds\n",
- serial_index(port));
- }
- } else {
- up->bugs &= ~UART_BUG_TXEN;
- }
-
-dont_test_tx_en:
- spin_unlock_irqrestore(&port->lock, flags);
-
- /*
- * Clear the interrupt registers again for luck, and clear the
- * saved flags to avoid getting false values from polling
- * routines or the previous session.
- */
- serial_port_in(port, UART_LSR);
- serial_port_in(port, UART_RX);
- serial_port_in(port, UART_IIR);
- serial_port_in(port, UART_MSR);
- up->lsr_saved_flags = 0;
- up->msr_saved_flags = 0;
-
- /*
- * Request DMA channels for both RX and TX.
- */
- if (up->dma) {
- retval = serial8250_request_dma(up);
- if (retval) {
- pr_warn_ratelimited("ttyS%d - failed to request DMA\n",
- serial_index(port));
- up->dma = NULL;
- }
- }
-
- /*
- * Finally, enable interrupts. Note: Modem status interrupts
- * are set via set_termios(), which will be occurring imminently
- * anyway, so we don't enable them here.
- */
- up->ier = UART_IER_RLSI | UART_IER_RDI;
- serial_port_out(port, UART_IER, up->ier);
-
- if (port->flags & UPF_FOURPORT) {
- unsigned int icp;
- /*
- * Enable interrupts on the AST Fourport board
- */
- icp = (port->iobase & 0xfe0) | 0x01f;
- outb_p(0x80, icp);
- inb_p(icp);
- }
-
- return 0;
-}
-
-static void serial8250_shutdown(struct uart_port *port)
-{
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
- unsigned long flags;
-
- /*
- * Disable interrupts from this port
- */
- up->ier = 0;
- serial_port_out(port, UART_IER, 0);
-
- if (up->dma)
- serial8250_release_dma(up);
-
- spin_lock_irqsave(&port->lock, flags);
- if (port->flags & UPF_FOURPORT) {
- /* reset interrupts on the AST Fourport board */
- inb((port->iobase & 0xfe0) | 0x1f);
- port->mctrl |= TIOCM_OUT1;
- } else
- port->mctrl &= ~TIOCM_OUT2;
-
- serial8250_set_mctrl(port, port->mctrl);
- spin_unlock_irqrestore(&port->lock, flags);
-
- /*
- * Disable break condition and FIFOs
- */
- serial_port_out(port, UART_LCR,
- serial_port_in(port, UART_LCR) & ~UART_LCR_SBC);
- serial8250_clear_fifos(up);
-
-#ifdef CONFIG_SERIAL_8250_RSA
- /*
- * Reset the RSA board back to 115kbps compat mode.
- */
- disable_rsa(up);
-#endif
-
- /*
- * Read data port to reset things, and then unlink from
- * the IRQ chain.
- */
- serial_port_in(port, UART_RX);
-
- del_timer_sync(&up->timer);
- up->timer.function = serial8250_timeout;
- if (port->irq)
- serial_unlink_irq_chain(up);
-}
-
-static unsigned int serial8250_get_divisor(struct uart_port *port, unsigned int baud)
-{
- unsigned int quot;
-
- /*
- * Handle magic divisors for baud rates above baud_base on
- * SMSC SuperIO chips.
- */
- if ((port->flags & UPF_MAGIC_MULTIPLIER) &&
- baud == (port->uartclk/4))
- quot = 0x8001;
- else if ((port->flags & UPF_MAGIC_MULTIPLIER) &&
- baud == (port->uartclk/8))
- quot = 0x8002;
- else
- quot = uart_get_divisor(port, baud);
-
- return quot;
-}
-
-void
-serial8250_do_set_termios(struct uart_port *port, struct ktermios *termios,
- struct ktermios *old)
-{
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
- unsigned char cval, fcr = 0;
- unsigned long flags;
- unsigned int baud, quot;
- int fifo_bug = 0;
-
- switch (termios->c_cflag & CSIZE) {
- case CS5:
- cval = UART_LCR_WLEN5;
- break;
- case CS6:
- cval = UART_LCR_WLEN6;
- break;
- case CS7:
- cval = UART_LCR_WLEN7;
- break;
- default:
- case CS8:
- cval = UART_LCR_WLEN8;
- break;
- }
-
- if (termios->c_cflag & CSTOPB)
- cval |= UART_LCR_STOP;
- if (termios->c_cflag & PARENB) {
- cval |= UART_LCR_PARITY;
- if (up->bugs & UART_BUG_PARITY)
- fifo_bug = 1;
- }
- if (!(termios->c_cflag & PARODD))
- cval |= UART_LCR_EPAR;
-#ifdef CMSPAR
- if (termios->c_cflag & CMSPAR)
- cval |= UART_LCR_SPAR;
-#endif
-
- /*
- * Ask the core to calculate the divisor for us.
- */
- baud = uart_get_baud_rate(port, termios, old,
- port->uartclk / 16 / 0xffff,
- port->uartclk / 16);
- quot = serial8250_get_divisor(port, baud);
-
- /*
- * Oxford Semi 952 rev B workaround
- */
- if (up->bugs & UART_BUG_QUOT && (quot & 0xff) == 0)
- quot++;
-
- if (up->capabilities & UART_CAP_FIFO && port->fifosize > 1) {
- fcr = uart_config[port->type].fcr;
- if (baud < 2400 || fifo_bug) {
- fcr &= ~UART_FCR_TRIGGER_MASK;
- fcr |= UART_FCR_TRIGGER_1;
- }
- }
-
- /*
- * MCR-based auto flow control. When AFE is enabled, RTS will be
- * deasserted when the receive FIFO contains more characters than
- * the trigger, or the MCR RTS bit is cleared. In the case where
- * the remote UART is not using CTS auto flow control, we must
- * have sufficient FIFO entries for the latency of the remote
- * UART to respond. IOW, at least 32 bytes of FIFO.
- */
- if (up->capabilities & UART_CAP_AFE && port->fifosize >= 32) {
- up->mcr &= ~UART_MCR_AFE;
- if (termios->c_cflag & CRTSCTS)
- up->mcr |= UART_MCR_AFE;
- }
-
- /*
- * Ok, we're now changing the port state. Do it with
- * interrupts disabled.
- */
- spin_lock_irqsave(&port->lock, flags);
-
- /*
- * Update the per-port timeout.
- */
- uart_update_timeout(port, termios->c_cflag, baud);
-
- port->read_status_mask = UART_LSR_OE | UART_LSR_THRE | UART_LSR_DR;
- if (termios->c_iflag & INPCK)
- port->read_status_mask |= UART_LSR_FE | UART_LSR_PE;
- if (termios->c_iflag & (BRKINT | PARMRK))
- port->read_status_mask |= UART_LSR_BI;
-
- /*
- * Characters to ignore
- */
- port->ignore_status_mask = 0;
- if (termios->c_iflag & IGNPAR)
- port->ignore_status_mask |= UART_LSR_PE | UART_LSR_FE;
- if (termios->c_iflag & IGNBRK) {
- port->ignore_status_mask |= UART_LSR_BI;
- /*
- * If we're ignoring parity and break indicators,
- * ignore overruns too (for real raw support).
- */
- if (termios->c_iflag & IGNPAR)
- port->ignore_status_mask |= UART_LSR_OE;
- }
-
- /*
- * ignore all characters if CREAD is not set
- */
- if ((termios->c_cflag & CREAD) == 0)
- port->ignore_status_mask |= UART_LSR_DR;
-
- /*
- * CTS flow control flag and modem status interrupts
- */
- up->ier &= ~UART_IER_MSI;
- if (!(up->bugs & UART_BUG_NOMSR) &&
- UART_ENABLE_MS(&up->port, termios->c_cflag))
- up->ier |= UART_IER_MSI;
- if (up->capabilities & UART_CAP_UUE)
- up->ier |= UART_IER_UUE;
- if (up->capabilities & UART_CAP_RTOIE)
- up->ier |= UART_IER_RTOIE;
-
- serial_port_out(port, UART_IER, up->ier);
-
- if (up->capabilities & UART_CAP_EFR) {
- unsigned char efr = 0;
- /*
- * TI16C752/Startech hardware flow control. FIXME:
- * - TI16C752 requires control thresholds to be set.
- * - UART_MCR_RTS is ineffective if auto-RTS mode is enabled.
- */
- if (termios->c_cflag & CRTSCTS)
- efr |= UART_EFR_CTS;
-
- serial_port_out(port, UART_LCR, UART_LCR_CONF_MODE_B);
- if (port->flags & UPF_EXAR_EFR)
- serial_port_out(port, UART_XR_EFR, efr);
- else
- serial_port_out(port, UART_EFR, efr);
- }
-
- /* Workaround to enable 115200 baud on OMAP1510 internal ports */
- if (is_omap1510_8250(up)) {
- if (baud == 115200) {
- quot = 1;
- serial_port_out(port, UART_OMAP_OSC_12M_SEL, 1);
- } else
- serial_port_out(port, UART_OMAP_OSC_12M_SEL, 0);
- }
-
- /*
- * For NatSemi, switch to bank 2 not bank 1 to avoid resetting EXCR2;
- * otherwise just set DLAB.
- */
- if (up->capabilities & UART_NATSEMI)
- serial_port_out(port, UART_LCR, 0xe0);
- else
- serial_port_out(port, UART_LCR, cval | UART_LCR_DLAB);
-
- serial_dl_write(up, quot);
-
- /*
- * LCR DLAB must be set to enable 64-byte FIFO mode. If the FCR
- * is written without DLAB set, this mode will be disabled.
- */
- if (port->type == PORT_16750)
- serial_port_out(port, UART_FCR, fcr);
-
- serial_port_out(port, UART_LCR, cval); /* reset DLAB */
- up->lcr = cval; /* Save LCR */
- if (port->type != PORT_16750) {
- /* emulated UARTs (Lucent Venus 167x) need two steps */
- if (fcr & UART_FCR_ENABLE_FIFO)
- serial_port_out(port, UART_FCR, UART_FCR_ENABLE_FIFO);
- serial_port_out(port, UART_FCR, fcr); /* set fcr */
- }
- serial8250_set_mctrl(port, port->mctrl);
- spin_unlock_irqrestore(&port->lock, flags);
- /* Don't rewrite B0 */
- if (tty_termios_baud_rate(termios))
- tty_termios_encode_baud_rate(termios, baud, baud);
-}
-EXPORT_SYMBOL(serial8250_do_set_termios);
-
-static void
-serial8250_set_termios(struct uart_port *port, struct ktermios *termios,
- struct ktermios *old)
-{
- if (port->set_termios)
- port->set_termios(port, termios, old);
- else
- serial8250_do_set_termios(port, termios, old);
-}
-
-static void
-serial8250_set_ldisc(struct uart_port *port, int new)
-{
- if (new == N_PPS) {
- port->flags |= UPF_HARDPPS_CD;
- serial8250_enable_ms(port);
- } else
- port->flags &= ~UPF_HARDPPS_CD;
-}
-
-
-void serial8250_do_pm(struct uart_port *port, unsigned int state,
- unsigned int oldstate)
-{
- struct uart_8250_port *p =
- container_of(port, struct uart_8250_port, port);
-
- serial8250_set_sleep(p, state != 0);
-}
-EXPORT_SYMBOL(serial8250_do_pm);
-
-static void
-serial8250_pm(struct uart_port *port, unsigned int state,
- unsigned int oldstate)
-{
- if (port->pm)
- port->pm(port, state, oldstate);
- else
- serial8250_do_pm(port, state, oldstate);
-}
-
-static unsigned int serial8250_port_size(struct uart_8250_port *pt)
-{
- if (pt->port.iotype == UPIO_AU)
- return 0x1000;
- if (is_omap1_8250(pt))
- return 0x16 << pt->port.regshift;
-
- return 8 << pt->port.regshift;
-}
-
-/*
- * Resource handling.
- */
-static int serial8250_request_std_resource(struct uart_8250_port *up)
-{
- unsigned int size = serial8250_port_size(up);
- struct uart_port *port = &up->port;
- int ret = 0;
-
- switch (port->iotype) {
- case UPIO_AU:
- case UPIO_TSI:
- case UPIO_MEM32:
- case UPIO_MEM:
- if (!port->mapbase)
- break;
-
- if (!request_mem_region(port->mapbase, size, "serial")) {
- ret = -EBUSY;
- break;
- }
-
- if (port->flags & UPF_IOREMAP) {
- port->membase = ioremap_nocache(port->mapbase, size);
- if (!port->membase) {
- release_mem_region(port->mapbase, size);
- ret = -ENOMEM;
- }
- }
- break;
-
- case UPIO_HUB6:
- case UPIO_PORT:
- if (!request_region(port->iobase, size, "serial"))
- ret = -EBUSY;
- break;
- }
- return ret;
-}
-
-static void serial8250_release_std_resource(struct uart_8250_port *up)
-{
- unsigned int size = serial8250_port_size(up);
- struct uart_port *port = &up->port;
-
- switch (port->iotype) {
- case UPIO_AU:
- case UPIO_TSI:
- case UPIO_MEM32:
- case UPIO_MEM:
- if (!port->mapbase)
- break;
-
- if (port->flags & UPF_IOREMAP) {
- iounmap(port->membase);
- port->membase = NULL;
- }
-
- release_mem_region(port->mapbase, size);
- break;
-
- case UPIO_HUB6:
- case UPIO_PORT:
- release_region(port->iobase, size);
- break;
- }
-}
-
-static int serial8250_request_rsa_resource(struct uart_8250_port *up)
-{
- unsigned long start = UART_RSA_BASE << up->port.regshift;
- unsigned int size = 8 << up->port.regshift;
- struct uart_port *port = &up->port;
- int ret = -EINVAL;
-
- switch (port->iotype) {
- case UPIO_HUB6:
- case UPIO_PORT:
- start += port->iobase;
- if (request_region(start, size, "serial-rsa"))
- ret = 0;
- else
- ret = -EBUSY;
- break;
- }
-
- return ret;
-}
-
-static void serial8250_release_rsa_resource(struct uart_8250_port *up)
-{
- unsigned long offset = UART_RSA_BASE << up->port.regshift;
- unsigned int size = 8 << up->port.regshift;
- struct uart_port *port = &up->port;
-
- switch (port->iotype) {
- case UPIO_HUB6:
- case UPIO_PORT:
- release_region(port->iobase + offset, size);
- break;
- }
-}
-
-static void serial8250_release_port(struct uart_port *port)
-{
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
-
- serial8250_release_std_resource(up);
- if (port->type == PORT_RSA)
- serial8250_release_rsa_resource(up);
-}
-
-static int serial8250_request_port(struct uart_port *port)
-{
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
- int ret;
-
- if (port->type == PORT_8250_CIR)
- return -ENODEV;
-
- ret = serial8250_request_std_resource(up);
- if (ret == 0 && port->type == PORT_RSA) {
- ret = serial8250_request_rsa_resource(up);
- if (ret < 0)
- serial8250_release_std_resource(up);
- }
-
- return ret;
-}
-
-static void serial8250_config_port(struct uart_port *port, int flags)
-{
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
- int probeflags = PROBE_ANY;
- int ret;
-
- if (port->type == PORT_8250_CIR)
- return;
-
- /*
- * Find the region that we can probe for. This in turn
- * tells us whether we can probe for the type of port.
- */
- ret = serial8250_request_std_resource(up);
- if (ret < 0)
- return;
-
- ret = serial8250_request_rsa_resource(up);
- if (ret < 0)
- probeflags &= ~PROBE_RSA;
-
- if (port->iotype != up->cur_iotype)
- set_io_from_upio(port);
-
- if (flags & UART_CONFIG_TYPE)
- autoconfig(up, probeflags);
-
- /* if access method is AU, it is a 16550 with a quirk */
- if (port->type == PORT_16550A && port->iotype == UPIO_AU)
- up->bugs |= UART_BUG_NOMSR;
-
- if (port->type != PORT_UNKNOWN && flags & UART_CONFIG_IRQ)
- autoconfig_irq(up);
-
- if (port->type != PORT_RSA && probeflags & PROBE_RSA)
- serial8250_release_rsa_resource(up);
- if (port->type == PORT_UNKNOWN)
- serial8250_release_std_resource(up);
-
- /* FIXME: probably not the best place for this */
- if ((port->type == PORT_XR17V35X) ||
- (port->type == PORT_XR17D15X))
- port->handle_irq = exar_handle_irq;
-}
-
-static int
-serial8250_verify_port(struct uart_port *port, struct serial_struct *ser)
-{
- if (ser->irq >= nr_irqs || ser->irq < 0 ||
- ser->baud_base < 9600 || ser->type < PORT_UNKNOWN ||
- ser->type >= ARRAY_SIZE(uart_config) || ser->type == PORT_CIRRUS ||
- ser->type == PORT_STARTECH)
- return -EINVAL;
- return 0;
-}
-
-static const char *
-serial8250_type(struct uart_port *port)
-{
- int type = port->type;
-
- if (type >= ARRAY_SIZE(uart_config))
- type = 0;
- return uart_config[type].name;
-}
-
-static struct uart_ops serial8250_pops = {
- .tx_empty = serial8250_tx_empty,
- .set_mctrl = serial8250_set_mctrl,
- .get_mctrl = serial8250_get_mctrl,
- .stop_tx = serial8250_stop_tx,
- .start_tx = serial8250_start_tx,
- .stop_rx = serial8250_stop_rx,
- .enable_ms = serial8250_enable_ms,
- .break_ctl = serial8250_break_ctl,
- .startup = serial8250_startup,
- .shutdown = serial8250_shutdown,
- .set_termios = serial8250_set_termios,
- .set_ldisc = serial8250_set_ldisc,
- .pm = serial8250_pm,
- .type = serial8250_type,
- .release_port = serial8250_release_port,
- .request_port = serial8250_request_port,
- .config_port = serial8250_config_port,
- .verify_port = serial8250_verify_port,
-#ifdef CONFIG_CONSOLE_POLL
- .poll_get_char = serial8250_get_poll_char,
- .poll_put_char = serial8250_put_poll_char,
-#endif
-};
-
-static struct uart_8250_port serial8250_ports[UART_NR];
-
-static void (*serial8250_isa_config)(int port, struct uart_port *up,
- unsigned short *capabilities);
-
-void serial8250_set_isa_configurator(
- void (*v)(int port, struct uart_port *up, unsigned short *capabilities))
-{
- serial8250_isa_config = v;
-}
-EXPORT_SYMBOL(serial8250_set_isa_configurator);
-
-static void __init serial8250_isa_init_ports(void)
-{
- struct uart_8250_port *up;
- static int first = 1;
- int i, irqflag = 0;
-
- if (!first)
- return;
- first = 0;
-
- if (nr_uarts > UART_NR)
- nr_uarts = UART_NR;
-
- for (i = 0; i < nr_uarts; i++) {
- struct uart_8250_port *up = &serial8250_ports[i];
- struct uart_port *port = &up->port;
-
- port->line = i;
- spin_lock_init(&port->lock);
-
- init_timer(&up->timer);
- up->timer.function = serial8250_timeout;
- up->cur_iotype = 0xFF;
-
- /*
- * ALPHA_KLUDGE_MCR needs to be killed.
- */
- up->mcr_mask = ~ALPHA_KLUDGE_MCR;
- up->mcr_force = ALPHA_KLUDGE_MCR;
-
- port->ops = &serial8250_pops;
- }
-
- if (share_irqs)
- irqflag = IRQF_SHARED;
-
- for (i = 0, up = serial8250_ports;
- i < ARRAY_SIZE(old_serial_port) && i < nr_uarts;
- i++, up++) {
- struct uart_port *port = &up->port;
-
- port->iobase = old_serial_port[i].port;
- port->irq = irq_canonicalize(old_serial_port[i].irq);
- port->irqflags = old_serial_port[i].irqflags;
- port->uartclk = old_serial_port[i].baud_base * 16;
- port->flags = old_serial_port[i].flags;
- port->hub6 = old_serial_port[i].hub6;
- port->membase = old_serial_port[i].iomem_base;
- port->iotype = old_serial_port[i].io_type;
- port->regshift = old_serial_port[i].iomem_reg_shift;
- set_io_from_upio(port);
- port->irqflags |= irqflag;
- if (serial8250_isa_config != NULL)
- serial8250_isa_config(i, &up->port, &up->capabilities);
-
- }
-}
-
-static void
-serial8250_init_fixed_type_port(struct uart_8250_port *up, unsigned int type)
-{
- up->port.type = type;
- if (!up->port.fifosize)
- up->port.fifosize = uart_config[type].fifo_size;
- if (!up->tx_loadsz)
- up->tx_loadsz = uart_config[type].tx_loadsz;
- if (!up->capabilities)
- up->capabilities = uart_config[type].flags;
-}
-
-static void __init
-serial8250_register_ports(struct uart_driver *drv, struct device *dev)
-{
- int i;
-
- for (i = 0; i < nr_uarts; i++) {
- struct uart_8250_port *up = &serial8250_ports[i];
-
- if (up->port.dev)
- continue;
-
- up->port.dev = dev;
-
- if (up->port.flags & UPF_FIXED_TYPE)
- serial8250_init_fixed_type_port(up, up->port.type);
-
- uart_add_one_port(drv, &up->port);
- }
-}
-
-#ifdef CONFIG_SERIAL_8250_CONSOLE
-
-static void serial8250_console_putchar(struct uart_port *port, int ch)
-{
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
-
- wait_for_xmitr(up, UART_LSR_THRE);
- serial_port_out(port, UART_TX, ch);
-}
-
-/*
- * Print a string to the serial port, trying not to disturb
- * any possible real use of the port...
- *
- * The console_lock must be held when we get here.
- */
-static void
-serial8250_console_write(struct console *co, const char *s, unsigned int count)
-{
- struct uart_8250_port *up = &serial8250_ports[co->index];
- struct uart_port *port = &up->port;
- unsigned long flags;
- unsigned int ier;
- int locked = 1;
-
- touch_nmi_watchdog();
-
- local_irq_save(flags);
- if (port->sysrq) {
- /* serial8250_handle_irq() already took the lock */
- locked = 0;
- } else if (oops_in_progress) {
- locked = spin_trylock(&port->lock);
- } else
- spin_lock(&port->lock);
-
- /*
- * First save the IER, then disable the interrupts
- */
- ier = serial_port_in(port, UART_IER);
-
- if (up->capabilities & UART_CAP_UUE)
- serial_port_out(port, UART_IER, UART_IER_UUE);
- else
- serial_port_out(port, UART_IER, 0);
-
- uart_console_write(port, s, count, serial8250_console_putchar);
-
- /*
- * Finally, wait for transmitter to become empty
- * and restore the IER
- */
- wait_for_xmitr(up, BOTH_EMPTY);
- serial_port_out(port, UART_IER, ier);
-
- /*
- * The receive handling will happen properly because the
- * receive ready bit will still be set; it is not cleared
- * on read. However, the modem status will not be handled
- * automatically; we must call serial8250_modem_status() ourselves
- * if anything was saved in msr_saved_flags while processing with
- * interrupts off.
- */
- if (up->msr_saved_flags)
- serial8250_modem_status(up);
-
- if (locked)
- spin_unlock(&port->lock);
- local_irq_restore(flags);
-}
-
-static int __init serial8250_console_setup(struct console *co, char *options)
-{
- struct uart_port *port;
- int baud = 9600;
- int bits = 8;
- int parity = 'n';
- int flow = 'n';
-
- /*
- * Check whether an invalid uart number has been specified, and
- * if so, search for the first available port that does have
- * console support.
- */
- if (co->index >= nr_uarts)
- co->index = 0;
- port = &serial8250_ports[co->index].port;
- if (!port->iobase && !port->membase)
- return -ENODEV;
-
- if (options)
- uart_parse_options(options, &baud, &parity, &bits, &flow);
-
- return uart_set_options(port, co, baud, parity, bits, flow);
-}
-
-static int serial8250_console_early_setup(void)
-{
- return serial8250_find_port_for_earlycon();
-}
-
-static struct console serial8250_console = {
- .name = "ttyS",
- .write = serial8250_console_write,
- .device = uart_console_device,
- .setup = serial8250_console_setup,
- .early_setup = serial8250_console_early_setup,
- .flags = CON_PRINTBUFFER | CON_ANYTIME,
- .index = -1,
- .data = &serial8250_reg,
-};
-
-static int __init serial8250_console_init(void)
-{
- serial8250_isa_init_ports();
- register_console(&serial8250_console);
- return 0;
-}
-console_initcall(serial8250_console_init);
-
-int serial8250_find_port(struct uart_port *p)
-{
- int line;
- struct uart_port *port;
-
- for (line = 0; line < nr_uarts; line++) {
- port = &serial8250_ports[line].port;
- if (uart_match_port(p, port))
- return line;
- }
- return -ENODEV;
-}
-
-#define SERIAL8250_CONSOLE &serial8250_console
-#else
-#define SERIAL8250_CONSOLE NULL
-#endif
-
-static struct uart_driver serial8250_reg = {
- .owner = THIS_MODULE,
- .driver_name = "serial",
- .dev_name = "ttyS",
- .major = TTY_MAJOR,
- .minor = 64,
- .cons = SERIAL8250_CONSOLE,
-};
-
-/*
- * early_serial_setup - early registration for 8250 ports
- *
- * Set up an 8250 port structure prior to console initialisation. Using
- * it after console initialisation will cause undefined behaviour.
- */
-int __init early_serial_setup(struct uart_port *port)
-{
- struct uart_port *p;
-
- if (port->line >= ARRAY_SIZE(serial8250_ports))
- return -ENODEV;
-
- serial8250_isa_init_ports();
- p = &serial8250_ports[port->line].port;
- p->iobase = port->iobase;
- p->membase = port->membase;
- p->irq = port->irq;
- p->irqflags = port->irqflags;
- p->uartclk = port->uartclk;
- p->fifosize = port->fifosize;
- p->regshift = port->regshift;
- p->iotype = port->iotype;
- p->flags = port->flags;
- p->mapbase = port->mapbase;
- p->private_data = port->private_data;
- p->type = port->type;
- p->line = port->line;
-
- set_io_from_upio(p);
- if (port->serial_in)
- p->serial_in = port->serial_in;
- if (port->serial_out)
- p->serial_out = port->serial_out;
- if (port->handle_irq)
- p->handle_irq = port->handle_irq;
- else
- p->handle_irq = serial8250_default_handle_irq;
-
- return 0;
-}
-
-/**
- * serial8250_suspend_port - suspend one serial port
- * @line: serial line number
- *
- * Suspend one serial port.
- */
-void serial8250_suspend_port(int line)
-{
- uart_suspend_port(&serial8250_reg, &serial8250_ports[line].port);
-}
-
-/**
- * serial8250_resume_port - resume one serial port
- * @line: serial line number
- *
- * Resume one serial port.
- */
-void serial8250_resume_port(int line)
-{
- struct uart_8250_port *up = &serial8250_ports[line];
- struct uart_port *port = &up->port;
-
- if (up->capabilities & UART_NATSEMI) {
- /* Ensure it's still in high speed mode */
- serial_port_out(port, UART_LCR, 0xE0);
-
- ns16550a_goto_highspeed(up);
-
- serial_port_out(port, UART_LCR, 0);
- port->uartclk = 921600*16;
- }
- uart_resume_port(&serial8250_reg, port);
-}
-
-/*
- * Register a set of serial devices attached to a platform device. The
- * list is terminated with a zero flags entry, which means we expect
- * all entries to have at least UPF_BOOT_AUTOCONF set.
- */
-static int serial8250_probe(struct platform_device *dev)
-{
- struct plat_serial8250_port *p = dev->dev.platform_data;
- struct uart_8250_port uart;
- int ret, i, irqflag = 0;
-
- memset(&uart, 0, sizeof(uart));
-
- if (share_irqs)
- irqflag = IRQF_SHARED;
-
- for (i = 0; p && p->flags != 0; p++, i++) {
- uart.port.iobase = p->iobase;
- uart.port.membase = p->membase;
- uart.port.irq = p->irq;
- uart.port.irqflags = p->irqflags;
- uart.port.uartclk = p->uartclk;
- uart.port.regshift = p->regshift;
- uart.port.iotype = p->iotype;
- uart.port.flags = p->flags;
- uart.port.mapbase = p->mapbase;
- uart.port.hub6 = p->hub6;
- uart.port.private_data = p->private_data;
- uart.port.type = p->type;
- uart.port.serial_in = p->serial_in;
- uart.port.serial_out = p->serial_out;
- uart.port.handle_irq = p->handle_irq;
- uart.port.handle_break = p->handle_break;
- uart.port.set_termios = p->set_termios;
- uart.port.pm = p->pm;
- uart.port.dev = &dev->dev;
- uart.port.irqflags |= irqflag;
- ret = serial8250_register_8250_port(&uart);
- if (ret < 0) {
- dev_err(&dev->dev, "unable to register port at index %d "
- "(IO%lx MEM%llx IRQ%d): %d\n", i,
- p->iobase, (unsigned long long)p->mapbase,
- p->irq, ret);
- }
- }
- return 0;
-}
-
-/*
- * Remove serial ports registered against a platform device.
- */
-static int serial8250_remove(struct platform_device *dev)
-{
- int i;
-
- for (i = 0; i < nr_uarts; i++) {
- struct uart_8250_port *up = &serial8250_ports[i];
-
- if (up->port.dev == &dev->dev)
- serial8250_unregister_port(i);
- }
- return 0;
-}
-
-static int serial8250_suspend(struct platform_device *dev, pm_message_t state)
-{
- int i;
-
- for (i = 0; i < UART_NR; i++) {
- struct uart_8250_port *up = &serial8250_ports[i];
-
- if (up->port.type != PORT_UNKNOWN && up->port.dev == &dev->dev)
- uart_suspend_port(&serial8250_reg, &up->port);
- }
-
- return 0;
-}
-
-static int serial8250_resume(struct platform_device *dev)
-{
- int i;
-
- for (i = 0; i < UART_NR; i++) {
- struct uart_8250_port *up = &serial8250_ports[i];
-
- if (up->port.type != PORT_UNKNOWN && up->port.dev == &dev->dev)
- serial8250_resume_port(i);
- }
-
- return 0;
-}
-
-static struct platform_driver serial8250_isa_driver = {
- .probe = serial8250_probe,
- .remove = serial8250_remove,
- .suspend = serial8250_suspend,
- .resume = serial8250_resume,
- .driver = {
- .name = "serial8250",
- .owner = THIS_MODULE,
- },
-};
-
-/*
- * This "device" covers _all_ ISA 8250-compatible serial devices listed
- * in the table in include/asm/serial.h
- */
-static struct platform_device *serial8250_isa_devs;
-
-/*
- * serial8250_register_8250_port and serial8250_unregister_port allow
- * 16x50 serial ports to be configured at run-time, to support PCMCIA
- * modems and PCI multiport cards.
- */
-static DEFINE_MUTEX(serial_mutex);
-
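-/*
- * Choose the slot for a port being registered: prefer an exact match,
- * then a never-used free slot (iobase still zero), and finally any
- * slot without a real port type attached.
- */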
-static struct uart_8250_port *serial8250_find_match_or_unused(struct uart_port *port)
-{
- int i;
-
- /*
- * First, find a port entry which matches.
- */
- for (i = 0; i < nr_uarts; i++)
- if (uart_match_port(&serial8250_ports[i].port, port))
- return &serial8250_ports[i];
-
- /*
- * We didn't find a matching entry, so look for the first
- * free entry. We look for one which hasn't been previously
- * used (indicated by zero iobase).
- */
- for (i = 0; i < nr_uarts; i++)
- if (serial8250_ports[i].port.type == PORT_UNKNOWN &&
- serial8250_ports[i].port.iobase == 0)
- return &serial8250_ports[i];
-
- /*
- * That also failed. Last resort is to find any entry which
- * doesn't have a real port associated with it.
- */
- for (i = 0; i < nr_uarts; i++)
- if (serial8250_ports[i].port.type == PORT_UNKNOWN)
- return &serial8250_ports[i];
-
- return NULL;
-}
-
-/**
- * serial8250_register_8250_port - register a serial port
- * @up: serial port template
- *
- * Configure the serial port specified by the request. If the
- * port exists and is in use, it is hung up and unregistered
- * first.
- *
- * The port is then probed and, if necessary, the IRQ is autodetected.
- * If this fails an error is returned.
- *
- * On success the port is ready to use and the line number is returned.
- */
-int serial8250_register_8250_port(struct uart_8250_port *up)
-{
- struct uart_8250_port *uart;
- int ret = -ENOSPC;
-
- if (up->port.uartclk == 0)
- return -EINVAL;
-
- mutex_lock(&serial_mutex);
-
- uart = serial8250_find_match_or_unused(&up->port);
- if (uart && uart->port.type != PORT_8250_CIR) {
- if (uart->port.dev)
- uart_remove_one_port(&serial8250_reg, &uart->port);
-
- uart->port.iobase = up->port.iobase;
- uart->port.membase = up->port.membase;
- uart->port.irq = up->port.irq;
- uart->port.irqflags = up->port.irqflags;
- uart->port.uartclk = up->port.uartclk;
- uart->port.fifosize = up->port.fifosize;
- uart->port.regshift = up->port.regshift;
- uart->port.iotype = up->port.iotype;
- uart->port.flags = up->port.flags | UPF_BOOT_AUTOCONF;
- uart->bugs = up->bugs;
- uart->port.mapbase = up->port.mapbase;
- uart->port.private_data = up->port.private_data;
- uart->port.fifosize = up->port.fifosize;
- uart->tx_loadsz = up->tx_loadsz;
- uart->capabilities = up->capabilities;
-
- if (up->port.dev)
- uart->port.dev = up->port.dev;
-
- if (up->port.flags & UPF_FIXED_TYPE)
- serial8250_init_fixed_type_port(uart, up->port.type);
-
- set_io_from_upio(&uart->port);
- /* Possibly override default I/O functions. */
- if (up->port.serial_in)
- uart->port.serial_in = up->port.serial_in;
- if (up->port.serial_out)
- uart->port.serial_out = up->port.serial_out;
- if (up->port.handle_irq)
- uart->port.handle_irq = up->port.handle_irq;
- /* Possibly override set_termios call */
- if (up->port.set_termios)
- uart->port.set_termios = up->port.set_termios;
- if (up->port.pm)
- uart->port.pm = up->port.pm;
- if (up->port.handle_break)
- uart->port.handle_break = up->port.handle_break;
- if (up->dl_read)
- uart->dl_read = up->dl_read;
- if (up->dl_write)
- uart->dl_write = up->dl_write;
- if (up->dma)
- uart->dma = up->dma;
-
- if (serial8250_isa_config != NULL)
- serial8250_isa_config(0, &uart->port,
- &uart->capabilities);
-
- ret = uart_add_one_port(&serial8250_reg, &uart->port);
- if (ret == 0)
- ret = uart->port.line;
- }
- mutex_unlock(&serial_mutex);
-
- return ret;
-}
-EXPORT_SYMBOL(serial8250_register_8250_port);
-
-/**
- * serial8250_unregister_port - remove a 16x50 serial port at runtime
- * @line: serial line number
- *
- * Remove one serial port. This may not be called from interrupt
- * context. We hand the port back to our control.
- */
-void serial8250_unregister_port(int line)
-{
- struct uart_8250_port *uart = &serial8250_ports[line];
-
- mutex_lock(&serial_mutex);
- uart_remove_one_port(&serial8250_reg, &uart->port);
- if (serial8250_isa_devs) {
- uart->port.flags &= ~UPF_BOOT_AUTOCONF;
- uart->port.type = PORT_UNKNOWN;
- uart->port.dev = &serial8250_isa_devs->dev;
- uart->capabilities = uart_config[uart->port.type].flags;
- uart_add_one_port(&serial8250_reg, &uart->port);
- } else {
- uart->port.dev = NULL;
- }
- mutex_unlock(&serial_mutex);
-}
-EXPORT_SYMBOL(serial8250_unregister_port);
-
-static int __init serial8250_init(void)
-{
- int ret;
-
- serial8250_isa_init_ports();
-
- printk(KERN_INFO "Serial: 8250/16550 driver, "
- "%d ports, IRQ sharing %sabled\n", nr_uarts,
- share_irqs ? "en" : "dis");
-
-#ifdef CONFIG_SPARC
- ret = sunserial_register_minors(&serial8250_reg, UART_NR);
-#else
- serial8250_reg.nr = UART_NR;
- ret = uart_register_driver(&serial8250_reg);
-#endif
- if (ret)
- goto out;
-
- ret = serial8250_pnp_init();
- if (ret)
- goto unreg_uart_drv;
-
- serial8250_isa_devs = platform_device_alloc("serial8250",
- PLAT8250_DEV_LEGACY);
- if (!serial8250_isa_devs) {
- ret = -ENOMEM;
- goto unreg_pnp;
- }
-
- ret = platform_device_add(serial8250_isa_devs);
- if (ret)
- goto put_dev;
-
- serial8250_register_ports(&serial8250_reg, &serial8250_isa_devs->dev);
-
- ret = platform_driver_register(&serial8250_isa_driver);
- if (ret == 0)
- goto out;
-
- platform_device_del(serial8250_isa_devs);
-put_dev:
- platform_device_put(serial8250_isa_devs);
-unreg_pnp:
- serial8250_pnp_exit();
-unreg_uart_drv:
-#ifdef CONFIG_SPARC
- sunserial_unregister_minors(&serial8250_reg, UART_NR);
-#else
- uart_unregister_driver(&serial8250_reg);
-#endif
-out:
- return ret;
-}
-
-static void __exit serial8250_exit(void)
-{
- struct platform_device *isa_dev = serial8250_isa_devs;
-
- /*
- * This tells serial8250_unregister_port() not to re-register
- * the ports (thereby making serial8250_isa_driver permanently
- * in use).
- */
- serial8250_isa_devs = NULL;
-
- platform_driver_unregister(&serial8250_isa_driver);
- platform_device_unregister(isa_dev);
-
- serial8250_pnp_exit();
-
-#ifdef CONFIG_SPARC
- sunserial_unregister_minors(&serial8250_reg, UART_NR);
-#else
- uart_unregister_driver(&serial8250_reg);
-#endif
-}
-
-module_init(serial8250_init);
-module_exit(serial8250_exit);
-
-EXPORT_SYMBOL(serial8250_suspend_port);
-EXPORT_SYMBOL(serial8250_resume_port);
-
-MODULE_LICENSE("GPL");
-MODULE_DESCRIPTION("Generic 8250/16x50 serial driver");
-
-module_param(share_irqs, uint, 0644);
-MODULE_PARM_DESC(share_irqs, "Share IRQs with other non-8250/16x50 devices"
- " (unsafe)");
-
-module_param(nr_uarts, uint, 0644);
-MODULE_PARM_DESC(nr_uarts, "Maximum number of UARTs supported. (1-" __MODULE_STRING(CONFIG_SERIAL_8250_NR_UARTS) ")");
-
-module_param(skip_txen_test, uint, 0644);
-MODULE_PARM_DESC(skip_txen_test, "Skip checking for the TXEN bug at init time");
-
-#ifdef CONFIG_SERIAL_8250_RSA
-module_param_array(probe_rsa, ulong, &probe_rsa_count, 0444);
-MODULE_PARM_DESC(probe_rsa, "Probe I/O ports for RSA");
-#endif
-MODULE_ALIAS_CHARDEV_MAJOR(TTY_MAJOR);
-
-#ifndef MODULE
-/* This module was renamed to 8250_core in 3.7. Keep the old "8250" name
- * working as well for the module options so we don't break people. We
- * need to keep the names identical, and the convenience macros will happily
- * refuse to let us do that by failing the build with redefinition errors
- * of global variables. So we stick them inside a dummy function to avoid
- * those conflicts. The options still get parsed, and the redefined
- * MODULE_PARAM_PREFIX lets us keep the "8250." syntax alive.
- *
- * This is hacky. I'm sorry.
- */
-static void __used s8250_options(void)
-{
-#undef MODULE_PARAM_PREFIX
-#define MODULE_PARAM_PREFIX "8250."
-
- module_param_cb(share_irqs, &param_ops_uint, &share_irqs, 0644);
- module_param_cb(nr_uarts, &param_ops_uint, &nr_uarts, 0644);
- module_param_cb(skip_txen_test, &param_ops_uint, &skip_txen_test, 0644);
-#ifdef CONFIG_SERIAL_8250_RSA
- __module_param_call(MODULE_PARAM_PREFIX, probe_rsa,
- &param_array_ops, .arr = &__param_arr_probe_rsa,
- 0444, -1);
-#endif
-}
-#else
-MODULE_ALIAS("8250");
-#endif
--- /dev/null
+/*
+ * Driver for 8250/16550-type serial ports
+ *
+ * Based on drivers/char/serial.c, by Linus Torvalds, Theodore Ts'o.
+ *
+ * Copyright (C) 2001 Russell King.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * A note about mapbase / membase
+ *
+ * mapbase is the physical address of the IO port.
+ * membase is an 'ioremapped' cookie.
+ */
+
+#if defined(CONFIG_SERIAL_8250_CONSOLE) && defined(CONFIG_MAGIC_SYSRQ)
+#define SUPPORT_SYSRQ
+#endif
+
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/ioport.h>
+#include <linux/init.h>
+#include <linux/console.h>
+#include <linux/sysrq.h>
+#include <linux/delay.h>
+#include <linux/platform_device.h>
+#include <linux/tty.h>
+#include <linux/ratelimit.h>
+#include <linux/tty_flip.h>
+#include <linux/serial_reg.h>
+#include <linux/serial_core.h>
+#include <linux/serial.h>
+#include <linux/serial_8250.h>
+#include <linux/nmi.h>
+#include <linux/mutex.h>
+#include <linux/slab.h>
+#ifdef CONFIG_SPARC
+#include <linux/sunserialcore.h>
+#endif
+
+#include <asm/io.h>
+#include <asm/irq.h>
+
+#include "8250.h"
+
+/*
+ * Configuration:
+ * share_irqs - whether we pass IRQF_SHARED to request_irq(). This option
+ * is unsafe when used on edge-triggered interrupts.
+ */
+static unsigned int share_irqs = SERIAL8250_SHARE_IRQS;
+
+static unsigned int nr_uarts = CONFIG_SERIAL_8250_RUNTIME_UARTS;
+
+static struct uart_driver serial8250_reg;
+
+static int serial_index(struct uart_port *port)
+{
+ return (serial8250_reg.minor - 64) + port->line;
+}
+
+static unsigned int skip_txen_test; /* force skip of txen test at init time */
+
+/*
+ * Debugging.
+ */
+#if 0
+#define DEBUG_AUTOCONF(fmt...) printk(fmt)
+#else
+#define DEBUG_AUTOCONF(fmt...) do { } while (0)
+#endif
+
+#if 0
+#define DEBUG_INTR(fmt...) printk(fmt)
+#else
+#define DEBUG_INTR(fmt...) do { } while (0)
+#endif
+
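+/*
+ * Upper bound on the number of passes the shared-interrupt loop in
+ * serial8250_interrupt() may make over the port list before giving up.
+ */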
+#define PASS_LIMIT 512
+
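+/* LSR bits that must both be set before the transmitter is considered idle */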
+#define BOTH_EMPTY (UART_LSR_TEMT | UART_LSR_THRE)
+
+
+#ifdef CONFIG_SERIAL_8250_DETECT_IRQ
+#define CONFIG_SERIAL_DETECT_IRQ 1
+#endif
+#ifdef CONFIG_SERIAL_8250_MANY_PORTS
+#define CONFIG_SERIAL_MANY_PORTS 1
+#endif
+
+/*
+ * HUB6 is always on. This will be removed once the header
+ * files have been cleaned.
+ */
+#define CONFIG_HUB6 1
+
+#include <asm/serial.h>
+/*
+ * SERIAL_PORT_DFNS tells us about built-in ports that have no
+ * standard enumeration mechanism. Platforms that can find all
+ * serial ports via mechanisms like ACPI or PCI need not supply it.
+ */
+#ifndef SERIAL_PORT_DFNS
+#define SERIAL_PORT_DFNS
+#endif
+
+static const struct old_serial_port old_serial_port[] = {
+ SERIAL_PORT_DFNS /* defined in asm/serial.h */
+};
+
+#define UART_NR CONFIG_SERIAL_8250_NR_UARTS
+
+#ifdef CONFIG_SERIAL_8250_RSA
+
+#define PORT_RSA_MAX 4
+static unsigned long probe_rsa[PORT_RSA_MAX];
+static unsigned int probe_rsa_count;
+#endif /* CONFIG_SERIAL_8250_RSA */
+
+struct irq_info {
+ struct hlist_node node;
+ int irq;
+ spinlock_t lock; /* Protects the list, not the hash */
+ struct list_head *head;
+};
+
+#define NR_IRQ_HASH 32 /* Can be adjusted later */
+static struct hlist_head irq_lists[NR_IRQ_HASH];
+static DEFINE_MUTEX(hash_mutex); /* Used to walk the hash */
+
+/*
+ * Here we define the default xmit fifo size used for each type of UART.
+ */
+static const struct serial8250_config uart_config[] = {
+ [PORT_UNKNOWN] = {
+ .name = "unknown",
+ .fifo_size = 1,
+ .tx_loadsz = 1,
+ },
+ [PORT_8250] = {
+ .name = "8250",
+ .fifo_size = 1,
+ .tx_loadsz = 1,
+ },
+ [PORT_16450] = {
+ .name = "16450",
+ .fifo_size = 1,
+ .tx_loadsz = 1,
+ },
+ [PORT_16550] = {
+ .name = "16550",
+ .fifo_size = 1,
+ .tx_loadsz = 1,
+ },
+ [PORT_16550A] = {
+ .name = "16550A",
+ .fifo_size = 16,
+ .tx_loadsz = 16,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
+ .flags = UART_CAP_FIFO,
+ },
+ [PORT_CIRRUS] = {
+ .name = "Cirrus",
+ .fifo_size = 1,
+ .tx_loadsz = 1,
+ },
+ [PORT_16650] = {
+ .name = "ST16650",
+ .fifo_size = 1,
+ .tx_loadsz = 1,
+ .flags = UART_CAP_FIFO | UART_CAP_EFR | UART_CAP_SLEEP,
+ },
+ [PORT_16650V2] = {
+ .name = "ST16650V2",
+ .fifo_size = 32,
+ .tx_loadsz = 16,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_01 |
+ UART_FCR_T_TRIG_00,
+ .flags = UART_CAP_FIFO | UART_CAP_EFR | UART_CAP_SLEEP,
+ },
+ [PORT_16750] = {
+ .name = "TI16750",
+ .fifo_size = 64,
+ .tx_loadsz = 64,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10 |
+ UART_FCR7_64BYTE,
+ .flags = UART_CAP_FIFO | UART_CAP_SLEEP | UART_CAP_AFE,
+ },
+ [PORT_STARTECH] = {
+ .name = "Startech",
+ .fifo_size = 1,
+ .tx_loadsz = 1,
+ },
+ [PORT_16C950] = {
+ .name = "16C950/954",
+ .fifo_size = 128,
+ .tx_loadsz = 128,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
+ /* UART_CAP_EFR breaks the Billionton CF Bluetooth card. */
+ .flags = UART_CAP_FIFO | UART_CAP_SLEEP,
+ },
+ [PORT_16654] = {
+ .name = "ST16654",
+ .fifo_size = 64,
+ .tx_loadsz = 32,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_01 |
+ UART_FCR_T_TRIG_10,
+ .flags = UART_CAP_FIFO | UART_CAP_EFR | UART_CAP_SLEEP,
+ },
+ [PORT_16850] = {
+ .name = "XR16850",
+ .fifo_size = 128,
+ .tx_loadsz = 128,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
+ .flags = UART_CAP_FIFO | UART_CAP_EFR | UART_CAP_SLEEP,
+ },
+ [PORT_RSA] = {
+ .name = "RSA",
+ .fifo_size = 2048,
+ .tx_loadsz = 2048,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_11,
+ .flags = UART_CAP_FIFO,
+ },
+ [PORT_NS16550A] = {
+ .name = "NS16550A",
+ .fifo_size = 16,
+ .tx_loadsz = 16,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
+ .flags = UART_CAP_FIFO | UART_NATSEMI,
+ },
+ [PORT_XSCALE] = {
+ .name = "XScale",
+ .fifo_size = 32,
+ .tx_loadsz = 32,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
+ .flags = UART_CAP_FIFO | UART_CAP_UUE | UART_CAP_RTOIE,
+ },
+ [PORT_OCTEON] = {
+ .name = "OCTEON",
+ .fifo_size = 64,
+ .tx_loadsz = 64,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
+ .flags = UART_CAP_FIFO,
+ },
+ [PORT_AR7] = {
+ .name = "AR7",
+ .fifo_size = 16,
+ .tx_loadsz = 16,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_00,
+ .flags = UART_CAP_FIFO | UART_CAP_AFE,
+ },
+ [PORT_U6_16550A] = {
+ .name = "U6_16550A",
+ .fifo_size = 64,
+ .tx_loadsz = 64,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
+ .flags = UART_CAP_FIFO | UART_CAP_AFE,
+ },
+ [PORT_TEGRA] = {
+ .name = "Tegra",
+ .fifo_size = 32,
+ .tx_loadsz = 8,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_01 |
+ UART_FCR_T_TRIG_01,
+ .flags = UART_CAP_FIFO | UART_CAP_RTOIE,
+ },
+ [PORT_XR17D15X] = {
+ .name = "XR17D15X",
+ .fifo_size = 64,
+ .tx_loadsz = 64,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
+ .flags = UART_CAP_FIFO | UART_CAP_AFE | UART_CAP_EFR |
+ UART_CAP_SLEEP,
+ },
+ [PORT_XR17V35X] = {
+ .name = "XR17V35X",
+ .fifo_size = 256,
+ .tx_loadsz = 256,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_11 |
+ UART_FCR_T_TRIG_11,
+ .flags = UART_CAP_FIFO | UART_CAP_AFE | UART_CAP_EFR |
+ UART_CAP_SLEEP,
+ },
+ [PORT_LPC3220] = {
+ .name = "LPC3220",
+ .fifo_size = 64,
+ .tx_loadsz = 32,
+ .fcr = UART_FCR_DMA_SELECT | UART_FCR_ENABLE_FIFO |
+ UART_FCR_R_TRIG_00 | UART_FCR_T_TRIG_00,
+ .flags = UART_CAP_FIFO,
+ },
+ [PORT_BRCM_TRUMANAGE] = {
+ .name = "TruManage",
+ .fifo_size = 1,
+ .tx_loadsz = 1024,
+ .flags = UART_CAP_HFIFO,
+ },
+ [PORT_8250_CIR] = {
+ .name = "CIR port"
+ },
+ [PORT_ALTR_16550_F32] = {
+ .name = "Altera 16550 FIFO32",
+ .fifo_size = 32,
+ .tx_loadsz = 32,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
+ .flags = UART_CAP_FIFO | UART_CAP_AFE,
+ },
+ [PORT_ALTR_16550_F64] = {
+ .name = "Altera 16550 FIFO64",
+ .fifo_size = 64,
+ .tx_loadsz = 64,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
+ .flags = UART_CAP_FIFO | UART_CAP_AFE,
+ },
+ [PORT_ALTR_16550_F128] = {
+ .name = "Altera 16550 FIFO128",
+ .fifo_size = 128,
+ .tx_loadsz = 128,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
+ .flags = UART_CAP_FIFO | UART_CAP_AFE,
+ },
+};
+
+/* UART divisor latch read */
+static int default_serial_dl_read(struct uart_8250_port *up)
+{
+ return serial_in(up, UART_DLL) | serial_in(up, UART_DLM) << 8;
+}
+
+/* UART divisor latch write */
+static void default_serial_dl_write(struct uart_8250_port *up, int value)
+{
+ serial_out(up, UART_DLL, value & 0xff);
+ serial_out(up, UART_DLM, value >> 8 & 0xff);
+}
+
+#if defined(CONFIG_MIPS_ALCHEMY) || defined(CONFIG_SERIAL_8250_RT288X)
+
+/* Au1x00/RT288x UART hardware has a weird register layout */
+static const u8 au_io_in_map[] = {
+ [UART_RX] = 0,
+ [UART_IER] = 2,
+ [UART_IIR] = 3,
+ [UART_LCR] = 5,
+ [UART_MCR] = 6,
+ [UART_LSR] = 7,
+ [UART_MSR] = 8,
+};
+
+static const u8 au_io_out_map[] = {
+ [UART_TX] = 1,
+ [UART_IER] = 2,
+ [UART_FCR] = 4,
+ [UART_LCR] = 5,
+ [UART_MCR] = 6,
+};
+
+static unsigned int au_serial_in(struct uart_port *p, int offset)
+{
+ offset = au_io_in_map[offset] << p->regshift;
+ return __raw_readl(p->membase + offset);
+}
+
+static void au_serial_out(struct uart_port *p, int offset, int value)
+{
+ offset = au_io_out_map[offset] << p->regshift;
+ __raw_writel(value, p->membase + offset);
+}
+
+/* The Au1x00 doesn't have a standard divisor latch */
+static int au_serial_dl_read(struct uart_8250_port *up)
+{
+ return __raw_readl(up->port.membase + 0x28);
+}
+
+static void au_serial_dl_write(struct uart_8250_port *up, int value)
+{
+ __raw_writel(value, up->port.membase + 0x28);
+}
+
+#endif
+
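+/*
+ * HUB6 cards multiplex several ports behind a single I/O region: each
+ * access first writes a port/register select value to the base address,
+ * then transfers the data through the register that follows it.
+ */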
+static unsigned int hub6_serial_in(struct uart_port *p, int offset)
+{
+ offset = offset << p->regshift;
+ outb(p->hub6 - 1 + offset, p->iobase);
+ return inb(p->iobase + 1);
+}
+
+static void hub6_serial_out(struct uart_port *p, int offset, int value)
+{
+ offset = offset << p->regshift;
+ outb(p->hub6 - 1 + offset, p->iobase);
+ outb(value, p->iobase + 1);
+}
+
+static unsigned int mem_serial_in(struct uart_port *p, int offset)
+{
+ offset = offset << p->regshift;
+ return readb(p->membase + offset);
+}
+
+static void mem_serial_out(struct uart_port *p, int offset, int value)
+{
+ offset = offset << p->regshift;
+ writeb(value, p->membase + offset);
+}
+
+static void mem32_serial_out(struct uart_port *p, int offset, int value)
+{
+ offset = offset << p->regshift;
+ writel(value, p->membase + offset);
+}
+
+static unsigned int mem32_serial_in(struct uart_port *p, int offset)
+{
+ offset = offset << p->regshift;
+ return readl(p->membase + offset);
+}
+
+static unsigned int io_serial_in(struct uart_port *p, int offset)
+{
+ offset = offset << p->regshift;
+ return inb(p->iobase + offset);
+}
+
+static void io_serial_out(struct uart_port *p, int offset, int value)
+{
+ offset = offset << p->regshift;
+ outb(value, p->iobase + offset);
+}
+
+static int serial8250_default_handle_irq(struct uart_port *port);
+static int exar_handle_irq(struct uart_port *port);
+
+static void set_io_from_upio(struct uart_port *p)
+{
+ struct uart_8250_port *up =
+ container_of(p, struct uart_8250_port, port);
+
+ up->dl_read = default_serial_dl_read;
+ up->dl_write = default_serial_dl_write;
+
+ switch (p->iotype) {
+ case UPIO_HUB6:
+ p->serial_in = hub6_serial_in;
+ p->serial_out = hub6_serial_out;
+ break;
+
+ case UPIO_MEM:
+ p->serial_in = mem_serial_in;
+ p->serial_out = mem_serial_out;
+ break;
+
+ case UPIO_MEM32:
+ p->serial_in = mem32_serial_in;
+ p->serial_out = mem32_serial_out;
+ break;
+
+#if defined(CONFIG_MIPS_ALCHEMY) || defined(CONFIG_SERIAL_8250_RT288X)
+ case UPIO_AU:
+ p->serial_in = au_serial_in;
+ p->serial_out = au_serial_out;
+ up->dl_read = au_serial_dl_read;
+ up->dl_write = au_serial_dl_write;
+ break;
+#endif
+
+ default:
+ p->serial_in = io_serial_in;
+ p->serial_out = io_serial_out;
+ break;
+ }
+ /* Remember loaded iotype */
+ up->cur_iotype = p->iotype;
+ p->handle_irq = serial8250_default_handle_irq;
+}
+
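+/*
+ * Write a register and, for memory-mapped ports, read LCR back (a
+ * side-effect-free register) so the posted write has reached the
+ * device before the caller continues.
+ */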
+static void
+serial_port_out_sync(struct uart_port *p, int offset, int value)
+{
+ switch (p->iotype) {
+ case UPIO_MEM:
+ case UPIO_MEM32:
+ case UPIO_AU:
+ p->serial_out(p, offset, value);
+ p->serial_in(p, UART_LCR); /* safe, no side-effects */
+ break;
+ default:
+ p->serial_out(p, offset, value);
+ }
+}
+
+/*
+ * For the 16C950
+ */
+static void serial_icr_write(struct uart_8250_port *up, int offset, int value)
+{
+ serial_out(up, UART_SCR, offset);
+ serial_out(up, UART_ICR, value);
+}
+
+static unsigned int serial_icr_read(struct uart_8250_port *up, int offset)
+{
+ unsigned int value;
+
+ serial_icr_write(up, UART_ACR, up->acr | UART_ACR_ICRRD);
+ serial_out(up, UART_SCR, offset);
+ value = serial_in(up, UART_ICR);
+ serial_icr_write(up, UART_ACR, up->acr);
+
+ return value;
+}
+
+/*
+ * FIFO support.
+ */
+static void serial8250_clear_fifos(struct uart_8250_port *p)
+{
+ if (p->capabilities & UART_CAP_FIFO) {
+ serial_out(p, UART_FCR, UART_FCR_ENABLE_FIFO);
+ serial_out(p, UART_FCR, UART_FCR_ENABLE_FIFO |
+ UART_FCR_CLEAR_RCVR | UART_FCR_CLEAR_XMIT);
+ serial_out(p, UART_FCR, 0);
+ }
+}
+
+void serial8250_clear_and_reinit_fifos(struct uart_8250_port *p)
+{
+ unsigned char fcr;
+
+ serial8250_clear_fifos(p);
+ fcr = uart_config[p->port.type].fcr;
+ serial_out(p, UART_FCR, fcr);
+}
+EXPORT_SYMBOL_GPL(serial8250_clear_and_reinit_fifos);
+
+/*
+ * IER sleep support. UARTs which have EFRs need the "extended
+ * capability" bit enabled. Note that on XR16C850s, we need to
+ * reset LCR to write to IER.
+ */
+static void serial8250_set_sleep(struct uart_8250_port *p, int sleep)
+{
+ /*
+ * Exar UARTs have a SLEEP register that enables or disables
+	 * sleep mode separately for each UART. On the XR17V35x the
+ * register is accessible to each UART at the UART_EXAR_SLEEP
+ * offset but the UART channel may only write to the corresponding
+ * bit.
+ */
+ if ((p->port.type == PORT_XR17V35X) ||
+ (p->port.type == PORT_XR17D15X)) {
+ serial_out(p, UART_EXAR_SLEEP, 0xff);
+ return;
+ }
+
+ if (p->capabilities & UART_CAP_SLEEP) {
+ if (p->capabilities & UART_CAP_EFR) {
+ serial_out(p, UART_LCR, UART_LCR_CONF_MODE_B);
+ serial_out(p, UART_EFR, UART_EFR_ECB);
+ serial_out(p, UART_LCR, 0);
+ }
+ serial_out(p, UART_IER, sleep ? UART_IERX_SLEEP : 0);
+ if (p->capabilities & UART_CAP_EFR) {
+ serial_out(p, UART_LCR, UART_LCR_CONF_MODE_B);
+ serial_out(p, UART_EFR, 0);
+ serial_out(p, UART_LCR, 0);
+ }
+ }
+}
+
+#ifdef CONFIG_SERIAL_8250_RSA
+/*
+ * Attempts to turn on the RSA FIFO. Returns zero on failure.
+ * We set the port uart clock rate if we succeed.
+ */
+static int __enable_rsa(struct uart_8250_port *up)
+{
+ unsigned char mode;
+ int result;
+
+ mode = serial_in(up, UART_RSA_MSR);
+ result = mode & UART_RSA_MSR_FIFO;
+
+ if (!result) {
+ serial_out(up, UART_RSA_MSR, mode | UART_RSA_MSR_FIFO);
+ mode = serial_in(up, UART_RSA_MSR);
+ result = mode & UART_RSA_MSR_FIFO;
+ }
+
+ if (result)
+ up->port.uartclk = SERIAL_RSA_BAUD_BASE * 16;
+
+ return result;
+}
+
+static void enable_rsa(struct uart_8250_port *up)
+{
+ if (up->port.type == PORT_RSA) {
+ if (up->port.uartclk != SERIAL_RSA_BAUD_BASE * 16) {
+ spin_lock_irq(&up->port.lock);
+ __enable_rsa(up);
+ spin_unlock_irq(&up->port.lock);
+ }
+ if (up->port.uartclk == SERIAL_RSA_BAUD_BASE * 16)
+ serial_out(up, UART_RSA_FRR, 0);
+ }
+}
+
+/*
+ * Attempts to turn off the RSA FIFO and, on success, restores the
+ * low-speed uart clock rate. It is unknown why the original code
+ * disabled interrupts here; the behaviour is preserved by taking
+ * the port spinlock around the register accesses.
+ */
+static void disable_rsa(struct uart_8250_port *up)
+{
+ unsigned char mode;
+ int result;
+
+ if (up->port.type == PORT_RSA &&
+ up->port.uartclk == SERIAL_RSA_BAUD_BASE * 16) {
+ spin_lock_irq(&up->port.lock);
+
+ mode = serial_in(up, UART_RSA_MSR);
+ result = !(mode & UART_RSA_MSR_FIFO);
+
+ if (!result) {
+ serial_out(up, UART_RSA_MSR, mode & ~UART_RSA_MSR_FIFO);
+ mode = serial_in(up, UART_RSA_MSR);
+ result = !(mode & UART_RSA_MSR_FIFO);
+ }
+
+ if (result)
+ up->port.uartclk = SERIAL_RSA_BAUD_BASE_LO * 16;
+ spin_unlock_irq(&up->port.lock);
+ }
+}
+#endif /* CONFIG_SERIAL_8250_RSA */
+
+/*
+ * This is a quickie test to see how big the FIFO is.
+ * It doesn't work all the time, more's the pity.
+ */
+static int size_fifo(struct uart_8250_port *up)
+{
+ unsigned char old_fcr, old_mcr, old_lcr;
+ unsigned short old_dl;
+ int count;
+
+ old_lcr = serial_in(up, UART_LCR);
+ serial_out(up, UART_LCR, 0);
+ old_fcr = serial_in(up, UART_FCR);
+ old_mcr = serial_in(up, UART_MCR);
+ serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO |
+ UART_FCR_CLEAR_RCVR | UART_FCR_CLEAR_XMIT);
+ serial_out(up, UART_MCR, UART_MCR_LOOP);
+ serial_out(up, UART_LCR, UART_LCR_CONF_MODE_A);
+ old_dl = serial_dl_read(up);
+ serial_dl_write(up, 0x0001);
+ serial_out(up, UART_LCR, 0x03);
+ for (count = 0; count < 256; count++)
+ serial_out(up, UART_TX, count);
+	mdelay(20); /* FIXME - schedule_timeout */
+ for (count = 0; (serial_in(up, UART_LSR) & UART_LSR_DR) &&
+ (count < 256); count++)
+ serial_in(up, UART_RX);
+ serial_out(up, UART_FCR, old_fcr);
+ serial_out(up, UART_MCR, old_mcr);
+ serial_out(up, UART_LCR, UART_LCR_CONF_MODE_A);
+ serial_dl_write(up, old_dl);
+ serial_out(up, UART_LCR, old_lcr);
+
+ return count;
+}
+
+/*
+ * Read UART ID using the divisor method - set DLL and DLM to zero
+ * and the revision will be in DLL and device type in DLM. We
+ * preserve the device state across this.
+ */
+static unsigned int autoconfig_read_divisor_id(struct uart_8250_port *p)
+{
+ unsigned char old_dll, old_dlm, old_lcr;
+ unsigned int id;
+
+ old_lcr = serial_in(p, UART_LCR);
+ serial_out(p, UART_LCR, UART_LCR_CONF_MODE_A);
+
+ old_dll = serial_in(p, UART_DLL);
+ old_dlm = serial_in(p, UART_DLM);
+
+ serial_out(p, UART_DLL, 0);
+ serial_out(p, UART_DLM, 0);
+
+ id = serial_in(p, UART_DLL) | serial_in(p, UART_DLM) << 8;
+
+ serial_out(p, UART_DLL, old_dll);
+ serial_out(p, UART_DLM, old_dlm);
+ serial_out(p, UART_LCR, old_lcr);
+
+ return id;
+}
+
+/*
+ * This is a helper routine to autodetect StarTech/Exar/Oxsemi UARTs.
+ * When this function is called we know it is at least a StarTech
+ * 16650 V2, but it might be one of several StarTech UARTs, or one of
+ * its clones. (We treat the broken original StarTech 16650 V1 as a
+ * 16550, and why not? Startech doesn't seem to even acknowledge its
+ * existence.)
+ *
+ * What evil have men's minds wrought...
+ */
+static void autoconfig_has_efr(struct uart_8250_port *up)
+{
+ unsigned int id1, id2, id3, rev;
+
+ /*
+ * Everything with an EFR has SLEEP
+ */
+ up->capabilities |= UART_CAP_EFR | UART_CAP_SLEEP;
+
+ /*
+ * First we check to see if it's an Oxford Semiconductor UART.
+ *
+ * We have to do this here because some non-National
+ * Semiconductor clone chips lock up if you try writing to the
+ * LSR register (which serial_icr_read does).
+ */
+
+ /*
+ * Check for Oxford Semiconductor 16C950.
+ *
+ * EFR [4] must be set else this test fails.
+ *
+ * This shouldn't be necessary, but Mike Hudson (Exoray@isys.ca)
+ * claims that it's needed for 952 dual UART's (which are not
+ * recommended for new designs).
+ */
+ up->acr = 0;
+ serial_out(up, UART_LCR, UART_LCR_CONF_MODE_B);
+ serial_out(up, UART_EFR, UART_EFR_ECB);
+ serial_out(up, UART_LCR, 0x00);
+ id1 = serial_icr_read(up, UART_ID1);
+ id2 = serial_icr_read(up, UART_ID2);
+ id3 = serial_icr_read(up, UART_ID3);
+ rev = serial_icr_read(up, UART_REV);
+
+ DEBUG_AUTOCONF("950id=%02x:%02x:%02x:%02x ", id1, id2, id3, rev);
+
+ if (id1 == 0x16 && id2 == 0xC9 &&
+ (id3 == 0x50 || id3 == 0x52 || id3 == 0x54)) {
+ up->port.type = PORT_16C950;
+
+ /*
+		 * Enable the workaround for the Oxford Semiconductor 952 rev B
+		 * chip, which seriously miscalculates baud rates when
+		 * DLL is 0.
+ */
+ if (id3 == 0x52 && rev == 0x01)
+ up->bugs |= UART_BUG_QUOT;
+ return;
+ }
+
+ /*
+ * We check for a XR16C850 by setting DLL and DLM to 0, and then
+ * reading back DLL and DLM. The chip type depends on the DLM
+ * value read back:
+ * 0x10 - XR16C850 and the DLL contains the chip revision.
+ * 0x12 - XR16C2850.
+ * 0x14 - XR16C854.
+ */
+ id1 = autoconfig_read_divisor_id(up);
+ DEBUG_AUTOCONF("850id=%04x ", id1);
+
+ id2 = id1 >> 8;
+ if (id2 == 0x10 || id2 == 0x12 || id2 == 0x14) {
+ up->port.type = PORT_16850;
+ return;
+ }
+
+ /*
+ * It wasn't an XR16C850.
+ *
+ * We distinguish between the '654 and the '650 by counting
+ * how many bytes are in the FIFO. I'm using this for now,
+ * since that's the technique that was sent to me in the
+ * serial driver update, but I'm not convinced this works.
+ * I've had problems doing this in the past. -TYT
+ */
+ if (size_fifo(up) == 64)
+ up->port.type = PORT_16654;
+ else
+ up->port.type = PORT_16650V2;
+}
+
+/*
+ * We detected a chip without a FIFO. Only two fall into
+ * this category - the original 8250 and the 16450. The
+ * 16450 has a scratch register (accessible with LCR=0)
+ */
+static void autoconfig_8250(struct uart_8250_port *up)
+{
+ unsigned char scratch, status1, status2;
+
+ up->port.type = PORT_8250;
+
+ scratch = serial_in(up, UART_SCR);
+ serial_out(up, UART_SCR, 0xa5);
+ status1 = serial_in(up, UART_SCR);
+ serial_out(up, UART_SCR, 0x5a);
+ status2 = serial_in(up, UART_SCR);
+ serial_out(up, UART_SCR, scratch);
+
+ if (status1 == 0xa5 && status2 == 0x5a)
+ up->port.type = PORT_16450;
+}
+
+static int broken_efr(struct uart_8250_port *up)
+{
+ /*
+ * Exar ST16C2550 "A2" devices incorrectly detect as
+ * having an EFR, and report an ID of 0x0201. See
+ * http://linux.derkeiler.com/Mailing-Lists/Kernel/2004-11/4812.html
+ */
+ if (autoconfig_read_divisor_id(up) == 0x0201 && size_fifo(up) == 16)
+ return 1;
+
+ return 0;
+}
+
+static inline int ns16550a_goto_highspeed(struct uart_8250_port *up)
+{
+ unsigned char status;
+
+ status = serial_in(up, 0x04); /* EXCR2 */
+#define PRESL(x) ((x) & 0x30)
+ if (PRESL(status) == 0x10) {
+ /* already in high speed mode */
+ return 0;
+ } else {
+ status &= ~0xB0; /* Disable LOCK, mask out PRESL[01] */
+ status |= 0x10; /* 1.625 divisor for baud_base --> 921600 */
+ serial_out(up, 0x04, status);
+ }
+ return 1;
+}
+
+/*
+ * We know that the chip has FIFOs. Does it have an EFR? The
+ * EFR is located in the same register position as the IIR and
+ * we know the top two bits of the IIR are currently set. The
+ * EFR should contain zero. Try to read the EFR.
+ */
+static void autoconfig_16550a(struct uart_8250_port *up)
+{
+ unsigned char status1, status2;
+ unsigned int iersave;
+
+ up->port.type = PORT_16550A;
+ up->capabilities |= UART_CAP_FIFO;
+
+ /*
+	 * XR17V35x UARTs have an extra divisor register, DLD,
+	 * that gets enabled when DLAB is set, which would cause
+	 * the device to incorrectly match and be assigned port
+	 * type PORT_16650. The EFR for this UART is found at
+	 * offset 0x09. Instead, check the Device ID (DVID)
+	 * register for a 2, 4 or 8 port UART.
+ */
+ if (up->port.flags & UPF_EXAR_EFR) {
+ status1 = serial_in(up, UART_EXAR_DVID);
+ if (status1 == 0x82 || status1 == 0x84 || status1 == 0x88) {
+ DEBUG_AUTOCONF("Exar XR17V35x ");
+ up->port.type = PORT_XR17V35X;
+ up->capabilities |= UART_CAP_AFE | UART_CAP_EFR |
+ UART_CAP_SLEEP;
+
+ return;
+ }
+ }
+
+ /*
+ * Check for presence of the EFR when DLAB is set.
+ * Only ST16C650V1 UARTs pass this test.
+ */
+ serial_out(up, UART_LCR, UART_LCR_CONF_MODE_A);
+ if (serial_in(up, UART_EFR) == 0) {
+ serial_out(up, UART_EFR, 0xA8);
+ if (serial_in(up, UART_EFR) != 0) {
+ DEBUG_AUTOCONF("EFRv1 ");
+ up->port.type = PORT_16650;
+ up->capabilities |= UART_CAP_EFR | UART_CAP_SLEEP;
+ } else {
+ DEBUG_AUTOCONF("Motorola 8xxx DUART ");
+ }
+ serial_out(up, UART_EFR, 0);
+ return;
+ }
+
+ /*
+ * Maybe it requires 0xbf to be written to the LCR.
+ * (other ST16C650V2 UARTs, TI16C752A, etc)
+ */
+ serial_out(up, UART_LCR, UART_LCR_CONF_MODE_B);
+ if (serial_in(up, UART_EFR) == 0 && !broken_efr(up)) {
+ DEBUG_AUTOCONF("EFRv2 ");
+ autoconfig_has_efr(up);
+ return;
+ }
+
+ /*
+ * Check for a National Semiconductor SuperIO chip.
+ * Attempt to switch to bank 2, read the value of the LOOP bit
+ * from EXCR1. Switch back to bank 0, change it in MCR. Then
+ * switch back to bank 2, read it from EXCR1 again and check
+ * it's changed. If so, set baud_base in EXCR2 to 921600. -- dwmw2
+ */
+ serial_out(up, UART_LCR, 0);
+ status1 = serial_in(up, UART_MCR);
+ serial_out(up, UART_LCR, 0xE0);
+ status2 = serial_in(up, 0x02); /* EXCR1 */
+
+ if (!((status2 ^ status1) & UART_MCR_LOOP)) {
+ serial_out(up, UART_LCR, 0);
+ serial_out(up, UART_MCR, status1 ^ UART_MCR_LOOP);
+ serial_out(up, UART_LCR, 0xE0);
+ status2 = serial_in(up, 0x02); /* EXCR1 */
+ serial_out(up, UART_LCR, 0);
+ serial_out(up, UART_MCR, status1);
+
+ if ((status2 ^ status1) & UART_MCR_LOOP) {
+ unsigned short quot;
+
+ serial_out(up, UART_LCR, 0xE0);
+
+ quot = serial_dl_read(up);
+ quot <<= 3;
+
+ if (ns16550a_goto_highspeed(up))
+ serial_dl_write(up, quot);
+
+ serial_out(up, UART_LCR, 0);
+
+ up->port.uartclk = 921600*16;
+ up->port.type = PORT_NS16550A;
+ up->capabilities |= UART_NATSEMI;
+ return;
+ }
+ }
+
+ /*
+ * No EFR. Try to detect a TI16750, which only sets bit 5 of
+	 * the IIR when 64 byte FIFO mode is enabled while DLAB is set.
+ * Try setting it with and without DLAB set. Cheap clones
+ * set bit 5 without DLAB set.
+ */
+ serial_out(up, UART_LCR, 0);
+ serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO | UART_FCR7_64BYTE);
+ status1 = serial_in(up, UART_IIR) >> 5;
+ serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO);
+ serial_out(up, UART_LCR, UART_LCR_CONF_MODE_A);
+ serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO | UART_FCR7_64BYTE);
+ status2 = serial_in(up, UART_IIR) >> 5;
+ serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO);
+ serial_out(up, UART_LCR, 0);
+
+ DEBUG_AUTOCONF("iir1=%d iir2=%d ", status1, status2);
+
+ if (status1 == 6 && status2 == 7) {
+ up->port.type = PORT_16750;
+ up->capabilities |= UART_CAP_AFE | UART_CAP_SLEEP;
+ return;
+ }
+
+ /*
+ * Try writing and reading the UART_IER_UUE bit (b6).
+ * If it works, this is probably one of the Xscale platform's
+ * internal UARTs.
+ * We're going to explicitly set the UUE bit to 0 before
+ * trying to write and read a 1 just to make sure it's not
+	 * already a 1 and maybe locked there before we even start.
+ */
+ iersave = serial_in(up, UART_IER);
+ serial_out(up, UART_IER, iersave & ~UART_IER_UUE);
+ if (!(serial_in(up, UART_IER) & UART_IER_UUE)) {
+ /*
+ * OK it's in a known zero state, try writing and reading
+ * without disturbing the current state of the other bits.
+ */
+ serial_out(up, UART_IER, iersave | UART_IER_UUE);
+ if (serial_in(up, UART_IER) & UART_IER_UUE) {
+ /*
+ * It's an Xscale.
+ * We'll leave the UART_IER_UUE bit set to 1 (enabled).
+ */
+ DEBUG_AUTOCONF("Xscale ");
+ up->port.type = PORT_XSCALE;
+ up->capabilities |= UART_CAP_UUE | UART_CAP_RTOIE;
+ return;
+ }
+ } else {
+ /*
+ * If we got here we couldn't force the IER_UUE bit to 0.
+ * Log it and continue.
+ */
+ DEBUG_AUTOCONF("Couldn't force IER_UUE to 0 ");
+ }
+ serial_out(up, UART_IER, iersave);
+
+ /*
+	 * Exar UARTs have the EFR in a weird location
+ */
+ if (up->port.flags & UPF_EXAR_EFR) {
+ DEBUG_AUTOCONF("Exar XR17D15x ");
+ up->port.type = PORT_XR17D15X;
+ up->capabilities |= UART_CAP_AFE | UART_CAP_EFR |
+ UART_CAP_SLEEP;
+
+ return;
+ }
+
+ /*
+ * We distinguish between 16550A and U6 16550A by counting
+ * how many bytes are in the FIFO.
+ */
+ if (up->port.type == PORT_16550A && size_fifo(up) == 64) {
+ up->port.type = PORT_U6_16550A;
+ up->capabilities |= UART_CAP_AFE;
+ }
+}
+
+/*
+ * This routine is called by rs_init() to initialize a specific serial
+ * port. It determines what type of UART chip this serial port is
+ * using: 8250, 16450, 16550, 16550A. The important question is
+ * whether or not this UART is a 16550A, since that determines
+ * whether we can use its FIFO features.
+ */
+static void autoconfig(struct uart_8250_port *up, unsigned int probeflags)
+{
+ unsigned char status1, scratch, scratch2, scratch3;
+ unsigned char save_lcr, save_mcr;
+ struct uart_port *port = &up->port;
+ unsigned long flags;
+ unsigned int old_capabilities;
+
+ if (!port->iobase && !port->mapbase && !port->membase)
+ return;
+
+ DEBUG_AUTOCONF("ttyS%d: autoconf (0x%04lx, 0x%p): ",
+ serial_index(port), port->iobase, port->membase);
+
+ /*
+ * We really do need global IRQs disabled here - we're going to
+	 * be frobbing the chip's IRQ enable register to see if it exists.
+ */
+ spin_lock_irqsave(&port->lock, flags);
+
+ up->capabilities = 0;
+ up->bugs = 0;
+
+ if (!(port->flags & UPF_BUGGY_UART)) {
+ /*
+ * Do a simple existence test first; if we fail this,
+ * there's no point trying anything else.
+ *
+		 * 0x80 is used as a nonsense port to guard against
+ * false positives due to ISA bus float. The
+ * assumption is that 0x80 is a non-existent port;
+ * which should be safe since include/asm/io.h also
+ * makes this assumption.
+ *
+ * Note: this is safe as long as MCR bit 4 is clear
+ * and the device is in "PC" mode.
+ */
+ scratch = serial_in(up, UART_IER);
+ serial_out(up, UART_IER, 0);
+#ifdef __i386__
+ outb(0xff, 0x080);
+#endif
+ /*
+		 * Mask out the IER[7:4] bits for this test, as some UARTs
+		 * (e.g. TL16C754B) only allow them to be modified when an
+		 * EFR bit is set.
+ */
+ scratch2 = serial_in(up, UART_IER) & 0x0f;
+ serial_out(up, UART_IER, 0x0F);
+#ifdef __i386__
+ outb(0, 0x080);
+#endif
+ scratch3 = serial_in(up, UART_IER) & 0x0f;
+ serial_out(up, UART_IER, scratch);
+ if (scratch2 != 0 || scratch3 != 0x0F) {
+ /*
+ * We failed; there's nothing here
+ */
+ spin_unlock_irqrestore(&port->lock, flags);
+ DEBUG_AUTOCONF("IER test failed (%02x, %02x) ",
+ scratch2, scratch3);
+ goto out;
+ }
+ }
+
+ save_mcr = serial_in(up, UART_MCR);
+ save_lcr = serial_in(up, UART_LCR);
+
+ /*
+ * Check to see if a UART is really there. Certain broken
+ * internal modems based on the Rockwell chipset fail this
+ * test, because they apparently don't implement the loopback
+ * test mode. So this test is skipped on the COM 1 through
+ * COM 4 ports. This *should* be safe, since no board
+ * manufacturer would be stupid enough to design a board
+ * that conflicts with COM 1-4 --- we hope!
+ */
+ if (!(port->flags & UPF_SKIP_TEST)) {
+ serial_out(up, UART_MCR, UART_MCR_LOOP | 0x0A);
+ status1 = serial_in(up, UART_MSR) & 0xF0;
+ serial_out(up, UART_MCR, save_mcr);
+ if (status1 != 0x90) {
+ spin_unlock_irqrestore(&port->lock, flags);
+ DEBUG_AUTOCONF("LOOP test failed (%02x) ",
+ status1);
+ goto out;
+ }
+ }
+
+ /*
+	 * We're pretty sure there's a port here. Let's find out what
+	 * type of port it is. The IIR top two bits allow us to find
+ * out if it's 8250 or 16450, 16550, 16550A or later. This
+ * determines what we test for next.
+ *
+ * We also initialise the EFR (if any) to zero for later. The
+ * EFR occupies the same register location as the FCR and IIR.
+ */
+ serial_out(up, UART_LCR, UART_LCR_CONF_MODE_B);
+ serial_out(up, UART_EFR, 0);
+ serial_out(up, UART_LCR, 0);
+
+ serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO);
+ scratch = serial_in(up, UART_IIR) >> 6;
+
+ switch (scratch) {
+ case 0:
+ autoconfig_8250(up);
+ break;
+ case 1:
+ port->type = PORT_UNKNOWN;
+ break;
+ case 2:
+ port->type = PORT_16550;
+ break;
+ case 3:
+ autoconfig_16550a(up);
+ break;
+ }
+
+#ifdef CONFIG_SERIAL_8250_RSA
+ /*
+ * Only probe for RSA ports if we got the region.
+ */
+ if (port->type == PORT_16550A && probeflags & PROBE_RSA) {
+ int i;
+
+ for (i = 0 ; i < probe_rsa_count; ++i) {
+ if (probe_rsa[i] == port->iobase && __enable_rsa(up)) {
+ port->type = PORT_RSA;
+ break;
+ }
+ }
+ }
+#endif
+
+ serial_out(up, UART_LCR, save_lcr);
+
+ port->fifosize = uart_config[up->port.type].fifo_size;
+ old_capabilities = up->capabilities;
+ up->capabilities = uart_config[port->type].flags;
+ up->tx_loadsz = uart_config[port->type].tx_loadsz;
+
+ if (port->type == PORT_UNKNOWN)
+ goto out_lock;
+
+ /*
+ * Reset the UART.
+ */
+#ifdef CONFIG_SERIAL_8250_RSA
+ if (port->type == PORT_RSA)
+ serial_out(up, UART_RSA_FRR, 0);
+#endif
+ serial_out(up, UART_MCR, save_mcr);
+ serial8250_clear_fifos(up);
+ serial_in(up, UART_RX);
+ if (up->capabilities & UART_CAP_UUE)
+ serial_out(up, UART_IER, UART_IER_UUE);
+ else
+ serial_out(up, UART_IER, 0);
+
+out_lock:
+ spin_unlock_irqrestore(&port->lock, flags);
+ if (up->capabilities != old_capabilities) {
+ printk(KERN_WARNING
+ "ttyS%d: detected caps %08x should be %08x\n",
+ serial_index(port), old_capabilities,
+ up->capabilities);
+ }
+out:
+ DEBUG_AUTOCONF("iir=%d ", scratch);
+ DEBUG_AUTOCONF("type=%s\n", uart_config[port->type].name);
+}
+
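+/*
+ * Probe for the port's IRQ line: enable all interrupt sources, provoke
+ * activity on the port, and let probe_irq_on()/probe_irq_off() report
+ * which interrupt line fired.
+ */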
+static void autoconfig_irq(struct uart_8250_port *up)
+{
+ struct uart_port *port = &up->port;
+ unsigned char save_mcr, save_ier;
+ unsigned char save_ICP = 0;
+ unsigned int ICP = 0;
+ unsigned long irqs;
+ int irq;
+
+ if (port->flags & UPF_FOURPORT) {
+ ICP = (port->iobase & 0xfe0) | 0x1f;
+ save_ICP = inb_p(ICP);
+ outb_p(0x80, ICP);
+ inb_p(ICP);
+ }
+
+	/* Forget any initially masked and pending IRQ */
+ probe_irq_off(probe_irq_on());
+ save_mcr = serial_in(up, UART_MCR);
+ save_ier = serial_in(up, UART_IER);
+ serial_out(up, UART_MCR, UART_MCR_OUT1 | UART_MCR_OUT2);
+
+ irqs = probe_irq_on();
+ serial_out(up, UART_MCR, 0);
+ udelay(10);
+ if (port->flags & UPF_FOURPORT) {
+ serial_out(up, UART_MCR,
+ UART_MCR_DTR | UART_MCR_RTS);
+ } else {
+ serial_out(up, UART_MCR,
+ UART_MCR_DTR | UART_MCR_RTS | UART_MCR_OUT2);
+ }
+ serial_out(up, UART_IER, 0x0f); /* enable all intrs */
+ serial_in(up, UART_LSR);
+ serial_in(up, UART_RX);
+ serial_in(up, UART_IIR);
+ serial_in(up, UART_MSR);
+ serial_out(up, UART_TX, 0xFF);
+ udelay(20);
+ irq = probe_irq_off(irqs);
+
+ serial_out(up, UART_MCR, save_mcr);
+ serial_out(up, UART_IER, save_ier);
+
+ if (port->flags & UPF_FOURPORT)
+ outb_p(save_ICP, ICP);
+
+ port->irq = (irq > 0) ? irq : 0;
+}
+
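+/* Disable the transmitter-empty (THRI) interrupt if it is enabled. */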
+static inline void __stop_tx(struct uart_8250_port *p)
+{
+ if (p->ier & UART_IER_THRI) {
+ p->ier &= ~UART_IER_THRI;
+ serial_out(p, UART_IER, p->ier);
+ }
+}
+
+static void serial8250_stop_tx(struct uart_port *port)
+{
+ struct uart_8250_port *up =
+ container_of(port, struct uart_8250_port, port);
+
+ __stop_tx(up);
+
+ /*
+ * We really want to stop the transmitter from sending.
+ */
+ if (port->type == PORT_16C950) {
+ up->acr |= UART_ACR_TXDIS;
+ serial_icr_write(up, UART_ACR, up->acr);
+ }
+}
+
+static void serial8250_start_tx(struct uart_port *port)
+{
+ struct uart_8250_port *up =
+ container_of(port, struct uart_8250_port, port);
+
+ if (up->dma && !serial8250_tx_dma(up)) {
+ return;
+ } else if (!(up->ier & UART_IER_THRI)) {
+ up->ier |= UART_IER_THRI;
+ serial_port_out(port, UART_IER, up->ier);
+
+ if (up->bugs & UART_BUG_TXEN) {
+ unsigned char lsr;
+ lsr = serial_in(up, UART_LSR);
+ up->lsr_saved_flags |= lsr & LSR_SAVE_FLAGS;
+ if (lsr & UART_LSR_TEMT)
+ serial8250_tx_chars(up);
+ }
+ }
+
+ /*
+ * Re-enable the transmitter if we disabled it.
+ */
+ if (port->type == PORT_16C950 && up->acr & UART_ACR_TXDIS) {
+ up->acr &= ~UART_ACR_TXDIS;
+ serial_icr_write(up, UART_ACR, up->acr);
+ }
+}
+
+static void serial8250_stop_rx(struct uart_port *port)
+{
+ struct uart_8250_port *up =
+ container_of(port, struct uart_8250_port, port);
+
+ up->ier &= ~UART_IER_RLSI;
+ up->port.read_status_mask &= ~UART_LSR_DR;
+ serial_port_out(port, UART_IER, up->ier);
+}
+
+static void serial8250_enable_ms(struct uart_port *port)
+{
+ struct uart_8250_port *up =
+ container_of(port, struct uart_8250_port, port);
+
+ /* no MSR capabilities */
+ if (up->bugs & UART_BUG_NOMSR)
+ return;
+
+ up->ier |= UART_IER_MSI;
+ serial_port_out(port, UART_IER, up->ier);
+}
+
+/*
+ * serial8250_rx_chars: processes characters according to the passed-in
+ * LSR value, and returns the remaining LSR bits not handled
+ * by this Rx routine.
+ */
+unsigned char
+serial8250_rx_chars(struct uart_8250_port *up, unsigned char lsr)
+{
+ struct uart_port *port = &up->port;
+ unsigned char ch;
+ int max_count = 256;
+ char flag;
+
+ do {
+ if (likely(lsr & UART_LSR_DR))
+ ch = serial_in(up, UART_RX);
+ else
+ /*
+			 * Intel 82571 has a Serial Over LAN device that will
+ * set UART_LSR_BI without setting UART_LSR_DR when
+ * it receives a break. To avoid reading from the
+ * receive buffer without UART_LSR_DR bit set, we
+ * just force the read character to be 0
+ */
+ ch = 0;
+
+ flag = TTY_NORMAL;
+ port->icount.rx++;
+
+ lsr |= up->lsr_saved_flags;
+ up->lsr_saved_flags = 0;
+
+ if (unlikely(lsr & UART_LSR_BRK_ERROR_BITS)) {
+ if (lsr & UART_LSR_BI) {
+ lsr &= ~(UART_LSR_FE | UART_LSR_PE);
+ port->icount.brk++;
+ /*
+ * We do the SysRQ and SAK checking
+ * here because otherwise the break
+ * may get masked by ignore_status_mask
+ * or read_status_mask.
+ */
+ if (uart_handle_break(port))
+ goto ignore_char;
+ } else if (lsr & UART_LSR_PE)
+ port->icount.parity++;
+ else if (lsr & UART_LSR_FE)
+ port->icount.frame++;
+ if (lsr & UART_LSR_OE)
+ port->icount.overrun++;
+
+ /*
+ * Mask off conditions which should be ignored.
+ */
+ lsr &= port->read_status_mask;
+
+ if (lsr & UART_LSR_BI) {
+ DEBUG_INTR("handling break....");
+ flag = TTY_BREAK;
+ } else if (lsr & UART_LSR_PE)
+ flag = TTY_PARITY;
+ else if (lsr & UART_LSR_FE)
+ flag = TTY_FRAME;
+ }
+ if (uart_handle_sysrq_char(port, ch))
+ goto ignore_char;
+
+ uart_insert_char(port, lsr, UART_LSR_OE, ch, flag);
+
+ignore_char:
+ lsr = serial_in(up, UART_LSR);
+ } while ((lsr & (UART_LSR_DR | UART_LSR_BI)) && (max_count-- > 0));
+ spin_unlock(&port->lock);
+ tty_flip_buffer_push(&port->state->port);
+ spin_lock(&port->lock);
+ return lsr;
+}
+EXPORT_SYMBOL_GPL(serial8250_rx_chars);
+
+void serial8250_tx_chars(struct uart_8250_port *up)
+{
+ struct uart_port *port = &up->port;
+ struct circ_buf *xmit = &port->state->xmit;
+ int count;
+
+ if (port->x_char) {
+ serial_out(up, UART_TX, port->x_char);
+ port->icount.tx++;
+ port->x_char = 0;
+ return;
+ }
+ if (uart_tx_stopped(port)) {
+ serial8250_stop_tx(port);
+ return;
+ }
+ if (uart_circ_empty(xmit)) {
+ __stop_tx(up);
+ return;
+ }
+
+ count = up->tx_loadsz;
+ do {
+ serial_out(up, UART_TX, xmit->buf[xmit->tail]);
+ xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1);
+ port->icount.tx++;
+ if (uart_circ_empty(xmit))
+ break;
+ if (up->capabilities & UART_CAP_HFIFO) {
+ if ((serial_port_in(port, UART_LSR) & BOTH_EMPTY) !=
+ BOTH_EMPTY)
+ break;
+ }
+ } while (--count > 0);
+
+ if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
+ uart_write_wakeup(port);
+
+ DEBUG_INTR("THRE...");
+
+ if (uart_circ_empty(xmit))
+ __stop_tx(up);
+}
+EXPORT_SYMBOL_GPL(serial8250_tx_chars);
+
+unsigned int serial8250_modem_status(struct uart_8250_port *up)
+{
+ struct uart_port *port = &up->port;
+ unsigned int status = serial_in(up, UART_MSR);
+
+ status |= up->msr_saved_flags;
+ up->msr_saved_flags = 0;
+ if (status & UART_MSR_ANY_DELTA && up->ier & UART_IER_MSI &&
+ port->state != NULL) {
+ if (status & UART_MSR_TERI)
+ port->icount.rng++;
+ if (status & UART_MSR_DDSR)
+ port->icount.dsr++;
+ if (status & UART_MSR_DDCD)
+ uart_handle_dcd_change(port, status & UART_MSR_DCD);
+ if (status & UART_MSR_DCTS)
+ uart_handle_cts_change(port, status & UART_MSR_CTS);
+
+ wake_up_interruptible(&port->state->port.delta_msr_wait);
+ }
+
+ return status;
+}
+EXPORT_SYMBOL_GPL(serial8250_modem_status);
+
+/*
+ * This handles the interrupt from one port.
+ */
+int serial8250_handle_irq(struct uart_port *port, unsigned int iir)
+{
+ unsigned char status;
+ unsigned long flags;
+ struct uart_8250_port *up =
+ container_of(port, struct uart_8250_port, port);
+ int dma_err = 0;
+
+ if (iir & UART_IIR_NO_INT)
+ return 0;
+
+ spin_lock_irqsave(&port->lock, flags);
+
+ status = serial_port_in(port, UART_LSR);
+
+ DEBUG_INTR("status = %x...", status);
+
+ if (status & (UART_LSR_DR | UART_LSR_BI)) {
+ if (up->dma)
+ dma_err = serial8250_rx_dma(up, iir);
+
+ if (!up->dma || dma_err)
+ status = serial8250_rx_chars(up, status);
+ }
+ serial8250_modem_status(up);
+ if (status & UART_LSR_THRE)
+ serial8250_tx_chars(up);
+
+ spin_unlock_irqrestore(&port->lock, flags);
+ return 1;
+}
+EXPORT_SYMBOL_GPL(serial8250_handle_irq);
+
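+/* Default handle_irq hook: read the IIR and service the port. */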
+static int serial8250_default_handle_irq(struct uart_port *port)
+{
+ unsigned int iir = serial_port_in(port, UART_IIR);
+
+ return serial8250_handle_irq(port, iir);
+}
+
+/*
+ * These Exar UARTs have an extra interrupt indicator that could
+ * fire for a few unimplemented interrupts, one of which is a
+ * wakeup event when coming out of sleep. This handler is here
+ * just to be safe, so that these interrupts don't go unhandled.
+ */
+static int exar_handle_irq(struct uart_port *port)
+{
+ unsigned char int0, int1, int2, int3;
+ unsigned int iir = serial_port_in(port, UART_IIR);
+ int ret;
+
+ ret = serial8250_handle_irq(port, iir);
+
+ if ((port->type == PORT_XR17V35X) ||
+ (port->type == PORT_XR17D15X)) {
+ int0 = serial_port_in(port, 0x80);
+ int1 = serial_port_in(port, 0x81);
+ int2 = serial_port_in(port, 0x82);
+ int3 = serial_port_in(port, 0x83);
+ }
+
+ return ret;
+}
+
+/*
+ * This is the serial driver's interrupt routine.
+ *
+ * Arjan thinks the old way was overly complex, so it got simplified.
+ * Alan disagrees, saying that we need the complexity to handle the weird
+ * nature of ISA shared interrupts. (This is a special exception.)
+ *
+ * In order to handle ISA shared interrupts properly, we need to check
+ * that all ports have been serviced, and therefore the ISA interrupt
+ * line has been de-asserted.
+ *
+ * This means we need to loop through all ports, checking that they
+ * don't have an interrupt pending.
+ */
+static irqreturn_t serial8250_interrupt(int irq, void *dev_id)
+{
+ struct irq_info *i = dev_id;
+ struct list_head *l, *end = NULL;
+ int pass_counter = 0, handled = 0;
+
+ DEBUG_INTR("serial8250_interrupt(%d)...", irq);
+
+ spin_lock(&i->lock);
+
+ l = i->head;
+ do {
+ struct uart_8250_port *up;
+ struct uart_port *port;
+
+ up = list_entry(l, struct uart_8250_port, list);
+ port = &up->port;
+
+ if (port->handle_irq(port)) {
+ handled = 1;
+ end = NULL;
+ } else if (end == NULL)
+ end = l;
+
+ l = l->next;
+
+ if (l == i->head && pass_counter++ > PASS_LIMIT) {
+ /* If we hit this, we're dead. */
+ printk_ratelimited(KERN_ERR
+ "serial8250: too much work for irq%d\n", irq);
+ break;
+ }
+ } while (l != end);
+
+ spin_unlock(&i->lock);
+
+ DEBUG_INTR("end.\n");
+
+ return IRQ_RETVAL(handled);
+}
+
+/*
+ * To support ISA shared interrupts, we need to have one interrupt
+ * handler that ensures that the IRQ line has been deasserted
+ * before returning. Failing to do this will result in the IRQ
+ * line being stuck active, and, since ISA irqs are edge triggered,
+ * no more IRQs will be seen.
+ */
+static void serial_do_unlink(struct irq_info *i, struct uart_8250_port *up)
+{
+ spin_lock_irq(&i->lock);
+
+ if (!list_empty(i->head)) {
+ if (i->head == &up->list)
+ i->head = i->head->next;
+ list_del(&up->list);
+ } else {
+ BUG_ON(i->head != &up->list);
+ i->head = NULL;
+ }
+ spin_unlock_irq(&i->lock);
+ /* List empty so throw away the hash node */
+ if (i->head == NULL) {
+ hlist_del(&i->node);
+ kfree(i);
+ }
+}
+
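+/*
+ * Link a port into the per-IRQ list, allocating the irq_info node and
+ * requesting the interrupt the first time this IRQ is used.
+ */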
+static int serial_link_irq_chain(struct uart_8250_port *up)
+{
+ struct hlist_head *h;
+ struct hlist_node *n;
+ struct irq_info *i;
+ int ret, irq_flags = up->port.flags & UPF_SHARE_IRQ ? IRQF_SHARED : 0;
+
+ mutex_lock(&hash_mutex);
+
+ h = &irq_lists[up->port.irq % NR_IRQ_HASH];
+
+ hlist_for_each(n, h) {
+ i = hlist_entry(n, struct irq_info, node);
+ if (i->irq == up->port.irq)
+ break;
+ }
+
+ if (n == NULL) {
+ i = kzalloc(sizeof(struct irq_info), GFP_KERNEL);
+ if (i == NULL) {
+ mutex_unlock(&hash_mutex);
+ return -ENOMEM;
+ }
+ spin_lock_init(&i->lock);
+ i->irq = up->port.irq;
+ hlist_add_head(&i->node, h);
+ }
+ mutex_unlock(&hash_mutex);
+
+ spin_lock_irq(&i->lock);
+
+ if (i->head) {
+ list_add(&up->list, i->head);
+ spin_unlock_irq(&i->lock);
+
+ ret = 0;
+ } else {
+ INIT_LIST_HEAD(&up->list);
+ i->head = &up->list;
+ spin_unlock_irq(&i->lock);
+ irq_flags |= up->port.irqflags;
+ ret = request_irq(up->port.irq, serial8250_interrupt,
+ irq_flags, "serial", i);
+ if (ret < 0)
+ serial_do_unlink(i, up);
+ }
+
+ return ret;
+}
+
+static void serial_unlink_irq_chain(struct uart_8250_port *up)
+{
+ struct irq_info *i;
+ struct hlist_node *n;
+ struct hlist_head *h;
+
+ mutex_lock(&hash_mutex);
+
+ h = &irq_lists[up->port.irq % NR_IRQ_HASH];
+
+ hlist_for_each(n, h) {
+ i = hlist_entry(n, struct irq_info, node);
+ if (i->irq == up->port.irq)
+ break;
+ }
+
+ BUG_ON(n == NULL);
+ BUG_ON(i->head == NULL);
+
+ if (list_empty(i->head))
+ free_irq(up->port.irq, i);
+
+ serial_do_unlink(i, up);
+ mutex_unlock(&hash_mutex);
+}
+
+/*
+ * This function is used to handle ports that do not have an
+ * interrupt. This doesn't work very well for 16450s, but gives
+ * barely passable results for a 16550A (although at the expense
+ * of much CPU overhead).
+ */
+static void serial8250_timeout(unsigned long data)
+{
+ struct uart_8250_port *up = (struct uart_8250_port *)data;
+
+ up->port.handle_irq(&up->port);
+ mod_timer(&up->timer, jiffies + uart_poll_timeout(&up->port));
+}
+
+static void serial8250_backup_timeout(unsigned long data)
+{
+ struct uart_8250_port *up = (struct uart_8250_port *)data;
+ unsigned int iir, ier = 0, lsr;
+ unsigned long flags;
+
+ spin_lock_irqsave(&up->port.lock, flags);
+
+ /*
+ * Must disable interrupts or else we risk racing with the interrupt
+ * based handler.
+ */
+ if (up->port.irq) {
+ ier = serial_in(up, UART_IER);
+ serial_out(up, UART_IER, 0);
+ }
+
+ iir = serial_in(up, UART_IIR);
+
+ /*
+ * This should be a safe test for anyone who doesn't trust the
+ * IIR bits on their UART, but it's specifically designed for
+ * the "Diva" UART used on the management processor on many HP
+ * ia64 and parisc boxes.
+ */
+ lsr = serial_in(up, UART_LSR);
+ up->lsr_saved_flags |= lsr & LSR_SAVE_FLAGS;
+ if ((iir & UART_IIR_NO_INT) && (up->ier & UART_IER_THRI) &&
+ (!uart_circ_empty(&up->port.state->xmit) || up->port.x_char) &&
+ (lsr & UART_LSR_THRE)) {
+ iir &= ~(UART_IIR_ID | UART_IIR_NO_INT);
+ iir |= UART_IIR_THRI;
+ }
+
+ if (!(iir & UART_IIR_NO_INT))
+ serial8250_tx_chars(up);
+
+ if (up->port.irq)
+ serial_out(up, UART_IER, ier);
+
+ spin_unlock_irqrestore(&up->port.lock, flags);
+
+ /* Standard timer interval plus 0.2s to keep the port running */
+ mod_timer(&up->timer,
+ jiffies + uart_poll_timeout(&up->port) + HZ / 5);
+}
+
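+/*
+ * Report TIOCSER_TEMT once both the transmit shift register and the
+ * transmit holding register are empty.
+ */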
+static unsigned int serial8250_tx_empty(struct uart_port *port)
+{
+ struct uart_8250_port *up =
+ container_of(port, struct uart_8250_port, port);
+ unsigned long flags;
+ unsigned int lsr;
+
+ spin_lock_irqsave(&port->lock, flags);
+ lsr = serial_port_in(port, UART_LSR);
+ up->lsr_saved_flags |= lsr & LSR_SAVE_FLAGS;
+ spin_unlock_irqrestore(&port->lock, flags);
+
+ return (lsr & BOTH_EMPTY) == BOTH_EMPTY ? TIOCSER_TEMT : 0;
+}
+
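+/* Translate the current MSR contents into TIOCM_* modem-control flags. */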
+static unsigned int serial8250_get_mctrl(struct uart_port *port)
+{
+ struct uart_8250_port *up =
+ container_of(port, struct uart_8250_port, port);
+ unsigned int status;
+ unsigned int ret;
+
+ status = serial8250_modem_status(up);
+
+ ret = 0;
+ if (status & UART_MSR_DCD)
+ ret |= TIOCM_CAR;
+ if (status & UART_MSR_RI)
+ ret |= TIOCM_RNG;
+ if (status & UART_MSR_DSR)
+ ret |= TIOCM_DSR;
+ if (status & UART_MSR_CTS)
+ ret |= TIOCM_CTS;
+ return ret;
+}
+
+static void serial8250_set_mctrl(struct uart_port *port, unsigned int mctrl)
+{
+ struct uart_8250_port *up =
+ container_of(port, struct uart_8250_port, port);
+ unsigned char mcr = 0;
+
+ if (mctrl & TIOCM_RTS)
+ mcr |= UART_MCR_RTS;
+ if (mctrl & TIOCM_DTR)
+ mcr |= UART_MCR_DTR;
+ if (mctrl & TIOCM_OUT1)
+ mcr |= UART_MCR_OUT1;
+ if (mctrl & TIOCM_OUT2)
+ mcr |= UART_MCR_OUT2;
+ if (mctrl & TIOCM_LOOP)
+ mcr |= UART_MCR_LOOP;
+
+ mcr = (mcr & up->mcr_mask) | up->mcr_force | up->mcr;
+
+ serial_port_out(port, UART_MCR, mcr);
+}
+
+static void serial8250_break_ctl(struct uart_port *port, int break_state)
+{
+ struct uart_8250_port *up =
+ container_of(port, struct uart_8250_port, port);
+ unsigned long flags;
+
+ spin_lock_irqsave(&port->lock, flags);
+ if (break_state == -1)
+ up->lcr |= UART_LCR_SBC;
+ else
+ up->lcr &= ~UART_LCR_SBC;
+ serial_port_out(port, UART_LCR, up->lcr);
+ spin_unlock_irqrestore(&port->lock, flags);
+}
+
+/*
+ * Wait for transmitter & holding register to empty
+ */
+static void wait_for_xmitr(struct uart_8250_port *up, int bits)
+{
+ unsigned int status, tmout = 10000;
+
+ /* Wait up to 10ms for the character(s) to be sent. */
+ for (;;) {
+ status = serial_in(up, UART_LSR);
+
+ up->lsr_saved_flags |= status & LSR_SAVE_FLAGS;
+
+ if ((status & bits) == bits)
+ break;
+ if (--tmout == 0)
+ break;
+ udelay(1);
+ }
+
+ /* Wait up to 1s for flow control if necessary */
+ if (up->port.flags & UPF_CONS_FLOW) {
+ unsigned int tmout;
+ for (tmout = 1000000; tmout; tmout--) {
+ unsigned int msr = serial_in(up, UART_MSR);
+ up->msr_saved_flags |= msr & MSR_SAVE_FLAGS;
+ if (msr & UART_MSR_CTS)
+ break;
+ udelay(1);
+ touch_nmi_watchdog();
+ }
+ }
+}
+
+#ifdef CONFIG_CONSOLE_POLL
+/*
+ * Console polling routines for writing and reading from the UART while
+ * in an interrupt or debug context.
+ */
+
+static int serial8250_get_poll_char(struct uart_port *port)
+{
+ unsigned char lsr = serial_port_in(port, UART_LSR);
+
+ if (!(lsr & UART_LSR_DR))
+ return NO_POLL_CHAR;
+
+ return serial_port_in(port, UART_RX);
+}
+
+
+static void serial8250_put_poll_char(struct uart_port *port,
+ unsigned char c)
+{
+ unsigned int ier;
+ struct uart_8250_port *up =
+ container_of(port, struct uart_8250_port, port);
+
+ /*
+ * First save the IER then disable the interrupts
+ */
+ ier = serial_port_in(port, UART_IER);
+ if (up->capabilities & UART_CAP_UUE)
+ serial_port_out(port, UART_IER, UART_IER_UUE);
+ else
+ serial_port_out(port, UART_IER, 0);
+
+ wait_for_xmitr(up, BOTH_EMPTY);
+ /*
+ * Send the character out.
+	 * If an LF, also send a CR...
+ */
+ serial_port_out(port, UART_TX, c);
+ if (c == 10) {
+ wait_for_xmitr(up, BOTH_EMPTY);
+ serial_port_out(port, UART_TX, 13);
+ }
+
+ /*
+ * Finally, wait for transmitter to become empty
+ * and restore the IER
+ */
+ wait_for_xmitr(up, BOTH_EMPTY);
+ serial_port_out(port, UART_IER, ier);
+}
+
+#endif /* CONFIG_CONSOLE_POLL */
+
+static int serial8250_startup(struct uart_port *port)
+{
+ struct uart_8250_port *up =
+ container_of(port, struct uart_8250_port, port);
+ unsigned long flags;
+ unsigned char lsr, iir;
+ int retval;
+
+ if (port->type == PORT_8250_CIR)
+ return -ENODEV;
+
+ if (!port->fifosize)
+ port->fifosize = uart_config[port->type].fifo_size;
+ if (!up->tx_loadsz)
+ up->tx_loadsz = uart_config[port->type].tx_loadsz;
+ if (!up->capabilities)
+ up->capabilities = uart_config[port->type].flags;
+ up->mcr = 0;
+
+ if (port->iotype != up->cur_iotype)
+ set_io_from_upio(port);
+
+ if (port->type == PORT_16C950) {
+ /* Wake up and initialize UART */
+ up->acr = 0;
+ serial_port_out(port, UART_LCR, UART_LCR_CONF_MODE_B);
+ serial_port_out(port, UART_EFR, UART_EFR_ECB);
+ serial_port_out(port, UART_IER, 0);
+ serial_port_out(port, UART_LCR, 0);
+ serial_icr_write(up, UART_CSR, 0); /* Reset the UART */
+ serial_port_out(port, UART_LCR, UART_LCR_CONF_MODE_B);
+ serial_port_out(port, UART_EFR, UART_EFR_ECB);
+ serial_port_out(port, UART_LCR, 0);
+ }
+
+#ifdef CONFIG_SERIAL_8250_RSA
+ /*
+ * If this is an RSA port, see if we can kick it up to the
+ * higher speed clock.
+ */
+ enable_rsa(up);
+#endif
+
+ /*
+ * Clear the FIFO buffers and disable them.
+ * (they will be reenabled in set_termios())
+ */
+ serial8250_clear_fifos(up);
+
+ /*
+ * Clear the interrupt registers.
+ */
+ serial_port_in(port, UART_LSR);
+ serial_port_in(port, UART_RX);
+ serial_port_in(port, UART_IIR);
+ serial_port_in(port, UART_MSR);
+
+ /*
+ * At this point, there's no way the LSR could still be 0xff;
+ * if it is, then bail out, because there's likely no UART
+ * here.
+ */
+ if (!(port->flags & UPF_BUGGY_UART) &&
+ (serial_port_in(port, UART_LSR) == 0xff)) {
+ printk_ratelimited(KERN_INFO "ttyS%d: LSR safety check engaged!\n",
+ serial_index(port));
+ return -ENODEV;
+ }
+
+ /*
+	 * For an XR16C850, we need to set the trigger levels.
+ */
+ if (port->type == PORT_16850) {
+ unsigned char fctr;
+
+ serial_out(up, UART_LCR, UART_LCR_CONF_MODE_B);
+
+ fctr = serial_in(up, UART_FCTR) & ~(UART_FCTR_RX|UART_FCTR_TX);
+ serial_port_out(port, UART_FCTR,
+ fctr | UART_FCTR_TRGD | UART_FCTR_RX);
+ serial_port_out(port, UART_TRG, UART_TRG_96);
+ serial_port_out(port, UART_FCTR,
+ fctr | UART_FCTR_TRGD | UART_FCTR_TX);
+ serial_port_out(port, UART_TRG, UART_TRG_96);
+
+ serial_port_out(port, UART_LCR, 0);
+ }
+
+ if (port->irq) {
+ unsigned char iir1;
+ /*
+ * Test for UARTs that do not reassert THRE when the
+ * transmitter is idle and the interrupt has already
+ * been cleared. Real 16550s should always reassert
+ * this interrupt whenever the transmitter is idle and
+ * the interrupt is enabled. Delays are necessary to
+ * allow register changes to become visible.
+ */
+ spin_lock_irqsave(&port->lock, flags);
+ if (up->port.irqflags & IRQF_SHARED)
+ disable_irq_nosync(port->irq);
+
+ wait_for_xmitr(up, UART_LSR_THRE);
+ serial_port_out_sync(port, UART_IER, UART_IER_THRI);
+ udelay(1); /* allow THRE to set */
+ iir1 = serial_port_in(port, UART_IIR);
+ serial_port_out(port, UART_IER, 0);
+ serial_port_out_sync(port, UART_IER, UART_IER_THRI);
+ udelay(1); /* allow a working UART time to re-assert THRE */
+ iir = serial_port_in(port, UART_IIR);
+ serial_port_out(port, UART_IER, 0);
+
+ if (port->irqflags & IRQF_SHARED)
+ enable_irq(port->irq);
+ spin_unlock_irqrestore(&port->lock, flags);
+
+ /*
+ * If the interrupt is not reasserted, or we otherwise
+ * don't trust the iir, setup a timer to kick the UART
+ * on a regular basis.
+ */
+ if ((!(iir1 & UART_IIR_NO_INT) && (iir & UART_IIR_NO_INT)) ||
+ up->port.flags & UPF_BUG_THRE) {
+ up->bugs |= UART_BUG_THRE;
+ pr_debug("ttyS%d - using backup timer\n",
+ serial_index(port));
+ }
+ }
+
+ /*
+ * The above check will only give an accurate result the first time
+	 * the port is opened, so this value needs to be preserved.
+ */
+ if (up->bugs & UART_BUG_THRE) {
+ up->timer.function = serial8250_backup_timeout;
+ up->timer.data = (unsigned long)up;
+ mod_timer(&up->timer, jiffies +
+ uart_poll_timeout(port) + HZ / 5);
+ }
+
+ /*
+ * If the "interrupt" for this port doesn't correspond with any
+ * hardware interrupt, we use a timer-based system. The original
+ * driver used to do this with IRQ0.
+ */
+ if (!port->irq) {
+ up->timer.data = (unsigned long)up;
+ mod_timer(&up->timer, jiffies + uart_poll_timeout(port));
+ } else {
+ retval = serial_link_irq_chain(up);
+ if (retval)
+ return retval;
+ }
+
+ /*
+ * Now, initialize the UART
+ */
+ serial_port_out(port, UART_LCR, UART_LCR_WLEN8);
+
+ spin_lock_irqsave(&port->lock, flags);
+ if (up->port.flags & UPF_FOURPORT) {
+ if (!up->port.irq)
+ up->port.mctrl |= TIOCM_OUT1;
+ } else
+ /*
+		 * Most PC UARTs need OUT2 raised to enable interrupts.
+ */
+ if (port->irq)
+ up->port.mctrl |= TIOCM_OUT2;
+
+ serial8250_set_mctrl(port, port->mctrl);
+
+	/* Serial over LAN (SoL) hack:
+	   Intel 8257x Gigabit Ethernet chips have a
+	   16550 emulation, to be used for Serial over LAN.
+	   Those chips take longer than a normal serial
+	   device to signal that transmit data was queued,
+	   so the TX irq test below generally fails for them.
+	   One solution would be to delay the reading of
+	   IIR. However, this is not reliable, since the
+	   timeout is variable. So, let's just not run the
+	   test; this way, we'll never enable UART_BUG_TXEN.
+ */
+ if (skip_txen_test || up->port.flags & UPF_NO_TXEN_TEST)
+ goto dont_test_tx_en;
+
+ /*
+ * Do a quick test to see if we receive an
+ * interrupt when we enable the TX irq.
+ */
+ serial_port_out(port, UART_IER, UART_IER_THRI);
+ lsr = serial_port_in(port, UART_LSR);
+ iir = serial_port_in(port, UART_IIR);
+ serial_port_out(port, UART_IER, 0);
+
+ if (lsr & UART_LSR_TEMT && iir & UART_IIR_NO_INT) {
+ if (!(up->bugs & UART_BUG_TXEN)) {
+ up->bugs |= UART_BUG_TXEN;
+ pr_debug("ttyS%d - enabling bad tx status workarounds\n",
+ serial_index(port));
+ }
+ } else {
+ up->bugs &= ~UART_BUG_TXEN;
+ }
+
+dont_test_tx_en:
+ spin_unlock_irqrestore(&port->lock, flags);
+
+ /*
+ * Clear the interrupt registers again for luck, and clear the
+ * saved flags to avoid getting false values from polling
+ * routines or the previous session.
+ */
+ serial_port_in(port, UART_LSR);
+ serial_port_in(port, UART_RX);
+ serial_port_in(port, UART_IIR);
+ serial_port_in(port, UART_MSR);
+ up->lsr_saved_flags = 0;
+ up->msr_saved_flags = 0;
+
+ /*
+ * Request DMA channels for both RX and TX.
+ */
+ if (up->dma) {
+ retval = serial8250_request_dma(up);
+ if (retval) {
+ pr_warn_ratelimited("ttyS%d - failed to request DMA\n",
+ serial_index(port));
+ up->dma = NULL;
+ }
+ }
+
+ /*
+ * Finally, enable interrupts. Note: Modem status interrupts
+ * are set via set_termios(), which will be occurring imminently
+ * anyway, so we don't enable them here.
+ */
+ up->ier = UART_IER_RLSI | UART_IER_RDI;
+ serial_port_out(port, UART_IER, up->ier);
+
+ if (port->flags & UPF_FOURPORT) {
+ unsigned int icp;
+ /*
+ * Enable interrupts on the AST Fourport board
+ */
+ icp = (port->iobase & 0xfe0) | 0x01f;
+ outb_p(0x80, icp);
+ inb_p(icp);
+ }
+
+ return 0;
+}
+
+static void serial8250_shutdown(struct uart_port *port)
+{
+ struct uart_8250_port *up =
+ container_of(port, struct uart_8250_port, port);
+ unsigned long flags;
+
+ /*
+ * Disable interrupts from this port
+ */
+ up->ier = 0;
+ serial_port_out(port, UART_IER, 0);
+
+ if (up->dma)
+ serial8250_release_dma(up);
+
+ spin_lock_irqsave(&port->lock, flags);
+ if (port->flags & UPF_FOURPORT) {
+ /* reset interrupts on the AST Fourport board */
+ inb((port->iobase & 0xfe0) | 0x1f);
+ port->mctrl |= TIOCM_OUT1;
+ } else
+ port->mctrl &= ~TIOCM_OUT2;
+
+ serial8250_set_mctrl(port, port->mctrl);
+ spin_unlock_irqrestore(&port->lock, flags);
+
+ /*
+ * Disable break condition and FIFOs
+ */
+ serial_port_out(port, UART_LCR,
+ serial_port_in(port, UART_LCR) & ~UART_LCR_SBC);
+ serial8250_clear_fifos(up);
+
+#ifdef CONFIG_SERIAL_8250_RSA
+ /*
+ * Reset the RSA board back to 115kbps compat mode.
+ */
+ disable_rsa(up);
+#endif
+
+ /*
+ * Read data port to reset things, and then unlink from
+ * the IRQ chain.
+ */
+ serial_port_in(port, UART_RX);
+
+ del_timer_sync(&up->timer);
+ up->timer.function = serial8250_timeout;
+ if (port->irq)
+ serial_unlink_irq_chain(up);
+}
+
+static unsigned int serial8250_get_divisor(struct uart_port *port, unsigned int baud)
+{
+ unsigned int quot;
+
+ /*
+ * Handle magic divisors for baud rates above baud_base on
+ * SMSC SuperIO chips.
+ */
+ if ((port->flags & UPF_MAGIC_MULTIPLIER) &&
+ baud == (port->uartclk/4))
+ quot = 0x8001;
+ else if ((port->flags & UPF_MAGIC_MULTIPLIER) &&
+ baud == (port->uartclk/8))
+ quot = 0x8002;
+ else
+ quot = uart_get_divisor(port, baud);
+
+ return quot;
+}
+
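+/*
+ * serial8250_do_set_termios - default termios handling: computes the
+ * divisor, programs LCR/FCR/IER/MCR accordingly and updates the
+ * per-port timeout.
+ */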
+void
+serial8250_do_set_termios(struct uart_port *port, struct ktermios *termios,
+ struct ktermios *old)
+{
+ struct uart_8250_port *up =
+ container_of(port, struct uart_8250_port, port);
+ unsigned char cval, fcr = 0;
+ unsigned long flags;
+ unsigned int baud, quot;
+ int fifo_bug = 0;
+
+ switch (termios->c_cflag & CSIZE) {
+ case CS5:
+ cval = UART_LCR_WLEN5;
+ break;
+ case CS6:
+ cval = UART_LCR_WLEN6;
+ break;
+ case CS7:
+ cval = UART_LCR_WLEN7;
+ break;
+ default:
+ case CS8:
+ cval = UART_LCR_WLEN8;
+ break;
+ }
+
+ if (termios->c_cflag & CSTOPB)
+ cval |= UART_LCR_STOP;
+ if (termios->c_cflag & PARENB) {
+ cval |= UART_LCR_PARITY;
+ if (up->bugs & UART_BUG_PARITY)
+ fifo_bug = 1;
+ }
+ if (!(termios->c_cflag & PARODD))
+ cval |= UART_LCR_EPAR;
+#ifdef CMSPAR
+ if (termios->c_cflag & CMSPAR)
+ cval |= UART_LCR_SPAR;
+#endif
+
+ /*
+ * Ask the core to calculate the divisor for us.
+ */
+ baud = uart_get_baud_rate(port, termios, old,
+ port->uartclk / 16 / 0xffff,
+ port->uartclk / 16);
+ quot = serial8250_get_divisor(port, baud);
+
+ /*
+ * Oxford Semi 952 rev B workaround
+ */
+ if (up->bugs & UART_BUG_QUOT && (quot & 0xff) == 0)
+ quot++;
+
+ if (up->capabilities & UART_CAP_FIFO && port->fifosize > 1) {
+ fcr = uart_config[port->type].fcr;
+ if (baud < 2400 || fifo_bug) {
+ fcr &= ~UART_FCR_TRIGGER_MASK;
+ fcr |= UART_FCR_TRIGGER_1;
+ }
+ }
+
+ /*
+ * MCR-based auto flow control. When AFE is enabled, RTS will be
+ * deasserted when the receive FIFO contains more characters than
+ * the trigger, or the MCR RTS bit is cleared. In the case where
+ * the remote UART is not using CTS auto flow control, we must
+ * have sufficient FIFO entries for the latency of the remote
+ * UART to respond. IOW, at least 32 bytes of FIFO.
+ */
+ if (up->capabilities & UART_CAP_AFE && port->fifosize >= 32) {
+ up->mcr &= ~UART_MCR_AFE;
+ if (termios->c_cflag & CRTSCTS)
+ up->mcr |= UART_MCR_AFE;
+ }
+
+ /*
+ * Ok, we're now changing the port state. Do it with
+ * interrupts disabled.
+ */
+ spin_lock_irqsave(&port->lock, flags);
+
+ /*
+ * Update the per-port timeout.
+ */
+ uart_update_timeout(port, termios->c_cflag, baud);
+
+ port->read_status_mask = UART_LSR_OE | UART_LSR_THRE | UART_LSR_DR;
+ if (termios->c_iflag & INPCK)
+ port->read_status_mask |= UART_LSR_FE | UART_LSR_PE;
+ if (termios->c_iflag & (BRKINT | PARMRK))
+ port->read_status_mask |= UART_LSR_BI;
+
+ /*
+	 * Characters to ignore
+ */
+ port->ignore_status_mask = 0;
+ if (termios->c_iflag & IGNPAR)
+ port->ignore_status_mask |= UART_LSR_PE | UART_LSR_FE;
+ if (termios->c_iflag & IGNBRK) {
+ port->ignore_status_mask |= UART_LSR_BI;
+ /*
+ * If we're ignoring parity and break indicators,
+ * ignore overruns too (for real raw support).
+ */
+ if (termios->c_iflag & IGNPAR)
+ port->ignore_status_mask |= UART_LSR_OE;
+ }
+
+ /*
+ * ignore all characters if CREAD is not set
+ */
+ if ((termios->c_cflag & CREAD) == 0)
+ port->ignore_status_mask |= UART_LSR_DR;
+
+ /*
+ * CTS flow control flag and modem status interrupts
+ */
+ up->ier &= ~UART_IER_MSI;
+ if (!(up->bugs & UART_BUG_NOMSR) &&
+ UART_ENABLE_MS(&up->port, termios->c_cflag))
+ up->ier |= UART_IER_MSI;
+ if (up->capabilities & UART_CAP_UUE)
+ up->ier |= UART_IER_UUE;
+ if (up->capabilities & UART_CAP_RTOIE)
+ up->ier |= UART_IER_RTOIE;
+
+ serial_port_out(port, UART_IER, up->ier);
+
+ if (up->capabilities & UART_CAP_EFR) {
+ unsigned char efr = 0;
+ /*
+ * TI16C752/Startech hardware flow control. FIXME:
+ * - TI16C752 requires control thresholds to be set.
+ * - UART_MCR_RTS is ineffective if auto-RTS mode is enabled.
+ */
+ if (termios->c_cflag & CRTSCTS)
+ efr |= UART_EFR_CTS;
+
+ serial_port_out(port, UART_LCR, UART_LCR_CONF_MODE_B);
+ if (port->flags & UPF_EXAR_EFR)
+ serial_port_out(port, UART_XR_EFR, efr);
+ else
+ serial_port_out(port, UART_EFR, efr);
+ }
+
+ /* Workaround to enable 115200 baud on OMAP1510 internal ports */
+ if (is_omap1510_8250(up)) {
+ if (baud == 115200) {
+ quot = 1;
+ serial_port_out(port, UART_OMAP_OSC_12M_SEL, 1);
+ } else
+ serial_port_out(port, UART_OMAP_OSC_12M_SEL, 0);
+ }
+
+ /*
+	 * For NatSemi, switch to bank 2 (not bank 1) to avoid resetting
+	 * EXCR2; otherwise just set DLAB.
+ */
+ if (up->capabilities & UART_NATSEMI)
+ serial_port_out(port, UART_LCR, 0xe0);
+ else
+ serial_port_out(port, UART_LCR, cval | UART_LCR_DLAB);
+
+ serial_dl_write(up, quot);
+
+ /*
+ * LCR DLAB must be set to enable 64-byte FIFO mode. If the FCR
+ * is written without DLAB set, this mode will be disabled.
+ */
+ if (port->type == PORT_16750)
+ serial_port_out(port, UART_FCR, fcr);
+
+ serial_port_out(port, UART_LCR, cval); /* reset DLAB */
+ up->lcr = cval; /* Save LCR */
+ if (port->type != PORT_16750) {
+ /* emulated UARTs (Lucent Venus 167x) need two steps */
+ if (fcr & UART_FCR_ENABLE_FIFO)
+ serial_port_out(port, UART_FCR, UART_FCR_ENABLE_FIFO);
+ serial_port_out(port, UART_FCR, fcr); /* set fcr */
+ }
+ serial8250_set_mctrl(port, port->mctrl);
+ spin_unlock_irqrestore(&port->lock, flags);
+ /* Don't rewrite B0 */
+ if (tty_termios_baud_rate(termios))
+ tty_termios_encode_baud_rate(termios, baud, baud);
+}
+EXPORT_SYMBOL(serial8250_do_set_termios);
+
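+/* Dispatch to a port-specific set_termios hook if one was installed. */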
+static void
+serial8250_set_termios(struct uart_port *port, struct ktermios *termios,
+ struct ktermios *old)
+{
+ if (port->set_termios)
+ port->set_termios(port, termios, old);
+ else
+ serial8250_do_set_termios(port, termios, old);
+}
+
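+/*
+ * Enable modem status interrupts when the PPS line discipline is
+ * selected, so that DCD transitions are reported for hardpps.
+ */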
+static void
+serial8250_set_ldisc(struct uart_port *port, int new)
+{
+ if (new == N_PPS) {
+ port->flags |= UPF_HARDPPS_CD;
+ serial8250_enable_ms(port);
+ } else
+ port->flags &= ~UPF_HARDPPS_CD;
+}
+
+void serial8250_do_pm(struct uart_port *port, unsigned int state,
+ unsigned int oldstate)
+{
+ struct uart_8250_port *p =
+ container_of(port, struct uart_8250_port, port);
+
+ serial8250_set_sleep(p, state != 0);
+}
+EXPORT_SYMBOL(serial8250_do_pm);
+
+static void
+serial8250_pm(struct uart_port *port, unsigned int state,
+ unsigned int oldstate)
+{
+ if (port->pm)
+ port->pm(port, state, oldstate);
+ else
+ serial8250_do_pm(port, state, oldstate);
+}
+
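+/*
+ * Size of the port's register window: eight registers scaled by
+ * regshift, with larger mappings for Alchemy and OMAP1 ports.
+ */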
+static unsigned int serial8250_port_size(struct uart_8250_port *pt)
+{
+ if (pt->port.iotype == UPIO_AU)
+ return 0x1000;
+ if (is_omap1_8250(pt))
+ return 0x16 << pt->port.regshift;
+
+ return 8 << pt->port.regshift;
+}
+
+/*
+ * Resource handling.
+ */
+static int serial8250_request_std_resource(struct uart_8250_port *up)
+{
+ unsigned int size = serial8250_port_size(up);
+ struct uart_port *port = &up->port;
+ int ret = 0;
+
+ switch (port->iotype) {
+ case UPIO_AU:
+ case UPIO_TSI:
+ case UPIO_MEM32:
+ case UPIO_MEM:
+ if (!port->mapbase)
+ break;
+
+ if (!request_mem_region(port->mapbase, size, "serial")) {
+ ret = -EBUSY;
+ break;
+ }
+
+ if (port->flags & UPF_IOREMAP) {
+ port->membase = ioremap_nocache(port->mapbase, size);
+ if (!port->membase) {
+ release_mem_region(port->mapbase, size);
+ ret = -ENOMEM;
+ }
+ }
+ break;
+
+ case UPIO_HUB6:
+ case UPIO_PORT:
+ if (!request_region(port->iobase, size, "serial"))
+ ret = -EBUSY;
+ break;
+ }
+ return ret;
+}
+
+static void serial8250_release_std_resource(struct uart_8250_port *up)
+{
+ unsigned int size = serial8250_port_size(up);
+ struct uart_port *port = &up->port;
+
+ switch (port->iotype) {
+ case UPIO_AU:
+ case UPIO_TSI:
+ case UPIO_MEM32:
+ case UPIO_MEM:
+ if (!port->mapbase)
+ break;
+
+ if (port->flags & UPF_IOREMAP) {
+ iounmap(port->membase);
+ port->membase = NULL;
+ }
+
+ release_mem_region(port->mapbase, size);
+ break;
+
+ case UPIO_HUB6:
+ case UPIO_PORT:
+ release_region(port->iobase, size);
+ break;
+ }
+}
+
+static int serial8250_request_rsa_resource(struct uart_8250_port *up)
+{
+ unsigned long start = UART_RSA_BASE << up->port.regshift;
+ unsigned int size = 8 << up->port.regshift;
+ struct uart_port *port = &up->port;
+ int ret = -EINVAL;
+
+ switch (port->iotype) {
+ case UPIO_HUB6:
+ case UPIO_PORT:
+ start += port->iobase;
+ if (request_region(start, size, "serial-rsa"))
+ ret = 0;
+ else
+ ret = -EBUSY;
+ break;
+ }
+
+ return ret;
+}
+
+static void serial8250_release_rsa_resource(struct uart_8250_port *up)
+{
+ unsigned long offset = UART_RSA_BASE << up->port.regshift;
+ unsigned int size = 8 << up->port.regshift;
+ struct uart_port *port = &up->port;
+
+ switch (port->iotype) {
+ case UPIO_HUB6:
+ case UPIO_PORT:
+ release_region(port->iobase + offset, size);
+ break;
+ }
+}
+
+static void serial8250_release_port(struct uart_port *port)
+{
+ struct uart_8250_port *up =
+ container_of(port, struct uart_8250_port, port);
+
+ serial8250_release_std_resource(up);
+ if (port->type == PORT_RSA)
+ serial8250_release_rsa_resource(up);
+}
+
+static int serial8250_request_port(struct uart_port *port)
+{
+ struct uart_8250_port *up =
+ container_of(port, struct uart_8250_port, port);
+ int ret;
+
+ if (port->type == PORT_8250_CIR)
+ return -ENODEV;
+
+ ret = serial8250_request_std_resource(up);
+ if (ret == 0 && port->type == PORT_RSA) {
+ ret = serial8250_request_rsa_resource(up);
+ if (ret < 0)
+ serial8250_release_std_resource(up);
+ }
+
+ return ret;
+}
+
+static void serial8250_config_port(struct uart_port *port, int flags)
+{
+ struct uart_8250_port *up =
+ container_of(port, struct uart_8250_port, port);
+ int probeflags = PROBE_ANY;
+ int ret;
+
+ if (port->type == PORT_8250_CIR)
+ return;
+
+ /*
+ * Find the region that we can probe for. This in turn
+ * tells us whether we can probe for the type of port.
+ */
+ ret = serial8250_request_std_resource(up);
+ if (ret < 0)
+ return;
+
+ ret = serial8250_request_rsa_resource(up);
+ if (ret < 0)
+ probeflags &= ~PROBE_RSA;
+
+ if (port->iotype != up->cur_iotype)
+ set_io_from_upio(port);
+
+ if (flags & UART_CONFIG_TYPE)
+ autoconfig(up, probeflags);
+
+ /* if access method is AU, it is a 16550 with a quirk */
+ if (port->type == PORT_16550A && port->iotype == UPIO_AU)
+ up->bugs |= UART_BUG_NOMSR;
+
+ if (port->type != PORT_UNKNOWN && flags & UART_CONFIG_IRQ)
+ autoconfig_irq(up);
+
+ if (port->type != PORT_RSA && probeflags & PROBE_RSA)
+ serial8250_release_rsa_resource(up);
+ if (port->type == PORT_UNKNOWN)
+ serial8250_release_std_resource(up);
+
+ /* Fixme: probably not the best place for this */
+ if ((port->type == PORT_XR17V35X) ||
+ (port->type == PORT_XR17D15X))
+ port->handle_irq = exar_handle_irq;
+}
+
+static int
+serial8250_verify_port(struct uart_port *port, struct serial_struct *ser)
+{
+ if (ser->irq >= nr_irqs || ser->irq < 0 ||
+ ser->baud_base < 9600 || ser->type < PORT_UNKNOWN ||
+ ser->type >= ARRAY_SIZE(uart_config) || ser->type == PORT_CIRRUS ||
+ ser->type == PORT_STARTECH)
+ return -EINVAL;
+ return 0;
+}
+
+static const char *
+serial8250_type(struct uart_port *port)
+{
+ int type = port->type;
+
+ if (type >= ARRAY_SIZE(uart_config))
+ type = 0;
+ return uart_config[type].name;
+}
+
+static struct uart_ops serial8250_pops = {
+ .tx_empty = serial8250_tx_empty,
+ .set_mctrl = serial8250_set_mctrl,
+ .get_mctrl = serial8250_get_mctrl,
+ .stop_tx = serial8250_stop_tx,
+ .start_tx = serial8250_start_tx,
+ .stop_rx = serial8250_stop_rx,
+ .enable_ms = serial8250_enable_ms,
+ .break_ctl = serial8250_break_ctl,
+ .startup = serial8250_startup,
+ .shutdown = serial8250_shutdown,
+ .set_termios = serial8250_set_termios,
+ .set_ldisc = serial8250_set_ldisc,
+ .pm = serial8250_pm,
+ .type = serial8250_type,
+ .release_port = serial8250_release_port,
+ .request_port = serial8250_request_port,
+ .config_port = serial8250_config_port,
+ .verify_port = serial8250_verify_port,
+#ifdef CONFIG_CONSOLE_POLL
+ .poll_get_char = serial8250_get_poll_char,
+ .poll_put_char = serial8250_put_poll_char,
+#endif
+};
+
+static struct uart_8250_port serial8250_ports[UART_NR];
+
+static void (*serial8250_isa_config)(int port, struct uart_port *up,
+ unsigned short *capabilities);
+
+void serial8250_set_isa_configurator(
+ void (*v)(int port, struct uart_port *up, unsigned short *capabilities))
+{
+ serial8250_isa_config = v;
+}
+EXPORT_SYMBOL(serial8250_set_isa_configurator);
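For illustration, a board file can hook this configurator to tweak each legacy port as it is set up (or to adjust its capabilities). A minimal sketch, assuming a hypothetical board whose first port runs from a 48 MHz baud clock; all names here are invented and not part of this patch:

	static void my_board_serial_fixup(int port, struct uart_port *up,
					  unsigned short *capabilities)
	{
		if (port == 0)
			up->uartclk = 48000000;	/* assumption: 48 MHz clock */
	}

	static void __init my_board_setup(void)
	{
		serial8250_set_isa_configurator(my_board_serial_fixup);
	}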
+
+static void __init serial8250_isa_init_ports(void)
+{
+ struct uart_8250_port *up;
+ static int first = 1;
+ int i, irqflag = 0;
+
+ if (!first)
+ return;
+ first = 0;
+
+ if (nr_uarts > UART_NR)
+ nr_uarts = UART_NR;
+
+ for (i = 0; i < nr_uarts; i++) {
+ struct uart_8250_port *up = &serial8250_ports[i];
+ struct uart_port *port = &up->port;
+
+ port->line = i;
+ spin_lock_init(&port->lock);
+
+ init_timer(&up->timer);
+ up->timer.function = serial8250_timeout;
+ up->cur_iotype = 0xFF;
+
+ /*
+ * ALPHA_KLUDGE_MCR needs to be killed.
+ */
+ up->mcr_mask = ~ALPHA_KLUDGE_MCR;
+ up->mcr_force = ALPHA_KLUDGE_MCR;
+
+ port->ops = &serial8250_pops;
+ }
+
+ if (share_irqs)
+ irqflag = IRQF_SHARED;
+
+ for (i = 0, up = serial8250_ports;
+ i < ARRAY_SIZE(old_serial_port) && i < nr_uarts;
+ i++, up++) {
+ struct uart_port *port = &up->port;
+
+ port->iobase = old_serial_port[i].port;
+ port->irq = irq_canonicalize(old_serial_port[i].irq);
+ port->irqflags = old_serial_port[i].irqflags;
+ port->uartclk = old_serial_port[i].baud_base * 16;
+ port->flags = old_serial_port[i].flags;
+ port->hub6 = old_serial_port[i].hub6;
+ port->membase = old_serial_port[i].iomem_base;
+ port->iotype = old_serial_port[i].io_type;
+ port->regshift = old_serial_port[i].iomem_reg_shift;
+ set_io_from_upio(port);
+ port->irqflags |= irqflag;
+ if (serial8250_isa_config != NULL)
+ serial8250_isa_config(i, &up->port, &up->capabilities);
+
+ }
+}
+
+static void
+serial8250_init_fixed_type_port(struct uart_8250_port *up, unsigned int type)
+{
+ up->port.type = type;
+ if (!up->port.fifosize)
+ up->port.fifosize = uart_config[type].fifo_size;
+ if (!up->tx_loadsz)
+ up->tx_loadsz = uart_config[type].tx_loadsz;
+ if (!up->capabilities)
+ up->capabilities = uart_config[type].flags;
+}
+
+static void __init
+serial8250_register_ports(struct uart_driver *drv, struct device *dev)
+{
+ int i;
+
+ for (i = 0; i < nr_uarts; i++) {
+ struct uart_8250_port *up = &serial8250_ports[i];
+
+ if (up->port.dev)
+ continue;
+
+ up->port.dev = dev;
+
+ if (up->port.flags & UPF_FIXED_TYPE)
+ serial8250_init_fixed_type_port(up, up->port.type);
+
+ uart_add_one_port(drv, &up->port);
+ }
+}
+
+#ifdef CONFIG_SERIAL_8250_CONSOLE
+
+static void serial8250_console_putchar(struct uart_port *port, int ch)
+{
+ struct uart_8250_port *up =
+ container_of(port, struct uart_8250_port, port);
+
+ wait_for_xmitr(up, UART_LSR_THRE);
+ serial_port_out(port, UART_TX, ch);
+}
+
+/*
+ * Print a string to the serial port trying not to disturb
+ * any possible real use of the port...
+ *
+ * The console_lock must be held when we get here.
+ */
+static void
+serial8250_console_write(struct console *co, const char *s, unsigned int count)
+{
+ struct uart_8250_port *up = &serial8250_ports[co->index];
+ struct uart_port *port = &up->port;
+ unsigned long flags;
+ unsigned int ier;
+ int locked = 1;
+
+ touch_nmi_watchdog();
+
+ local_irq_save(flags);
+ if (port->sysrq) {
+ /* serial8250_handle_irq() already took the lock */
+ locked = 0;
+ } else if (oops_in_progress) {
+ locked = spin_trylock(&port->lock);
+ } else
+ spin_lock(&port->lock);
+
+ /*
+ * First save the IER then disable the interrupts
+ */
+ ier = serial_port_in(port, UART_IER);
+
+ if (up->capabilities & UART_CAP_UUE)
+ serial_port_out(port, UART_IER, UART_IER_UUE);
+ else
+ serial_port_out(port, UART_IER, 0);
+
+ uart_console_write(port, s, count, serial8250_console_putchar);
+
+ /*
+ * Finally, wait for transmitter to become empty
+ * and restore the IER
+ */
+ wait_for_xmitr(up, BOTH_EMPTY);
+ serial_port_out(port, UART_IER, ier);
+
+ /*
+ * The receive handling will happen properly because the
+ * receive ready bit will still be set; it is not cleared
+ * on read. However, modem control will not, we must
+ * call it if we have saved something in the saved flags
+ * while processing with interrupts off.
+ */
+ if (up->msr_saved_flags)
+ serial8250_modem_status(up);
+
+ if (locked)
+ spin_unlock(&port->lock);
+ local_irq_restore(flags);
+}
+
+static int __init serial8250_console_setup(struct console *co, char *options)
+{
+ struct uart_port *port;
+ int baud = 9600;
+ int bits = 8;
+ int parity = 'n';
+ int flow = 'n';
+
+ /*
+ * Check whether an invalid uart number has been specified, and
+ * if so, search for the first available port that does have
+ * console support.
+ */
+ if (co->index >= nr_uarts)
+ co->index = 0;
+ port = &serial8250_ports[co->index].port;
+ if (!port->iobase && !port->membase)
+ return -ENODEV;
+
+ if (options)
+ uart_parse_options(options, &baud, &parity, &bits, &flow);
+
+ return uart_set_options(port, co, baud, parity, bits, flow);
+}
+
+static int serial8250_console_early_setup(void)
+{
+ return serial8250_find_port_for_earlycon();
+}
+
+static struct console serial8250_console = {
+ .name = "ttyS",
+ .write = serial8250_console_write,
+ .device = uart_console_device,
+ .setup = serial8250_console_setup,
+ .early_setup = serial8250_console_early_setup,
+ .flags = CON_PRINTBUFFER | CON_ANYTIME,
+ .index = -1,
+ .data = &serial8250_reg,
+};
+
+static int __init serial8250_console_init(void)
+{
+ serial8250_isa_init_ports();
+ register_console(&serial8250_console);
+ return 0;
+}
+console_initcall(serial8250_console_init);
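For reference, the options string parsed by serial8250_console_setup() comes from the kernel command line, e.g. (a hypothetical invocation):

	console=ttyS0,115200n8

uart_parse_options() splits this into baud 115200, parity 'n', 8 data bits and no flow control before uart_set_options() applies them to the port.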
+
+int serial8250_find_port(struct uart_port *p)
+{
+ int line;
+ struct uart_port *port;
+
+ for (line = 0; line < nr_uarts; line++) {
+ port = &serial8250_ports[line].port;
+ if (uart_match_port(p, port))
+ return line;
+ }
+ return -ENODEV;
+}
+
+#define SERIAL8250_CONSOLE &serial8250_console
+#else
+#define SERIAL8250_CONSOLE NULL
+#endif
+
+static struct uart_driver serial8250_reg = {
+ .owner = THIS_MODULE,
+ .driver_name = "serial",
+ .dev_name = "ttyS",
+ .major = TTY_MAJOR,
+ .minor = 64,
+ .cons = SERIAL8250_CONSOLE,
+};
+
+/*
+ * early_serial_setup - early registration for 8250 ports
+ *
+ * Set up an 8250 port structure prior to console initialisation. Use
+ * after console initialisation will cause undefined behaviour.
+ */
+int __init early_serial_setup(struct uart_port *port)
+{
+ struct uart_port *p;
+
+ if (port->line >= ARRAY_SIZE(serial8250_ports))
+ return -ENODEV;
+
+ serial8250_isa_init_ports();
+ p = &serial8250_ports[port->line].port;
+ p->iobase = port->iobase;
+ p->membase = port->membase;
+ p->irq = port->irq;
+ p->irqflags = port->irqflags;
+ p->uartclk = port->uartclk;
+ p->fifosize = port->fifosize;
+ p->regshift = port->regshift;
+ p->iotype = port->iotype;
+ p->flags = port->flags;
+ p->mapbase = port->mapbase;
+ p->private_data = port->private_data;
+ p->type = port->type;
+ p->line = port->line;
+
+ set_io_from_upio(p);
+ if (port->serial_in)
+ p->serial_in = port->serial_in;
+ if (port->serial_out)
+ p->serial_out = port->serial_out;
+ if (port->handle_irq)
+ p->handle_irq = port->handle_irq;
+ else
+ p->handle_irq = serial8250_default_handle_irq;
+
+ return 0;
+}
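As a usage sketch (hypothetical arch setup code, not from this patch), early_serial_setup() is called from board code before the console is initialised:

	static struct uart_port my_boot_port __initdata = {
		.iobase		= 0x3f8,	/* assumption: legacy COM1 */
		.irq		= 4,
		.uartclk	= 1843200,
		.iotype		= UPIO_PORT,
		.flags		= UPF_BOOT_AUTOCONF,
		.line		= 0,
	};

	static int __init my_board_serial_init(void)
	{
		return early_serial_setup(&my_boot_port);
	}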
+
+/**
+ * serial8250_suspend_port - suspend one serial port
+ * @line: serial line number
+ *
+ * Suspend one serial port.
+ */
+void serial8250_suspend_port(int line)
+{
+ uart_suspend_port(&serial8250_reg, &serial8250_ports[line].port);
+}
+
+/**
+ * serial8250_resume_port - resume one serial port
+ * @line: serial line number
+ *
+ * Resume one serial port.
+ */
+void serial8250_resume_port(int line)
+{
+ struct uart_8250_port *up = &serial8250_ports[line];
+ struct uart_port *port = &up->port;
+
+ if (up->capabilities & UART_NATSEMI) {
+ /* Ensure it's still in high speed mode */
+ serial_port_out(port, UART_LCR, 0xE0);
+
+ ns16550a_goto_highspeed(up);
+
+ serial_port_out(port, UART_LCR, 0);
+ port->uartclk = 921600*16;
+ }
+ uart_resume_port(&serial8250_reg, port);
+}
+
+/*
+ * Register a set of serial devices attached to a platform device. The
+ * list is terminated with a zero flags entry, which means we expect
+ * all entries to have at least UPF_BOOT_AUTOCONF set.
+ */
+static int serial8250_probe(struct platform_device *dev)
+{
+ struct plat_serial8250_port *p = dev->dev.platform_data;
+ struct uart_8250_port uart;
+ int ret, i, irqflag = 0;
+
+ memset(&uart, 0, sizeof(uart));
+
+ if (share_irqs)
+ irqflag = IRQF_SHARED;
+
+ for (i = 0; p && p->flags != 0; p++, i++) {
+ uart.port.iobase = p->iobase;
+ uart.port.membase = p->membase;
+ uart.port.irq = p->irq;
+ uart.port.irqflags = p->irqflags;
+ uart.port.uartclk = p->uartclk;
+ uart.port.regshift = p->regshift;
+ uart.port.iotype = p->iotype;
+ uart.port.flags = p->flags;
+ uart.port.mapbase = p->mapbase;
+ uart.port.hub6 = p->hub6;
+ uart.port.private_data = p->private_data;
+ uart.port.type = p->type;
+ uart.port.serial_in = p->serial_in;
+ uart.port.serial_out = p->serial_out;
+ uart.port.handle_irq = p->handle_irq;
+ uart.port.handle_break = p->handle_break;
+ uart.port.set_termios = p->set_termios;
+ uart.port.pm = p->pm;
+ uart.port.dev = &dev->dev;
+ uart.port.irqflags |= irqflag;
+ ret = serial8250_register_8250_port(&uart);
+ if (ret < 0) {
+ dev_err(&dev->dev, "unable to register port at index %d "
+ "(IO%lx MEM%llx IRQ%d): %d\n", i,
+ p->iobase, (unsigned long long)p->mapbase,
+ p->irq, ret);
+ }
+ }
+ return 0;
+}
+
+/*
+ * Remove serial ports registered against a platform device.
+ */
+static int serial8250_remove(struct platform_device *dev)
+{
+ int i;
+
+ for (i = 0; i < nr_uarts; i++) {
+ struct uart_8250_port *up = &serial8250_ports[i];
+
+ if (up->port.dev == &dev->dev)
+ serial8250_unregister_port(i);
+ }
+ return 0;
+}
+
+static int serial8250_suspend(struct platform_device *dev, pm_message_t state)
+{
+ int i;
+
+ for (i = 0; i < UART_NR; i++) {
+ struct uart_8250_port *up = &serial8250_ports[i];
+
+ if (up->port.type != PORT_UNKNOWN && up->port.dev == &dev->dev)
+ uart_suspend_port(&serial8250_reg, &up->port);
+ }
+
+ return 0;
+}
+
+static int serial8250_resume(struct platform_device *dev)
+{
+ int i;
+
+ for (i = 0; i < UART_NR; i++) {
+ struct uart_8250_port *up = &serial8250_ports[i];
+
+ if (up->port.type != PORT_UNKNOWN && up->port.dev == &dev->dev)
+ serial8250_resume_port(i);
+ }
+
+ return 0;
+}
+
+static struct platform_driver serial8250_isa_driver = {
+ .probe = serial8250_probe,
+ .remove = serial8250_remove,
+ .suspend = serial8250_suspend,
+ .resume = serial8250_resume,
+ .driver = {
+ .name = "serial8250",
+ .owner = THIS_MODULE,
+ },
+};
+
+/*
+ * This "device" covers _all_ ISA 8250-compatible serial devices listed
+ * in the table in include/asm/serial.h
+ */
+static struct platform_device *serial8250_isa_devs;
+
+/*
+ * serial8250_register_8250_port and serial8250_unregister_port allow
+ * 16x50 serial ports to be configured at run-time, to support PCMCIA
+ * modems and PCI multiport cards.
+ */
+static DEFINE_MUTEX(serial_mutex);
+
+static struct uart_8250_port *serial8250_find_match_or_unused(struct uart_port *port)
+{
+ int i;
+
+ /*
+ * First, find a port entry which matches.
+ */
+ for (i = 0; i < nr_uarts; i++)
+ if (uart_match_port(&serial8250_ports[i].port, port))
+ return &serial8250_ports[i];
+
+ /*
+ * We didn't find a matching entry, so look for the first
+ * free entry. We look for one which hasn't been previously
+ * used (indicated by zero iobase).
+ */
+ for (i = 0; i < nr_uarts; i++)
+ if (serial8250_ports[i].port.type == PORT_UNKNOWN &&
+ serial8250_ports[i].port.iobase == 0)
+ return &serial8250_ports[i];
+
+ /*
+ * That also failed. Last resort is to find any entry which
+ * doesn't have a real port associated with it.
+ */
+ for (i = 0; i < nr_uarts; i++)
+ if (serial8250_ports[i].port.type == PORT_UNKNOWN)
+ return &serial8250_ports[i];
+
+ return NULL;
+}
+
+/**
+ * serial8250_register_8250_port - register a serial port
+ * @up: serial port template
+ *
+ * Configure the serial port specified by the request. If the
+ * port exists and is in use, it is hung up and unregistered
+ * first.
+ *
+ * The port is then probed and, if necessary, the IRQ is autodetected.
+ * If this fails, an error is returned.
+ *
+ * On success the port is ready to use and the line number is returned.
+ */
+int serial8250_register_8250_port(struct uart_8250_port *up)
+{
+ struct uart_8250_port *uart;
+ int ret = -ENOSPC;
+
+ if (up->port.uartclk == 0)
+ return -EINVAL;
+
+ mutex_lock(&serial_mutex);
+
+ uart = serial8250_find_match_or_unused(&up->port);
+ if (uart && uart->port.type != PORT_8250_CIR) {
+ if (uart->port.dev)
+ uart_remove_one_port(&serial8250_reg, &uart->port);
+
+ uart->port.iobase = up->port.iobase;
+ uart->port.membase = up->port.membase;
+ uart->port.irq = up->port.irq;
+ uart->port.irqflags = up->port.irqflags;
+ uart->port.uartclk = up->port.uartclk;
+ uart->port.fifosize = up->port.fifosize;
+ uart->port.regshift = up->port.regshift;
+ uart->port.iotype = up->port.iotype;
+ uart->port.flags = up->port.flags | UPF_BOOT_AUTOCONF;
+ uart->bugs = up->bugs;
+ uart->port.mapbase = up->port.mapbase;
+ uart->port.private_data = up->port.private_data;
+ uart->port.fifosize = up->port.fifosize;
+ uart->tx_loadsz = up->tx_loadsz;
+ uart->capabilities = up->capabilities;
+
+ if (up->port.dev)
+ uart->port.dev = up->port.dev;
+
+ if (up->port.flags & UPF_FIXED_TYPE)
+ serial8250_init_fixed_type_port(uart, up->port.type);
+
+ set_io_from_upio(&uart->port);
+ /* Possibly override default I/O functions. */
+ if (up->port.serial_in)
+ uart->port.serial_in = up->port.serial_in;
+ if (up->port.serial_out)
+ uart->port.serial_out = up->port.serial_out;
+ if (up->port.handle_irq)
+ uart->port.handle_irq = up->port.handle_irq;
+ /* Possibly override set_termios call */
+ if (up->port.set_termios)
+ uart->port.set_termios = up->port.set_termios;
+ if (up->port.pm)
+ uart->port.pm = up->port.pm;
+ if (up->port.handle_break)
+ uart->port.handle_break = up->port.handle_break;
+ if (up->dl_read)
+ uart->dl_read = up->dl_read;
+ if (up->dl_write)
+ uart->dl_write = up->dl_write;
+ if (up->dma)
+ uart->dma = up->dma;
+
+ if (serial8250_isa_config != NULL)
+ serial8250_isa_config(0, &uart->port,
+ &uart->capabilities);
+
+ ret = uart_add_one_port(&serial8250_reg, &uart->port);
+ if (ret == 0)
+ ret = uart->port.line;
+ }
+ mutex_unlock(&serial_mutex);
+
+ return ret;
+}
+EXPORT_SYMBOL(serial8250_register_8250_port);
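Typical bus-glue usage looks like the following sketch (hypothetical names; note that uartclk must be non-zero or the call fails with -EINVAL, and the returned line number is what serial8250_unregister_port() later expects):

	static int my_glue_attach(struct device *dev, void __iomem *regs, int irq)
	{
		struct uart_8250_port uart;
		int line;

		memset(&uart, 0, sizeof(uart));
		uart.port.membase = regs;
		uart.port.irq     = irq;
		uart.port.uartclk = 1843200;	/* assumption: 1.8432 MHz clock */
		uart.port.iotype  = UPIO_MEM;
		uart.port.flags   = UPF_SHARE_IRQ;
		uart.port.dev     = dev;

		line = serial8250_register_8250_port(&uart);
		return line < 0 ? line : 0;
	}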
+
+/**
+ * serial8250_unregister_port - remove a 16x50 serial port at runtime
+ * @line: serial line number
+ *
+ * Remove one serial port. This may not be called from interrupt
+ * context. We hand the port back to our own control.
+ */
+void serial8250_unregister_port(int line)
+{
+ struct uart_8250_port *uart = &serial8250_ports[line];
+
+ mutex_lock(&serial_mutex);
+ uart_remove_one_port(&serial8250_reg, &uart->port);
+ if (serial8250_isa_devs) {
+ uart->port.flags &= ~UPF_BOOT_AUTOCONF;
+ uart->port.type = PORT_UNKNOWN;
+ uart->port.dev = &serial8250_isa_devs->dev;
+ uart->capabilities = uart_config[uart->port.type].flags;
+ uart_add_one_port(&serial8250_reg, &uart->port);
+ } else {
+ uart->port.dev = NULL;
+ }
+ mutex_unlock(&serial_mutex);
+}
+EXPORT_SYMBOL(serial8250_unregister_port);
+
+static int __init serial8250_init(void)
+{
+ int ret;
+
+ serial8250_isa_init_ports();
+
+ printk(KERN_INFO "Serial: 8250/16550 driver, "
+ "%d ports, IRQ sharing %sabled\n", nr_uarts,
+ share_irqs ? "en" : "dis");
+
+#ifdef CONFIG_SPARC
+ ret = sunserial_register_minors(&serial8250_reg, UART_NR);
+#else
+ serial8250_reg.nr = UART_NR;
+ ret = uart_register_driver(&serial8250_reg);
+#endif
+ if (ret)
+ goto out;
+
+ ret = serial8250_pnp_init();
+ if (ret)
+ goto unreg_uart_drv;
+
+ serial8250_isa_devs = platform_device_alloc("serial8250",
+ PLAT8250_DEV_LEGACY);
+ if (!serial8250_isa_devs) {
+ ret = -ENOMEM;
+ goto unreg_pnp;
+ }
+
+ ret = platform_device_add(serial8250_isa_devs);
+ if (ret)
+ goto put_dev;
+
+ serial8250_register_ports(&serial8250_reg, &serial8250_isa_devs->dev);
+
+ ret = platform_driver_register(&serial8250_isa_driver);
+ if (ret == 0)
+ goto out;
+
+ platform_device_del(serial8250_isa_devs);
+put_dev:
+ platform_device_put(serial8250_isa_devs);
+unreg_pnp:
+ serial8250_pnp_exit();
+unreg_uart_drv:
+#ifdef CONFIG_SPARC
+ sunserial_unregister_minors(&serial8250_reg, UART_NR);
+#else
+ uart_unregister_driver(&serial8250_reg);
+#endif
+out:
+ return ret;
+}
+
+static void __exit serial8250_exit(void)
+{
+ struct platform_device *isa_dev = serial8250_isa_devs;
+
+ /*
+ * This tells serial8250_unregister_port() not to re-register
+ * the ports (thereby making serial8250_isa_driver permanently
+ * in use.)
+ */
+ serial8250_isa_devs = NULL;
+
+ platform_driver_unregister(&serial8250_isa_driver);
+ platform_device_unregister(isa_dev);
+
+ serial8250_pnp_exit();
+
+#ifdef CONFIG_SPARC
+ sunserial_unregister_minors(&serial8250_reg, UART_NR);
+#else
+ uart_unregister_driver(&serial8250_reg);
+#endif
+}
+
+module_init(serial8250_init);
+module_exit(serial8250_exit);
+
+EXPORT_SYMBOL(serial8250_suspend_port);
+EXPORT_SYMBOL(serial8250_resume_port);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Generic 8250/16x50 serial driver");
+
+module_param(share_irqs, uint, 0644);
+MODULE_PARM_DESC(share_irqs, "Share IRQs with other non-8250/16x50 devices"
+ " (unsafe)");
+
+module_param(nr_uarts, uint, 0644);
+MODULE_PARM_DESC(nr_uarts, "Maximum number of UARTs supported. (1-" __MODULE_STRING(CONFIG_SERIAL_8250_NR_UARTS) ")");
+
+module_param(skip_txen_test, uint, 0644);
+MODULE_PARM_DESC(skip_txen_test, "Skip checking for the TXEN bug at init time");
+
+#ifdef CONFIG_SERIAL_8250_RSA
+module_param_array(probe_rsa, ulong, &probe_rsa_count, 0444);
+MODULE_PARM_DESC(probe_rsa, "Probe I/O ports for RSA");
+#endif
+MODULE_ALIAS_CHARDEV_MAJOR(TTY_MAJOR);
+
+#ifdef CONFIG_SERIAL_8250_DEPRECATED_OPTIONS
+#ifndef MODULE
+/* This module was renamed to 8250_core in 3.7. Keep the old "8250" name
+ * working as well for the module options so we don't break people. We
+ * need to keep the names identical and the convenient macros will happily
+ * refuse to let us do that by failing the build with redefinition errors
+ * of global variables. So we stick them inside a dummy function to avoid
+ * those conflicts. The options still get parsed, and the redefined
+ * MODULE_PARAM_PREFIX lets us keep the "8250_core." syntax alive.
+ *
+ * This is hacky. I'm sorry.
+ */
+static void __used s8250_options(void)
+{
+#undef MODULE_PARAM_PREFIX
+#define MODULE_PARAM_PREFIX "8250_core."
+
+ module_param_cb(share_irqs, &param_ops_uint, &share_irqs, 0644);
+ module_param_cb(nr_uarts, &param_ops_uint, &nr_uarts, 0644);
+ module_param_cb(skip_txen_test, &param_ops_uint, &skip_txen_test, 0644);
+#ifdef CONFIG_SERIAL_8250_RSA
+ __module_param_call(MODULE_PARAM_PREFIX, probe_rsa,
+ &param_array_ops, .arr = &__param_arr_probe_rsa,
+ 0444, -1);
+#endif
+}
+#else
+MODULE_ALIAS("8250_core");
+#endif
+#endif
#define PCI_DEVICE_ID_PLX_CRONYX_OMEGA 0xc001
#define PCI_DEVICE_ID_INTEL_PATSBURG_KT 0x1d3d
#define PCI_VENDOR_ID_WCH 0x4348
+#define PCI_DEVICE_ID_WCH_CH352_2S 0x3253
#define PCI_DEVICE_ID_WCH_CH353_4S 0x3453
#define PCI_DEVICE_ID_WCH_CH353_2S1PF 0x5046
#define PCI_DEVICE_ID_WCH_CH353_2S1P 0x7053
.subdevice = PCI_ANY_ID,
.setup = pci_wch_ch353_setup,
},
+ /* WCH CH352 2S card (16550 clone) */
+ {
+ .vendor = PCI_VENDOR_ID_WCH,
+ .device = PCI_DEVICE_ID_WCH_CH352_2S,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ .setup = pci_wch_ch353_setup,
+ },
/*
* ASIX devices with FIFO bug
*/
PCI_ANY_ID, PCI_ANY_ID,
0, 0, pbn_b0_bt_2_115200 },
+ { PCI_VENDOR_ID_WCH, PCI_DEVICE_ID_WCH_CH352_2S,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0, pbn_b0_bt_2_115200 },
+
/*
* Commtech, Inc. Fastcom adapters
*/
Most people will say Y or M here, so that they can use serial mice,
modems and similar devices connecting to the standard serial ports.
+config SERIAL_8250_DEPRECATED_OPTIONS
+ bool "Support 8250_core.* kernel options (DEPRECATED)"
+ depends on SERIAL_8250
+ default y
+ ---help---
+ In 3.7 we renamed 8250 to 8250_core by mistake, so now we have to
+ accept kernel parameters in both forms like 8250_core.nr_uarts=4 and
+ 8250.nr_uarts=4. We have now renamed the module back to 8250, but if
+ anybody noticed in 3.7 and changed their userspace we still have to
+ keep the 8250_core.* options around until they revert the changes
+ they already did.
+
+ If 8250 is built as a module, this adds an 8250_core alias instead.
+
+ If you did not notice yet and/or you have userspace from pre-3.7, it
+ is safe (and recommended) to say N here.
+
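For illustration, with the option above enabled both of these (hypothetical) boot command lines configure four UARTs; only the first form keeps working once the compatibility shim is removed:

	8250.nr_uarts=4 console=ttyS0,115200
	8250_core.nr_uarts=4 console=ttyS0,115200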
config SERIAL_8250_PNP
bool "8250/16550 PNP device support" if EXPERT
depends on SERIAL_8250 && PNP
# Makefile for the 8250 serial device drivers.
#
-obj-$(CONFIG_SERIAL_8250) += 8250_core.o
-8250_core-y := 8250.o
-8250_core-$(CONFIG_SERIAL_8250_PNP) += 8250_pnp.o
-8250_core-$(CONFIG_SERIAL_8250_DMA) += 8250_dma.o
+obj-$(CONFIG_SERIAL_8250) += 8250.o
+8250-y := 8250_core.o
+8250-$(CONFIG_SERIAL_8250_PNP) += 8250_pnp.o
+8250-$(CONFIG_SERIAL_8250_DMA) += 8250_dma.o
obj-$(CONFIG_SERIAL_8250_GSC) += 8250_gsc.o
obj-$(CONFIG_SERIAL_8250_PCI) += 8250_pci.o
obj-$(CONFIG_SERIAL_8250_HP300) += 8250_hp300.o
};
static struct atmel_uart_port atmel_ports[ATMEL_MAX_UART];
-static unsigned long atmel_ports_in_use;
+static DECLARE_BITMAP(atmel_ports_in_use, ATMEL_MAX_UART);
#ifdef SUPPORT_SYSRQ
static struct console atmel_console;
if (ret < 0)
/* port id not found in platform data nor device-tree aliases:
* auto-enumerate it */
- ret = find_first_zero_bit(&atmel_ports_in_use,
- sizeof(atmel_ports_in_use));
+ ret = find_first_zero_bit(atmel_ports_in_use, ATMEL_MAX_UART);
- if (ret > ATMEL_MAX_UART) {
+ if (ret >= ATMEL_MAX_UART) {
ret = -ENODEV;
goto err;
}
- if (test_and_set_bit(ret, &atmel_ports_in_use)) {
+ if (test_and_set_bit(ret, atmel_ports_in_use)) {
/* port already in use */
ret = -EBUSY;
goto err;
/* "port" is allocated statically, so we shouldn't free it */
- clear_bit(port->line, &atmel_ports_in_use);
+ clear_bit(port->line, atmel_ports_in_use);
clk_put(atmel_port->clk);
#define UART_NR 4
static struct uart_sunsu_port sunsu_ports[UART_NR];
+static int nr_inst; /* Number of already registered ports */
#ifdef CONFIG_SERIO
printk("Console: ttyS%d (SU)\n",
(sunsu_reg.minor - 64) + co->index);
- /*
- * Check whether an invalid uart number has been specified, and
- * if so, search for the first available port that does have
- * console support.
- */
- if (co->index >= UART_NR)
- co->index = 0;
+ if (co->index > nr_inst)
+ return -ENODEV;
port = &sunsu_ports[co->index].port;
/*
static int su_probe(struct platform_device *op)
{
- static int inst;
struct device_node *dp = op->dev.of_node;
struct uart_sunsu_port *up;
struct resource *rp;
type = su_get_type(dp);
if (type == SU_PORT_PORT) {
- if (inst >= UART_NR)
+ if (nr_inst >= UART_NR)
return -EINVAL;
- up = &sunsu_ports[inst];
+ up = &sunsu_ports[nr_inst];
} else {
up = kzalloc(sizeof(*up), GFP_KERNEL);
if (!up)
return -ENOMEM;
}
- up->port.line = inst;
+ up->port.line = nr_inst;
spin_lock_init(&up->port.lock);
}
dev_set_drvdata(&op->dev, up);
+ nr_inst++;
+
return 0;
}
dev_set_drvdata(&op->dev, up);
- inst++;
+ nr_inst++;
return 0;
/* Receive Timeout register is enabled with value of 10 */
xuartps_writel(10, XUARTPS_RXTOUT_OFFSET);
+ /* Clear out any pending interrupts before enabling them */
+ xuartps_writel(xuartps_readl(XUARTPS_ISR_OFFSET), XUARTPS_ISR_OFFSET);
/* Set the Interrupt Registers with desired interrupts */
xuartps_writel(XUARTPS_IXR_TXEMPTY | XUARTPS_IXR_PARITY |
static struct vcs_poll_data *
vcs_poll_data_get(struct file *file)
{
- struct vcs_poll_data *poll = file->private_data;
+ struct vcs_poll_data *poll = file->private_data, *kill = NULL;
if (poll)
return poll;
file->private_data = poll;
} else {
/* someone else raced ahead of us */
- vcs_poll_data_free(poll);
+ kill = poll;
poll = file->private_data;
}
spin_unlock(&file->f_lock);
+ if (kill)
+ vcs_poll_data_free(kill);
return poll;
}
dev_dbg(&acm->control->dev, "%s\n", __func__);
- tty_unregister_device(acm_tty_driver, acm->minor);
acm_release_minor(acm);
usb_put_intf(acm->control);
kfree(acm->country_codes);
int num_rx_buf;
int i;
int combined_interfaces = 0;
+ struct device *tty_dev;
+ int rv = -ENOMEM;
/* normal quirks */
quirks = (unsigned long)id->driver_info;
usb_set_intfdata(data_interface, acm);
usb_get_intf(control_interface);
- tty_port_register_device(&acm->port, acm_tty_driver, minor,
+ tty_dev = tty_port_register_device(&acm->port, acm_tty_driver, minor,
&control_interface->dev);
+ if (IS_ERR(tty_dev)) {
+ rv = PTR_ERR(tty_dev);
+ goto alloc_fail8;
+ }
return 0;
+alloc_fail8:
+ if (acm->country_codes) {
+ device_remove_file(&acm->control->dev,
+ &dev_attr_wCountryCodes);
+ device_remove_file(&acm->control->dev,
+ &dev_attr_iCountryCodeRelDate);
+ }
+ device_remove_file(&acm->control->dev, &dev_attr_bmCapabilities);
alloc_fail7:
+ usb_set_intfdata(intf, NULL);
for (i = 0; i < ACM_NW; i++)
usb_free_urb(acm->wb[i].urb);
alloc_fail6:
acm_release_minor(acm);
kfree(acm);
alloc_fail:
- return -ENOMEM;
+ return rv;
}
static void stop_data_traffic(struct acm *acm)
stop_data_traffic(acm);
+ tty_unregister_device(acm_tty_driver, acm->minor);
+
usb_free_urb(acm->ctrlurb);
for (i = 0; i < ACM_NW; i++)
usb_free_urb(acm->wb[i].urb);
struct hc_driver *driver;
struct usb_hcd *hcd;
int retval;
+ int hcd_irq = 0;
if (usb_disabled())
return -ENODEV;
return -ENODEV;
dev->current_state = PCI_D0;
- /* The xHCI driver supports MSI and MSI-X,
- * so don't fail if the BIOS doesn't provide a legacy IRQ.
+ /*
+ * The xHCI driver has its own irq management
+ * make sure irq setup is not touched for xhci in generic hcd code
*/
- if (!dev->irq && (driver->flags & HCD_MASK) != HCD_USB3) {
- dev_err(&dev->dev,
- "Found HC with no IRQ. Check BIOS/PCI %s setup!\n",
- pci_name(dev));
- retval = -ENODEV;
- goto disable_pci;
+ if ((driver->flags & HCD_MASK) != HCD_USB3) {
+ if (!dev->irq) {
+ dev_err(&dev->dev,
+ "Found HC with no IRQ. Check BIOS/PCI %s setup!\n",
+ pci_name(dev));
+ retval = -ENODEV;
+ goto disable_pci;
+ }
+ hcd_irq = dev->irq;
}
hcd = usb_create_hcd(driver, &dev->dev, pci_name(dev));
pci_set_master(dev);
- retval = usb_add_hcd(hcd, dev->irq, IRQF_SHARED);
+ retval = usb_add_hcd(hcd, hcd_irq, IRQF_SHARED);
if (retval != 0)
goto unmap_registers;
set_hs_companion(dev, hcd);
}
EXPORT_SYMBOL_GPL(usb_hcd_is_primary_hcd);
+int usb_hcd_find_raw_port_number(struct usb_hcd *hcd, int port1)
+{
+ if (!hcd->driver->find_raw_port_number)
+ return port1;
+
+ return hcd->driver->find_raw_port_number(hcd, port1);
+}
+
static int usb_hcd_request_irqs(struct usb_hcd *hcd,
unsigned int irqnum, unsigned long irqflags)
{
#include <linux/kernel.h>
#include <linux/acpi.h>
#include <linux/pci.h>
+#include <linux/usb/hcd.h>
#include <acpi/acpi_bus.h>
#include "usb.h"
* connected to.
*/
if (!udev->parent) {
- *handle = acpi_get_child(DEVICE_ACPI_HANDLE(&udev->dev),
+ struct usb_hcd *hcd = bus_to_hcd(udev->bus);
+ int raw_port_num;
+
+ raw_port_num = usb_hcd_find_raw_port_number(hcd,
port_num);
+ *handle = acpi_get_child(DEVICE_ACPI_HANDLE(&udev->dev),
+ raw_port_num);
if (!*handle)
return -ENODEV;
} else {
tristate "LPC32XX USB Peripheral Controller"
depends on ARCH_LPC32XX
select USB_ISP1301
+ select USB_OTG_UTILS
help
This option selects the USB device controller in the LPC32xx SoC.
static void rndis_command_complete(struct usb_ep *ep, struct usb_request *req)
{
struct f_rndis *rndis = req->context;
- struct usb_composite_dev *cdev = rndis->port.func.config->cdev;
int status;
/* received RNDIS command from USB_CDC_SEND_ENCAPSULATED_COMMAND */
// spin_lock(&dev->lock);
status = rndis_msg_parser(rndis->config, (u8 *) req->buf);
if (status < 0)
- ERROR(cdev, "RNDIS command error %d, %d/%d\n",
+ pr_err("RNDIS command error %d, %d/%d\n",
status, req->actual, req->length);
// spin_unlock(&dev->lock);
}
goto error;
gfs_dev_desc.iProduct = gfs_strings[USB_GADGET_PRODUCT_IDX].id;
- for (i = func_num; --i; ) {
+ for (i = func_num; i--; ) {
ret = functionfs_bind(ffs_tab[i].ffs_data, cdev);
if (unlikely(ret < 0)) {
while (++i < func_num)
gether_cleanup();
gfs_ether_setup = false;
- for (i = func_num; --i; )
+ for (i = func_num; i--; )
if (ffs_tab[i].ffs_data)
functionfs_unbind(ffs_tab[i].ffs_data);
};
#define DMA_ADDR_INVALID (~(dma_addr_t)0)
-#ifdef CONFIG_USB_GADGET_NET2272_DMA
+#ifdef CONFIG_USB_NET2272_DMA
/*
* use_dma: the NET2272 can use an external DMA controller.
* Note that since there is no generic DMA api, some functions,
for (i = 0; i < 4; ++i)
net2272_dequeue_all(&dev->ep[i]);
+ /* report disconnect; the driver is already quiesced */
+ if (driver) {
+ spin_unlock(&dev->lock);
+ driver->disconnect(&dev->gadget);
+ spin_lock(&dev->lock);
+ }
+
net2272_usb_reinit(dev);
}
err_func:
device_remove_file (&dev->pdev->dev, &dev_attr_function);
err_unbind:
- driver->unbind (&dev->gadget);
dev->gadget.dev.driver = NULL;
dev->driver = NULL;
return retval;
for (i = 0; i < 7; i++)
nuke (&dev->ep [i]);
+ /* report disconnect; the driver is already quiesced */
+ if (driver) {
+ spin_unlock(&dev->lock);
+ driver->disconnect(&dev->gadget);
+ spin_lock(&dev->lock);
+ }
+
usb_reinit (dev);
}
pr_debug(fmt, ##arg)
#endif /* pr_vdebug */
#else
-#ifndef pr_vdebig
+#ifndef pr_vdebug
#define pr_vdebug(fmt, arg...) \
({ if (0) pr_debug(fmt, ##arg); })
#endif /* pr_vdebug */
usb_gadget_disconnect(udc->gadget);
udc->driver->disconnect(udc->gadget);
udc->driver->unbind(udc->gadget);
- usb_gadget_udc_stop(udc->gadget, udc->driver);
+ usb_gadget_udc_stop(udc->gadget, NULL);
udc->driver = NULL;
udc->dev.driver = NULL;
static void end_unlink_async(struct ehci_hcd *ehci);
static void unlink_empty_async(struct ehci_hcd *ehci);
+static void unlink_empty_async_suspended(struct ehci_hcd *ehci);
static void ehci_work(struct ehci_hcd *ehci);
static void start_unlink_intr(struct ehci_hcd *ehci, struct ehci_qh *qh);
static void end_unlink_intr(struct ehci_hcd *ehci, struct ehci_qh *qh);
ehci->rh_state = EHCI_RH_SUSPENDED;
end_unlink_async(ehci);
- unlink_empty_async(ehci);
+ unlink_empty_async_suspended(ehci);
ehci_handle_intr_unlinks(ehci);
end_free_itds(ehci);
}
}
+/* The root hub is suspended; unlink all the async QHs */
+static void unlink_empty_async_suspended(struct ehci_hcd *ehci)
+{
+ struct ehci_qh *qh;
+
+ while (ehci->async->qh_next.qh) {
+ qh = ehci->async->qh_next.qh;
+ WARN_ON(!list_empty(&qh->qtd_list));
+ single_unlink_async(ehci, qh);
+ }
+ start_iaa_cycle(ehci, false);
+}
+
/* makes sure the async qh will become idle */
/* caller must own ehci->lock */
memset (itd, 0, sizeof *itd);
itd->itd_dma = itd_dma;
+ itd->frame = 9999; /* an invalid value */
list_add (&itd->itd_list, &sched->td_list);
}
spin_unlock_irqrestore (&ehci->lock, flags);
memset (sitd, 0, sizeof *sitd);
sitd->sitd_dma = sitd_dma;
+ sitd->frame = 9999; /* an invalid value */
list_add (&sitd->sitd_list, &iso_sched->td_list);
}
* (a) SMP races against real IAA firing and retriggering, and
* (b) clean HC shutdown, when IAA watchdog was pending.
*/
- if (ehci->async_iaa) {
+ if (1) {
u32 cmd, status;
/* If we get here, IAA is *REALLY* late. It's barely
* is attached to (or the roothub port its ancestor hub is attached to). All we
* know is the index of that port under either the USB 2.0 or the USB 3.0
* roothub, but that doesn't give us the real index into the HW port status
- * registers. Scan through the xHCI roothub port array, looking for the Nth
- * entry of the correct port speed. Return the port number of that entry.
+ * registers. Call xhci_find_raw_port_number() to get real index.
*/
static u32 xhci_find_real_port_number(struct xhci_hcd *xhci,
struct usb_device *udev)
{
struct usb_device *top_dev;
- unsigned int num_similar_speed_ports;
- unsigned int faked_port_num;
- int i;
+ struct usb_hcd *hcd;
+
+ if (udev->speed == USB_SPEED_SUPER)
+ hcd = xhci->shared_hcd;
+ else
+ hcd = xhci->main_hcd;
for (top_dev = udev; top_dev->parent && top_dev->parent->parent;
top_dev = top_dev->parent)
/* Found device below root hub */;
- faked_port_num = top_dev->portnum;
- for (i = 0, num_similar_speed_ports = 0;
- i < HCS_MAX_PORTS(xhci->hcs_params1); i++) {
- u8 port_speed = xhci->port_array[i];
-
- /*
- * Skip ports that don't have known speeds, or have duplicate
- * Extended Capabilities port speed entries.
- */
- if (port_speed == 0 || port_speed == DUPLICATE_ENTRY)
- continue;
- /*
- * USB 3.0 ports are always under a USB 3.0 hub. USB 2.0 and
- * 1.1 ports are under the USB 2.0 hub. If the port speed
- * matches the device speed, it's a similar speed port.
- */
- if ((port_speed == 0x03) == (udev->speed == USB_SPEED_SUPER))
- num_similar_speed_ports++;
- if (num_similar_speed_ports == faked_port_num)
- /* Roothub ports are numbered from 1 to N */
- return i+1;
- }
- return 0;
+ return xhci_find_raw_port_number(hcd, top_dev->portnum);
}
/* Setup an xHCI virtual device for a Set Address command */
.set_usb2_hw_lpm = xhci_set_usb2_hardware_lpm,
.enable_usb3_lpm_timeout = xhci_enable_usb3_lpm_timeout,
.disable_usb3_lpm_timeout = xhci_disable_usb3_lpm_timeout,
+ .find_raw_port_number = xhci_find_raw_port_number,
};
/*-------------------------------------------------------------------------*/
max_ports = HCS_MAX_PORTS(xhci->hcs_params1);
if ((port_id <= 0) || (port_id > max_ports)) {
xhci_warn(xhci, "Invalid port id %d\n", port_id);
- bogus_port_status = true;
- goto cleanup;
+ inc_deq(xhci, xhci->event_ring);
+ return;
}
/* Figure out which usb_hcd this port is attached to:
* is it a USB 3.0 port or a USB 2.0/1.1 port?
*/
major_revision = xhci->port_array[port_id - 1];
+
+ /* Find the right roothub. */
+ hcd = xhci_to_hcd(xhci);
+ if ((major_revision == 0x03) != (hcd->speed == HCD_USB3))
+ hcd = xhci->shared_hcd;
+
if (major_revision == 0) {
xhci_warn(xhci, "Event for port %u not in "
"Extended Capabilities, ignoring.\n",
* into the index into the ports on the correct split roothub, and the
* correct bus_state structure.
*/
- /* Find the right roothub. */
- hcd = xhci_to_hcd(xhci);
- if ((major_revision == 0x03) != (hcd->speed == HCD_USB3))
- hcd = xhci->shared_hcd;
bus_state = &xhci->bus_state[hcd_index(hcd)];
if (hcd->speed == HCD_USB3)
port_array = xhci->usb3_ports;
if (event_trb != ep_ring->dequeue &&
event_trb != td->last_trb)
td->urb->actual_length =
- td->urb->transfer_buffer_length
- - TRB_LEN(le32_to_cpu(event->transfer_len));
+ td->urb->transfer_buffer_length -
+ EVENT_TRB_LEN(le32_to_cpu(event->transfer_len));
else
td->urb->actual_length = 0;
/* Maybe the event was for the data stage? */
td->urb->actual_length =
td->urb->transfer_buffer_length -
- TRB_LEN(le32_to_cpu(event->transfer_len));
+ EVENT_TRB_LEN(le32_to_cpu(event->transfer_len));
xhci_dbg(xhci, "Waiting for status "
"stage event\n");
return 0;
/* handle completion code */
switch (trb_comp_code) {
case COMP_SUCCESS:
- if (TRB_LEN(le32_to_cpu(event->transfer_len)) == 0) {
+ if (EVENT_TRB_LEN(le32_to_cpu(event->transfer_len)) == 0) {
frame->status = 0;
break;
}
len += TRB_LEN(le32_to_cpu(cur_trb->generic.field[2]));
}
len += TRB_LEN(le32_to_cpu(cur_trb->generic.field[2])) -
- TRB_LEN(le32_to_cpu(event->transfer_len));
+ EVENT_TRB_LEN(le32_to_cpu(event->transfer_len));
if (trb_comp_code != COMP_STOP_INVAL) {
frame->actual_length = len;
case COMP_SUCCESS:
/* Double check that the HW transferred everything. */
if (event_trb != td->last_trb ||
- TRB_LEN(le32_to_cpu(event->transfer_len)) != 0) {
+ EVENT_TRB_LEN(le32_to_cpu(event->transfer_len)) != 0) {
xhci_warn(xhci, "WARN Successful completion "
"on short TX\n");
if (td->urb->transfer_flags & URB_SHORT_NOT_OK)
"%d bytes untransferred\n",
td->urb->ep->desc.bEndpointAddress,
td->urb->transfer_buffer_length,
- TRB_LEN(le32_to_cpu(event->transfer_len)));
+ EVENT_TRB_LEN(le32_to_cpu(event->transfer_len)));
/* Fast path - was this the last TRB in the TD for this URB? */
if (event_trb == td->last_trb) {
- if (TRB_LEN(le32_to_cpu(event->transfer_len)) != 0) {
+ if (EVENT_TRB_LEN(le32_to_cpu(event->transfer_len)) != 0) {
td->urb->actual_length =
td->urb->transfer_buffer_length -
- TRB_LEN(le32_to_cpu(event->transfer_len));
+ EVENT_TRB_LEN(le32_to_cpu(event->transfer_len));
if (td->urb->transfer_buffer_length <
td->urb->actual_length) {
xhci_warn(xhci, "HC gave bad length "
"of %d bytes left\n",
- TRB_LEN(le32_to_cpu(event->transfer_len)));
+ EVENT_TRB_LEN(le32_to_cpu(event->transfer_len)));
td->urb->actual_length = 0;
if (td->urb->transfer_flags & URB_SHORT_NOT_OK)
*status = -EREMOTEIO;
if (trb_comp_code != COMP_STOP_INVAL)
td->urb->actual_length +=
TRB_LEN(le32_to_cpu(cur_trb->generic.field[2])) -
- TRB_LEN(le32_to_cpu(event->transfer_len));
+ EVENT_TRB_LEN(le32_to_cpu(event->transfer_len));
}
return finish_td(xhci, td, event_trb, event, ep, status, false);
* transfer type
*/
case COMP_SUCCESS:
- if (TRB_LEN(le32_to_cpu(event->transfer_len)) == 0)
+ if (EVENT_TRB_LEN(le32_to_cpu(event->transfer_len)) == 0)
break;
if (xhci->quirks & XHCI_TRUST_TX_LENGTH)
trb_comp_code = COMP_SHORT_TX;
* TD list.
*/
if (list_empty(&ep_ring->td_list)) {
- xhci_warn(xhci, "WARN Event TRB for slot %d ep %d "
- "with no TDs queued?\n",
- TRB_TO_SLOT_ID(le32_to_cpu(event->flags)),
- ep_index);
- xhci_dbg(xhci, "Event TRB with TRB type ID %u\n",
- (le32_to_cpu(event->flags) &
- TRB_TYPE_BITMASK)>>10);
- xhci_print_trb_offsets(xhci, (union xhci_trb *) event);
+ /*
+ * A stopped endpoint may generate an extra completion
+ * event if the device was suspended. Don't print
+ * warnings.
+ */
+ if (!(trb_comp_code == COMP_STOP ||
+ trb_comp_code == COMP_STOP_INVAL)) {
+ xhci_warn(xhci, "WARN Event TRB for slot %d ep %d with no TDs queued?\n",
+ TRB_TO_SLOT_ID(le32_to_cpu(event->flags)),
+ ep_index);
+ xhci_dbg(xhci, "Event TRB with TRB type ID %u\n",
+ (le32_to_cpu(event->flags) &
+ TRB_TYPE_BITMASK)>>10);
+ xhci_print_trb_offsets(xhci, (union xhci_trb *) event);
+ }
if (ep->skip) {
ep->skip = false;
xhci_dbg(xhci, "td_list is empty while skip "
* generate interrupts. Don't even try to enable MSI.
*/
if (xhci->quirks & XHCI_BROKEN_MSI)
- return 0;
+ goto legacy_irq;
/* unregister the legacy interrupt */
if (hcd->irq)
return -EINVAL;
}
+ legacy_irq:
/* fall back to legacy interrupt*/
ret = request_irq(pdev->irq, &usb_hcd_irq, IRQF_SHARED,
hcd->irq_descr, hcd);
return 0;
}
+/*
+ * Translate the port index into the real index in the HW port status
+ * registers. Calculate the offset between the port's PORTSC register
+ * and the port status base, then divide by the number of registers
+ * per port to get the real index. The raw port number is 1-based.
+ */
+int xhci_find_raw_port_number(struct usb_hcd *hcd, int port1)
+{
+ struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+ __le32 __iomem *base_addr = &xhci->op_regs->port_status_base;
+ __le32 __iomem *addr;
+ int raw_port;
+
+ if (hcd->speed != HCD_USB3)
+ addr = xhci->usb2_ports[port1 - 1];
+ else
+ addr = xhci->usb3_ports[port1 - 1];
+
+ raw_port = (addr - base_addr)/NUM_PORT_REGS + 1;
+ return raw_port;
+}
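As a worked example with hypothetical numbers: each port owns NUM_PORT_REGS (4) 32-bit registers, so if the port's PORTSC pointer sits 8 words past port_status_base, the function returns 8 / 4 + 1 = 3, i.e. the third raw port.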
+
#ifdef CONFIG_USB_SUSPEND
/* BESL to HIRD Encoding array for USB2 LPM */
/* bits 12:31 are reserved (and should be preserved on writes). */
/* IMAN - Interrupt Management Register */
-#define IMAN_IP (1 << 1)
-#define IMAN_IE (1 << 0)
+#define IMAN_IE (1 << 1)
+#define IMAN_IP (1 << 0)
/* USBSTS - USB status - status bitmasks */
/* HC not running - set to 1 when run/stop bit is cleared. */
__le32 flags;
};
+/* Transfer event TRB length bit mask */
+/* bits 0:23 */
+#define EVENT_TRB_LEN(p) ((p) & 0xffffff)
+
/** Transfer Event bit fields **/
#define TRB_TO_EP_ID(p) (((p) >> 16) & 0x1f)
int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue, u16 wIndex,
char *buf, u16 wLength);
int xhci_hub_status_data(struct usb_hcd *hcd, char *buf);
+int xhci_find_raw_port_number(struct usb_hcd *hcd, int port1);
#ifdef CONFIG_PM
int xhci_bus_suspend(struct usb_hcd *hcd);
u8 devctl = musb_readb(mregs, MUSB_DEVCTL);
int err;
- err = musb->int_usb & USB_INTR_VBUSERROR;
+ err = musb->int_usb & MUSB_INTR_VBUSERROR;
if (err) {
/*
* The Mentor core doesn't debounce VBUS as needed
static inline void unmap_dma_buffer(struct musb_request *request,
struct musb *musb)
{
- if (!is_buffer_mapped(request))
+ struct musb_ep *musb_ep = request->ep;
+
+ if (!is_buffer_mapped(request) || !musb_ep->dma)
return;
if (request->request.dma == DMA_ADDR_INVALID) {
ep->busy = 1;
spin_unlock(&musb->lock);
- unmap_dma_buffer(req, musb);
+
+ if (!dma_mapping_error(&musb->g.dev, request->dma))
+ unmap_dma_buffer(req, musb);
+
if (request->status == 0)
dev_dbg(musb->controller, "%s done request %p, %d/%d\n",
ep->end_point.name, request,
tristate "NXP ISP1301 USB transceiver support"
depends on USB || USB_GADGET
depends on I2C
+ select USB_OTG_UTILS
help
Say Y here to add support for the NXP ISP1301 USB transceiver driver.
This chip is typically used as USB transceiver for USB host, gadget
}
struct ark3116_private {
- wait_queue_head_t delta_msr_wait;
struct async_icount icount;
int irda; /* 1 for irda device */
if (!priv)
return -ENOMEM;
- init_waitqueue_head(&priv->delta_msr_wait);
mutex_init(&priv->hw_lock);
spin_lock_init(&priv->status_lock);
case TIOCMIWAIT:
for (;;) {
struct async_icount prev = priv->icount;
- interruptible_sleep_on(&priv->delta_msr_wait);
+ interruptible_sleep_on(&port->delta_msr_wait);
/* see if a signal did it */
if (signal_pending(current))
return -ERESTARTSYS;
+
+ if (port->serial->disconnected)
+ return -EIO;
+
if ((prev.rng == priv->icount.rng) &&
(prev.dsr == priv->icount.dsr) &&
(prev.dcd == priv->icount.dcd) &&
priv->icount.dcd++;
if (msr & UART_MSR_TERI)
priv->icount.rng++;
- wake_up_interruptible(&priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
}
}
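The recurring TIOCMIWAIT changes in the USB serial drivers above share one pattern: sleep on the tty_port's delta_msr_wait queue and fail with -EIO once the device is disconnected, instead of sleeping forever on a per-driver queue that disconnect never wakes. From userspace the ioctl looks like this minimal sketch (the device path and modem flag are assumptions):

	#include <fcntl.h>
	#include <sys/ioctl.h>

	int wait_for_dcd_change(void)
	{
		int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);

		if (fd < 0)
			return -1;
		/* Blocks until DCD toggles; fails with EIO if the adapter is unplugged. */
		return ioctl(fd, TIOCMIWAIT, TIOCM_CD);
	}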
struct ch341_private {
spinlock_t lock; /* access lock */
- wait_queue_head_t delta_msr_wait; /* wait queue for modem status */
unsigned baud_rate; /* set baud rate */
u8 line_control; /* set line control value RTS/DTR */
u8 line_status; /* active status of modem control inputs */
return -ENOMEM;
spin_lock_init(&priv->lock);
- init_waitqueue_head(&priv->delta_msr_wait);
priv->baud_rate = DEFAULT_BAUD_RATE;
priv->line_control = CH341_BIT_RTS | CH341_BIT_DTR;
priv->line_control &= ~(CH341_BIT_RTS | CH341_BIT_DTR);
spin_unlock_irqrestore(&priv->lock, flags);
ch341_set_handshake(port->serial->dev, priv->line_control);
- wake_up_interruptible(&priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
}
static void ch341_close(struct usb_serial_port *port)
tty_kref_put(tty);
}
- wake_up_interruptible(&priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
}
exit:
spin_unlock_irqrestore(&priv->lock, flags);
while (!multi_change) {
- interruptible_sleep_on(&priv->delta_msr_wait);
+ interruptible_sleep_on(&port->delta_msr_wait);
/* see if a signal did it */
if (signal_pending(current))
return -ERESTARTSYS;
+ if (port->serial->disconnected)
+ return -EIO;
+
spin_lock_irqsave(&priv->lock, flags);
status = priv->line_status;
multi_change = priv->multi_status_change;
int baud_rate; /* stores current baud rate in
integer form */
int isthrottled; /* if throttled, discard reads */
- wait_queue_head_t delta_msr_wait; /* used for TIOCMIWAIT */
char prev_status, diff_status; /* used for TIOCMIWAIT */
/* we pass a pointer to this as the argument sent to
cypress_set_termios old_termios */
kfree(priv);
return -ENOMEM;
}
- init_waitqueue_head(&priv->delta_msr_wait);
usb_reset_configuration(serial->dev);
switch (cmd) {
/* This code comes from drivers/char/serial.c and ftdi_sio.c */
case TIOCMIWAIT:
- while (priv != NULL) {
- interruptible_sleep_on(&priv->delta_msr_wait);
+ for (;;) {
+ interruptible_sleep_on(&port->delta_msr_wait);
/* see if a signal did it */
if (signal_pending(current))
return -ERESTARTSYS;
- else {
+
+ if (port->serial->disconnected)
+ return -EIO;
+
+ {
char diff = priv->diff_status;
if (diff == 0)
return -EIO; /* no change => error */
if (priv->current_status != priv->prev_status) {
priv->diff_status |= priv->current_status ^
priv->prev_status;
- wake_up_interruptible(&priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
priv->prev_status = priv->current_status;
}
spin_unlock_irqrestore(&priv->lock, flags);
struct f81232_private {
spinlock_t lock;
- wait_queue_head_t delta_msr_wait;
u8 line_control;
u8 line_status;
};
line_status = priv->line_status;
priv->line_status &= ~UART_STATE_TRANSIENT_MASK;
spin_unlock_irqrestore(&priv->lock, flags);
- wake_up_interruptible(&priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
if (!urb->actual_length)
return;
spin_unlock_irqrestore(&priv->lock, flags);
while (1) {
- interruptible_sleep_on(&priv->delta_msr_wait);
+ interruptible_sleep_on(&port->delta_msr_wait);
/* see if a signal did it */
if (signal_pending(current))
return -ERESTARTSYS;
+ if (port->serial->disconnected)
+ return -EIO;
+
spin_lock_irqsave(&priv->lock, flags);
status = priv->line_status;
spin_unlock_irqrestore(&priv->lock, flags);
return -ENOMEM;
spin_lock_init(&priv->lock);
- init_waitqueue_head(&priv->delta_msr_wait);
usb_set_serial_port_data(port, priv);
int flags; /* some ASYNC_xxxx flags are supported */
unsigned long last_dtr_rts; /* saved modem control outputs */
struct async_icount icount;
- wait_queue_head_t delta_msr_wait; /* Used for TIOCMIWAIT */
char prev_status; /* Used for TIOCMIWAIT */
- bool dev_gone; /* Used to abort TIOCMIWAIT */
char transmit_empty; /* If transmitter is empty or not */
__u16 interface; /* FT2232C, FT2232H or FT4232H port interface
(0 for FT232/245) */
{ USB_DEVICE(FTDI_VID, FTDI_RM_CANVIEW_PID) },
{ USB_DEVICE(ACTON_VID, ACTON_SPECTRAPRO_PID) },
{ USB_DEVICE(CONTEC_VID, CONTEC_COM1USBH_PID) },
+ { USB_DEVICE(MITSUBISHI_VID, MITSUBISHI_FXUSB_PID) },
{ USB_DEVICE(BANDB_VID, BANDB_USOTL4_PID) },
{ USB_DEVICE(BANDB_VID, BANDB_USTL4_PID) },
{ USB_DEVICE(BANDB_VID, BANDB_USO9ML2_PID) },
kref_init(&priv->kref);
mutex_init(&priv->cfg_lock);
- init_waitqueue_head(&priv->delta_msr_wait);
priv->flags = ASYNC_LOW_LATENCY;
- priv->dev_gone = false;
if (quirk && quirk->port_probe)
quirk->port_probe(priv);
{
struct ftdi_private *priv = usb_get_serial_port_data(port);
- priv->dev_gone = true;
- wake_up_interruptible_all(&priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
remove_sysfs_attrs(port);
if (diff_status & FTDI_RS0_RLSD)
priv->icount.dcd++;
- wake_up_interruptible_all(&priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
priv->prev_status = status;
}
*/
case TIOCMIWAIT:
cprev = priv->icount;
- while (!priv->dev_gone) {
- interruptible_sleep_on(&priv->delta_msr_wait);
+ for (;;) {
+ interruptible_sleep_on(&port->delta_msr_wait);
/* see if a signal did it */
if (signal_pending(current))
return -ERESTARTSYS;
+
+ if (port->serial->disconnected)
+ return -EIO;
+
cnow = priv->icount;
if (((arg & TIOCM_RNG) && (cnow.rng != cprev.rng)) ||
((arg & TIOCM_DSR) && (cnow.dsr != cprev.dsr)) ||
}
cprev = cnow;
}
- return -EIO;
- break;
case TIOCSERGETLSR:
return get_lsr_info(port, (struct serial_struct __user *)arg);
break;
#define CONTEC_VID 0x06CE /* Vendor ID */
#define CONTEC_COM1USBH_PID 0x8311 /* COM-1(USB)H */
+/*
+ * Mitsubishi Electric Corp. (http://www.meau.com)
+ * Submitted by Konstantin Holoborodko
+ */
+#define MITSUBISHI_VID 0x06D3
+#define MITSUBISHI_FXUSB_PID 0x0284 /* USB/RS422 converters: FX-USB-AW/-BD */
+
/*
* Definitions for B&B Electronics products.
*/
if (!serial)
return;
- mutex_lock(&port->serial->disc_mutex);
-
- if (!port->serial->disconnected)
- garmin_clear(garmin_data_p);
+ garmin_clear(garmin_data_p);
/* shutdown our urbs */
usb_kill_urb(port->read_urb);
/* keep reset state so we know that we must start a new session */
if (garmin_data_p->state != STATE_RESET)
garmin_data_p->state = STATE_DISCONNECTED;
-
- mutex_unlock(&port->serial->disc_mutex);
}
wait_queue_head_t wait_chase; /* for handling sleeping while waiting for chase to finish */
wait_queue_head_t wait_open; /* for handling sleeping while waiting for open to finish */
wait_queue_head_t wait_command; /* for handling sleeping while waiting for command to finish */
- wait_queue_head_t delta_msr_wait; /* for handling sleeping while waiting for msr change to happen */
struct async_icount icount;
struct usb_serial_port *port; /* loop back to the owner of this object */
/* initialize our wait queues */
init_waitqueue_head(&edge_port->wait_open);
init_waitqueue_head(&edge_port->wait_chase);
- init_waitqueue_head(&edge_port->delta_msr_wait);
init_waitqueue_head(&edge_port->wait_command);
/* initialize our icount structure */
dev_dbg(&port->dev, "%s (%d) TIOCMIWAIT\n", __func__, port->number);
cprev = edge_port->icount;
while (1) {
- prepare_to_wait(&edge_port->delta_msr_wait,
+ prepare_to_wait(&port->delta_msr_wait,
&wait, TASK_INTERRUPTIBLE);
schedule();
- finish_wait(&edge_port->delta_msr_wait, &wait);
+ finish_wait(&port->delta_msr_wait, &wait);
/* see if a signal did it */
if (signal_pending(current))
return -ERESTARTSYS;
+
+ if (port->serial->disconnected)
+ return -EIO;
+
cnow = edge_port->icount;
if (cnow.rng == cprev.rng && cnow.dsr == cprev.dsr &&
cnow.dcd == cprev.dcd && cnow.cts == cprev.cts)
icount->dcd++;
if (newMsr & EDGEPORT_MSR_DELTA_RI)
icount->rng++;
- wake_up_interruptible(&edge_port->delta_msr_wait);
+ wake_up_interruptible(&edge_port->port->delta_msr_wait);
}
/* Save the new modem status */
int close_pending;
int lsr_event;
struct async_icount icount;
- wait_queue_head_t delta_msr_wait; /* for handling sleeping while
- waiting for msr change to
- happen */
struct edgeport_serial *edge_serial;
struct usb_serial_port *port;
__u8 bUartMode; /* Port type, 0: RS232, etc. */
icount->dcd++;
if (msr & EDGEPORT_MSR_DELTA_RI)
icount->rng++;
- wake_up_interruptible(&edge_port->delta_msr_wait);
+ wake_up_interruptible(&edge_port->port->delta_msr_wait);
}
/* Save the new modem status */
dev = port->serial->dev;
memset(&(edge_port->icount), 0x00, sizeof(edge_port->icount));
- init_waitqueue_head(&edge_port->delta_msr_wait);
/* turn off loopback */
status = ti_do_config(edge_port, UMPC_SET_CLR_LOOPBACK, 0);
dev_dbg(&port->dev, "%s - TIOCMIWAIT\n", __func__);
cprev = edge_port->icount;
while (1) {
- interruptible_sleep_on(&edge_port->delta_msr_wait);
+ interruptible_sleep_on(&port->delta_msr_wait);
/* see if a signal did it */
if (signal_pending(current))
return -ERESTARTSYS;
+
+ if (port->serial->disconnected)
+ return -EIO;
+
cnow = edge_port->icount;
if (cnow.rng == cprev.rng && cnow.dsr == cprev.dsr &&
cnow.dcd == cprev.dcd && cnow.cts == cprev.cts)
.set_termios = edge_set_termios,
.tiocmget = edge_tiocmget,
.tiocmset = edge_tiocmset,
+ .get_icount = edge_get_icount,
.write = edge_write,
.write_room = edge_write_room,
.chars_in_buffer = edge_chars_in_buffer,
unsigned char last_msr; /* Modem Status Register */
unsigned int rx_flags; /* Throttling flags */
struct async_icount icount;
- wait_queue_head_t msr_wait; /* for handling sleeping while waiting
- for msr change to happen */
};
#define THROTTLED 0x01
return -ENOMEM;
spin_lock_init(&priv->lock);
- init_waitqueue_head(&priv->msr_wait);
usb_set_serial_port_data(port, priv);
tty_kref_put(tty);
}
#endif
- wake_up_interruptible(&priv->msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
spin_unlock_irqrestore(&priv->lock, flags);
exit:
retval = usb_submit_urb(urb, GFP_ATOMIC);
cprev = mct_u232_port->icount;
spin_unlock_irqrestore(&mct_u232_port->lock, flags);
for ( ; ; ) {
- prepare_to_wait(&mct_u232_port->msr_wait,
+ prepare_to_wait(&port->delta_msr_wait,
&wait, TASK_INTERRUPTIBLE);
schedule();
- finish_wait(&mct_u232_port->msr_wait, &wait);
+ finish_wait(&port->delta_msr_wait, &wait);
/* see if a signal did it */
if (signal_pending(current))
return -ERESTARTSYS;
+
+ if (port->serial->disconnected)
+ return -EIO;
+
spin_lock_irqsave(&mct_u232_port->lock, flags);
cnow = mct_u232_port->icount;
spin_unlock_irqrestore(&mct_u232_port->lock, flags);
char open;
char open_ports;
wait_queue_head_t wait_chase; /* for handling sleeping while waiting for chase to finish */
- wait_queue_head_t delta_msr_wait; /* for handling sleeping while waiting for msr change to happen */
int delta_msr_cond;
struct async_icount icount;
struct usb_serial_port *port; /* loop back to the owner of this object */
icount->rng++;
smp_wmb();
}
+
+ mos7840_port->delta_msr_cond = 1;
+ wake_up_interruptible(&port->port->delta_msr_wait);
}
}
/* initialize our wait queues */
init_waitqueue_head(&mos7840_port->wait_chase);
- init_waitqueue_head(&mos7840_port->delta_msr_wait);
/* initialize our icount structure */
memset(&(mos7840_port->icount), 0x00, sizeof(mos7840_port->icount));
mos7840_port->read_urb_busy = false;
}
}
- wake_up(&mos7840_port->delta_msr_wait);
- mos7840_port->delta_msr_cond = 1;
dev_dbg(&port->dev, "%s - mos7840_port->shadowLCR is End %x\n", __func__,
mos7840_port->shadowLCR);
}
while (1) {
/* interruptible_sleep_on(&mos7840_port->delta_msr_wait); */
mos7840_port->delta_msr_cond = 0;
- wait_event_interruptible(mos7840_port->delta_msr_wait,
- (mos7840_port->
+ wait_event_interruptible(port->delta_msr_wait,
+ (port->serial->disconnected ||
+ mos7840_port->
delta_msr_cond == 1));
/* see if a signal did it */
if (signal_pending(current))
return -ERESTARTSYS;
+
+ if (port->serial->disconnected)
+ return -EIO;
+
cnow = mos7840_port->icount;
smp_rmb();
if (cnow.rng == cprev.rng && cnow.dsr == cprev.dsr &&
u8 setup_done;
struct delayed_work delayed_setup_work;
- wait_queue_head_t intr_wait;
struct usb_serial_port *port; /* USB port with which associated */
};
return -ENOMEM;
spin_lock_init(&priv->lock);
- init_waitqueue_head(&priv->intr_wait);
priv->port = port;
INIT_DELAYED_WORK(&priv->delayed_setup_work, setup_line);
INIT_DELAYED_WORK(&priv->delayed_write_work, send_data);
spin_unlock_irqrestore(&priv->lock, flags);
while (1) {
- wait_event_interruptible(priv->intr_wait,
+ wait_event_interruptible(port->delta_msr_wait,
+ port->serial->disconnected ||
priv->status.pin_state != prev);
if (signal_pending(current))
return -ERESTARTSYS;
+ if (port->serial->disconnected)
+ return -EIO;
+
spin_lock_irqsave(&priv->lock, flags);
status = priv->status.pin_state & PIN_MASK;
spin_unlock_irqrestore(&priv->lock, flags);
if (!priv->transient) {
if (xs->pin_state != priv->status.pin_state)
- wake_up_interruptible(&priv->intr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
memcpy(&priv->status, xs, OTI6858_CTRL_PKT_SIZE);
}
struct pl2303_private {
spinlock_t lock;
- wait_queue_head_t delta_msr_wait;
u8 line_control;
u8 line_status;
};
return -ENOMEM;
spin_lock_init(&priv->lock);
- init_waitqueue_head(&priv->delta_msr_wait);
usb_set_serial_port_data(port, priv);
spin_unlock_irqrestore(&priv->lock, flags);
while (1) {
- interruptible_sleep_on(&priv->delta_msr_wait);
+ interruptible_sleep_on(&port->delta_msr_wait);
/* see if a signal did it */
if (signal_pending(current))
return -ERESTARTSYS;
+ if (port->serial->disconnected)
+ return -EIO;
+
spin_lock_irqsave(&priv->lock, flags);
status = priv->line_status;
spin_unlock_irqrestore(&priv->lock, flags);
spin_unlock_irqrestore(&priv->lock, flags);
if (priv->line_status & UART_BREAK_ERROR)
usb_serial_handle_break(port);
- wake_up_interruptible(&priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
tty = tty_port_tty_get(&port->port);
if (!tty)
line_status = priv->line_status;
priv->line_status &= ~UART_STATE_TRANSIENT_MASK;
spin_unlock_irqrestore(&priv->lock, flags);
- wake_up_interruptible(&priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
if (!urb->actual_length)
return;
u8 shadowLSR;
u8 shadowMSR;
- wait_queue_head_t delta_msr_wait; /* Used for TIOCMIWAIT */
struct async_icount icount;
struct usb_serial_port *port;
spin_unlock_irqrestore(&priv->lock, flags);
while (1) {
- wait_event_interruptible(priv->delta_msr_wait,
- ((priv->icount.rng != prev.rng) ||
+ wait_event_interruptible(port->delta_msr_wait,
+ (port->serial->disconnected ||
+ (priv->icount.rng != prev.rng) ||
(priv->icount.dsr != prev.dsr) ||
(priv->icount.dcd != prev.dcd) ||
(priv->icount.cts != prev.cts)));
if (signal_pending(current))
return -ERESTARTSYS;
+ if (port->serial->disconnected)
+ return -EIO;
+
spin_lock_irqsave(&priv->lock, flags);
cur = priv->icount;
spin_unlock_irqrestore(&priv->lock, flags);
spin_lock_init(&port_priv->lock);
spin_lock_init(&port_priv->urb_lock);
- init_waitqueue_head(&port_priv->delta_msr_wait);
port_priv->port = port;
port_priv->write_urb = usb_alloc_urb(0, GFP_KERNEL);
if (newMSR & UART_MSR_TERI)
port_priv->icount.rng++;
- wake_up_interruptible(&port_priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
}
}
struct spcp8x5_private {
spinlock_t lock;
enum spcp8x5_type type;
- wait_queue_head_t delta_msr_wait;
u8 line_control;
u8 line_status;
};
return -ENOMEM;
spin_lock_init(&priv->lock);
- init_waitqueue_head(&priv->delta_msr_wait);
priv->type = type;
usb_set_serial_port_data(port , priv);
priv->line_status &= ~UART_STATE_TRANSIENT_MASK;
spin_unlock_irqrestore(&priv->lock, flags);
/* wake up the wait for termios */
- wake_up_interruptible(&priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
if (!urb->actual_length)
return;
while (1) {
/* wake up in bulk read */
- interruptible_sleep_on(&priv->delta_msr_wait);
+ interruptible_sleep_on(&port->delta_msr_wait);
/* see if a signal did it */
if (signal_pending(current))
return -ERESTARTSYS;
+ if (port->serial->disconnected)
+ return -EIO;
+
spin_lock_irqsave(&priv->lock, flags);
status = priv->line_status;
spin_unlock_irqrestore(&priv->lock, flags);
spinlock_t status_lock;
u8 shadowLSR;
u8 shadowMSR;
- wait_queue_head_t delta_msr_wait; /* Used for TIOCMIWAIT */
struct async_icount icount;
};
spin_unlock_irqrestore(&priv->status_lock, flags);
while (1) {
- wait_event_interruptible(priv->delta_msr_wait,
- ((priv->icount.rng != prev.rng) ||
+ wait_event_interruptible(port->delta_msr_wait,
+ (port->serial->disconnected ||
+ (priv->icount.rng != prev.rng) ||
(priv->icount.dsr != prev.dsr) ||
(priv->icount.dcd != prev.dcd) ||
(priv->icount.cts != prev.cts)));
if (signal_pending(current))
return -ERESTARTSYS;
+ if (port->serial->disconnected)
+ return -EIO;
+
spin_lock_irqsave(&priv->status_lock, flags);
cur = priv->icount;
spin_unlock_irqrestore(&priv->status_lock, flags);
return -ENOMEM;
spin_lock_init(&priv->status_lock);
- init_waitqueue_head(&priv->delta_msr_wait);
usb_set_serial_port_data(port, priv);
priv->icount.dcd++;
if (msr & UART_MSR_TERI)
priv->icount.rng++;
- wake_up_interruptible(&priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
}
}
int tp_flags;
int tp_closing_wait;/* in .01 secs */
struct async_icount tp_icount;
- wait_queue_head_t tp_msr_wait; /* wait for msr change */
wait_queue_head_t tp_write_wait;
struct ti_device *tp_tdev;
struct usb_serial_port *tp_port;
else
tport->tp_uart_base_addr = TI_UART2_BASE_ADDR;
tport->tp_closing_wait = closing_wait;
- init_waitqueue_head(&tport->tp_msr_wait);
init_waitqueue_head(&tport->tp_write_wait);
if (kfifo_alloc(&tport->write_fifo, TI_WRITE_BUF_SIZE, GFP_KERNEL)) {
kfree(tport);
dev_dbg(&port->dev, "%s - TIOCMIWAIT\n", __func__);
cprev = tport->tp_icount;
while (1) {
- interruptible_sleep_on(&tport->tp_msr_wait);
+ interruptible_sleep_on(&port->delta_msr_wait);
if (signal_pending(current))
return -ERESTARTSYS;
+
+ if (port->serial->disconnected)
+ return -EIO;
+
cnow = tport->tp_icount;
if (cnow.rng == cprev.rng && cnow.dsr == cprev.dsr &&
cnow.dcd == cprev.dcd && cnow.cts == cprev.cts)
icount->dcd++;
if (msr & TI_MSR_DELTA_RI)
icount->rng++;
- wake_up_interruptible(&tport->tp_msr_wait);
+ wake_up_interruptible(&tport->tp_port->delta_msr_wait);
spin_unlock_irqrestore(&tport->tp_lock, flags);
}
}
}
+ usb_put_intf(serial->interface);
usb_put_dev(serial->dev);
kfree(serial);
}
}
serial->dev = usb_get_dev(dev);
serial->type = driver;
- serial->interface = interface;
+ serial->interface = usb_get_intf(interface);
kref_init(&serial->kref);
mutex_init(&serial->disc_mutex);
serial->minor = SERIAL_TTY_NO_MINOR;
port->port.ops = &serial_port_ops;
port->serial = serial;
spin_lock_init(&port->lock);
+ init_waitqueue_head(&port->delta_msr_wait);
/* Keep this for private driver use for the moment but
should probably go away */
INIT_WORK(&port->work, usb_serial_port_work);
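
The string of hunks above all make the same conversion: each USB-serial
driver's private modem-status wait queue is dropped in favor of the new
delta_msr_wait queue in struct usb_serial_port (initialized by the core
just above), and every TIOCMIWAIT loop gains a port->serial->disconnected
check so a waiter is woken with -EIO instead of sleeping forever after
the device is unplugged; the usb_get_intf()/usb_put_intf() hunks just
above pin the interface for the lifetime of the usb_serial structure for
the same reason. A minimal sketch of the resulting loop shape, where
my_priv and my_msr_changed() are hypothetical stand-ins for a driver's
private data and its delta comparison, not code from the patch:

#include <linux/sched.h>
#include <linux/serial.h>
#include <linux/tty.h>
#include <linux/usb/serial.h>
#include <linux/wait.h>

struct my_priv {
	struct async_icount icount;	/* updated from the interrupt path */
};

static bool my_msr_changed(const struct async_icount *now,
			   const struct async_icount *prev)
{
	return now->rng != prev->rng || now->dsr != prev->dsr ||
	       now->dcd != prev->dcd || now->cts != prev->cts;
}

static int my_tiocmiwait(struct tty_struct *tty, unsigned long arg)
{
	struct usb_serial_port *port = tty->driver_data;
	struct my_priv *priv = usb_get_serial_port_data(port);
	struct async_icount cprev = priv->icount;

	for (;;) {
		/* Sleep on the shared per-port queue the core now provides. */
		wait_event_interruptible(port->delta_msr_wait,
				port->serial->disconnected ||
				my_msr_changed(&priv->icount, &cprev));
		if (signal_pending(current))
			return -ERESTARTSYS;
		if (port->serial->disconnected)
			return -EIO;	/* woken by disconnect, not by MSR */
		if (my_msr_changed(&priv->icount, &cprev))
			return 0;
		cprev = priv->icount;
	}
}
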
USB_SC_DEVICE, USB_PR_DEVICE, NULL,
US_FL_MAX_SECTORS_64 | US_FL_BULK_IGNORE_TAG),
+/* Added by Dmitry Artamonow <mad_soft@inbox.ru> */
+UNUSUAL_DEV( 0x04e8, 0x5136, 0x0000, 0x9999,
+ "Samsung",
+ "YP-Z3",
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_MAX_SECTORS_64),
+
/* Entry and supporting patch by Theodore Kilgore <kilgota@auburn.edu>.
* Device uses standards-violating 32-byte Bulk Command Block Wrappers and
* reports itself as "Proprietary SCSI Bulk." Cf. device entry 0x084d:0x0011.
#include <linux/pci.h>
#include <linux/uaccess.h>
#include <linux/vfio.h>
+#include <linux/slab.h>
#include "vfio_pci_private.h"
#include <linux/vfio.h>
#include <linux/wait.h>
#include <linux/workqueue.h>
+#include <linux/slab.h>
#include "vfio_pci_private.h"
msg.msg_controllen = 0;
ubufs = NULL;
} else {
- struct ubuf_info *ubuf = &vq->ubuf_info[head];
+ struct ubuf_info *ubuf;
+ ubuf = vq->ubuf_info + vq->upend_idx;
vq->heads[vq->upend_idx].len =
VHOST_DMA_IN_PROGRESS;
VHOST_SCSI_VQ_IO = 2,
};
+/*
+ * VIRTIO_RING_F_EVENT_IDX seems broken. It is not clear whether the
+ * bug is in the kernel, but disabling the feature helps.
+ * TODO: debug and remove the workaround.
+ */
+enum {
+ VHOST_SCSI_FEATURES = VHOST_FEATURES & ~(1ULL << VIRTIO_RING_F_EVENT_IDX)
+};
+
#define VHOST_SCSI_MAX_TARGET 256
#define VHOST_SCSI_MAX_VQ 128
for (index = 0; index < vs->dev.nvqs; ++index) {
if (!vhost_vq_access_ok(&vs->vqs[index])) {
ret = -EFAULT;
- goto err;
+ goto err_dev;
}
}
for (i = 0; i < VHOST_SCSI_MAX_TARGET; i++) {
if (!tv_tpg)
continue;
+ mutex_lock(&tv_tpg->tv_tpg_mutex);
tv_tport = tv_tpg->tport;
if (!tv_tport) {
ret = -ENODEV;
- goto err;
+ goto err_tpg;
}
if (strcmp(tv_tport->tport_name, t->vhost_wwpn)) {
tv_tport->tport_name, tv_tpg->tport_tpgt,
t->vhost_wwpn, t->vhost_tpgt);
ret = -EINVAL;
- goto err;
+ goto err_tpg;
}
tv_tpg->tv_tpg_vhost_count--;
vs->vs_tpg[target] = NULL;
vs->vs_endpoint = false;
+ mutex_unlock(&tv_tpg->tv_tpg_mutex);
}
mutex_unlock(&vs->dev.mutex);
return 0;
-err:
+err_tpg:
+ mutex_unlock(&tv_tpg->tv_tpg_mutex);
+err_dev:
mutex_unlock(&vs->dev.mutex);
return ret;
}
for (i = 0; i < VHOST_SCSI_MAX_VQ; i++)
vhost_scsi_flush_vq(vs, i);
+ vhost_work_flush(&vs->dev, &vs->vs_completion_work);
}
static int vhost_scsi_set_features(struct vhost_scsi *vs, u64 features)
{
- if (features & ~VHOST_FEATURES)
+ if (features & ~VHOST_SCSI_FEATURES)
return -EOPNOTSUPP;
mutex_lock(&vs->dev.mutex);
return -EFAULT;
return 0;
case VHOST_GET_FEATURES:
- features = VHOST_FEATURES;
+ features = VHOST_SCSI_FEATURES;
if (copy_to_user(featurep, &features, sizeof features))
return -EFAULT;
return 0;
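
Note that the feature constants from <linux/virtio_ring.h> are bit
numbers, not masks, which is why the enum above shifts
VIRTIO_RING_F_EVENT_IDX before clearing it. On the other side of the
ioctl, userspace reads the narrowed set via VHOST_GET_FEATURES and must
acknowledge a subset via VHOST_SET_FEATURES or get -EOPNOTSUPP. A
hypothetical userspace sketch of that negotiation, assuming only the
standard vhost ioctls:

#include <linux/vhost.h>
#include <linux/virtio_ring.h>
#include <stdint.h>
#include <sys/ioctl.h>

static int negotiate_features(int vhost_fd)
{
	uint64_t features;

	if (ioctl(vhost_fd, VHOST_GET_FEATURES, &features) < 0)
		return -1;
	/* VIRTIO_RING_F_EVENT_IDX is bit 29; build a mask to clear it. */
	features &= ~(1ULL << VIRTIO_RING_F_EVENT_IDX);
	return ioctl(vhost_fd, VHOST_SET_FEATURES, &features);
}
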
= var->bits_per_pixel;
break;
case 16:
+ /* Older SOCs use IBGR:555 rather than BGR:565. */
+ if (sinfo->have_intensity_bit)
+ var->green.length = 5;
+ else
+ var->green.length = 6;
+
if (sinfo->lcd_wiring_mode == ATMEL_LCDC_WIRING_RGB) {
- /* RGB:565 mode */
- var->red.offset = 11;
+ /* RGB:5X5 mode */
+ var->red.offset = var->green.length + 5;
var->blue.offset = 0;
} else {
- /* BGR:565 mode */
+ /* BGR:5X5 mode */
var->red.offset = 0;
- var->blue.offset = 11;
+ var->blue.offset = var->green.length + 5;
}
var->green.offset = 5;
- var->green.length = 6;
var->red.length = var->blue.length = 5;
break;
case 32:
case FB_VISUAL_PSEUDOCOLOR:
if (regno < 256) {
- if (cpu_is_at91sam9261() || cpu_is_at91sam9263()
- || cpu_is_at91sam9rl()) {
+ if (sinfo->have_intensity_bit) {
/* old style I+BGR:555 */
val = ((red >> 11) & 0x001f);
val |= ((green >> 6) & 0x03e0);
}
sinfo->info = info;
sinfo->pdev = pdev;
+ if (cpu_is_at91sam9261() || cpu_is_at91sam9263() ||
+ cpu_is_at91sam9rl()) {
+ sinfo->have_intensity_bit = true;
+ }
strcpy(info->fix.id, sinfo->pdev->name);
info->flags = ATMEL_LCDFB_FBINFO_DEFAULT;
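
For reference, the 16bpp layouts the hunks above produce (bit offset of
each field within the pixel), derived from green.length and the wiring
mode:

	intensity bit	wiring	red	green	blue	format
	yes		RGB	10	5	0	IRGB:1555
	yes		BGR	0	5	10	IBGR:1555
	no		RGB	11	5	0	RGB:565
	no		BGR	0	5	11	BGR:565

So the old hard-coded offset of 11 was only correct for the 565 cases,
and the intensity-bit SOCs (at91sam9261/9263/9rl, per the probe hunk)
now get the 5-bit green field they actually wire up.
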
#include <linux/slab.h>
#include <linux/clk.h>
#include <linux/fb.h>
+#include <linux/io.h>
#include <linux/platform_data/video-ep93xx.h>
unsigned dotclk_delay;
const struct mxsfb_devdata *devdata;
int mapped;
+ u32 sync;
};
#define mxsfb_is_v3(host) (host->devdata->ipversion == 3)
vdctrl0 |= VDCTRL0_HSYNC_ACT_HIGH;
if (fb_info->var.sync & FB_SYNC_VERT_HIGH_ACT)
vdctrl0 |= VDCTRL0_VSYNC_ACT_HIGH;
- if (fb_info->var.sync & FB_SYNC_DATA_ENABLE_HIGH_ACT)
+ if (host->sync & MXSFB_SYNC_DATA_ENABLE_HIGH_ACT)
vdctrl0 |= VDCTRL0_ENABLE_ACT_HIGH;
- if (fb_info->var.sync & FB_SYNC_DOTCLK_FAILING_ACT)
+ if (host->sync & MXSFB_SYNC_DOTCLK_FAILING_ACT)
vdctrl0 |= VDCTRL0_DOTCLK_ACT_FAILING;
writel(vdctrl0, host->base + LCDC_VDCTRL0);
INIT_LIST_HEAD(&fb_info->modelist);
+ host->sync = pdata->sync;
+
ret = mxsfb_init_fbinfo(host);
if (ret != 0)
goto error_init_fb;
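
With this change the data-enable and dot-clock polarities come from
platform data (the new host->sync word and the MXSFB_SYNC_* flags)
instead of overloading fb_info->var.sync with nonstandard FB_SYNC_* bits
that userspace can clobber. A minimal board-file sketch, assuming the
mxsfb_platform_data layout of this era (the fields other than .sync are
illustrative placeholders):

#include <linux/mxsfb.h>

static struct mxsfb_platform_data my_fb_pdata = {
	/* .mode_list, .mode_count, .default_bpp, ... as before ... */
	.sync = MXSFB_SYNC_DATA_ENABLE_HIGH_ACT |	/* DE active high */
		MXSFB_SYNC_DOTCLK_FAILING_ACT,	/* latch on falling edge */
};
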
#include <linux/omap-dma.h>
+#include <mach/hardware.h>
+
#include "omapfb.h"
#include "lcdc.h"
u32 power_on_resume:1;
};
+/* used to pass the driver data (and its spi_device) from the SPI probe
+ * to the DSS portion of the driver */
+static struct tpo_td043_device *g_tpo_td043;
+
static int tpo_td043_write(struct spi_device *spi, u8 addr, u8 data)
{
struct spi_message m;
static int tpo_td043_probe(struct omap_dss_device *dssdev)
{
- struct tpo_td043_device *tpo_td043 = dev_get_drvdata(&dssdev->dev);
+ struct tpo_td043_device *tpo_td043 = g_tpo_td043;
int nreset_gpio = dssdev->reset_gpio;
int ret = 0;
if (ret)
dev_warn(&dssdev->dev, "failed to create sysfs files\n");
+ dev_set_drvdata(&dssdev->dev, tpo_td043);
+
return 0;
fail_gpio_req:
return -ENODEV;
}
+ if (g_tpo_td043 != NULL)
+ return -EBUSY;
+
spi->bits_per_word = 16;
spi->mode = SPI_MODE_0;
tpo_td043->spi = spi;
tpo_td043->nreset_gpio = dssdev->reset_gpio;
dev_set_drvdata(&spi->dev, tpo_td043);
- dev_set_drvdata(&dssdev->dev, tpo_td043);
+ g_tpo_td043 = tpo_td043;
omap_dss_register_driver(&tpo_td043_driver);
omap_dss_unregister_driver(&tpo_td043_driver);
kfree(tpo_td043);
+ g_tpo_td043 = NULL;
return 0;
}
static const enum omap_dss_output_id omap4_dss_supported_outputs[] = {
/* OMAP_DSS_CHANNEL_LCD */
- OMAP_DSS_OUTPUT_DPI | OMAP_DSS_OUTPUT_DBI |
- OMAP_DSS_OUTPUT_DSI1,
+ OMAP_DSS_OUTPUT_DBI | OMAP_DSS_OUTPUT_DSI1,
/* OMAP_DSS_CHANNEL_DIGIT */
- OMAP_DSS_OUTPUT_VENC | OMAP_DSS_OUTPUT_HDMI |
- OMAP_DSS_OUTPUT_DPI,
+ OMAP_DSS_OUTPUT_VENC | OMAP_DSS_OUTPUT_HDMI,
/* OMAP_DSS_CHANNEL_LCD2 */
OMAP_DSS_OUTPUT_DPI | OMAP_DSS_OUTPUT_DBI |
#include "sp5100_tco.h"
/* Module and version information */
-#define TCO_VERSION "0.03"
+#define TCO_VERSION "0.05"
#define TCO_MODULE_NAME "SP5100 TCO timer"
#define TCO_DRIVER_NAME TCO_MODULE_NAME ", v" TCO_VERSION
/* internal variables */
static u32 tcobase_phys;
-static u32 resbase_phys;
static u32 tco_wdt_fired;
static void __iomem *tcobase;
static unsigned int pm_iobase;
static unsigned long timer_alive;
static char tco_expect_close;
static struct pci_dev *sp5100_tco_pci;
-static struct resource wdt_res = {
- .name = "Watchdog Timer",
- .flags = IORESOURCE_MEM,
-};
/* the watchdog platform device */
static struct platform_device *sp5100_tco_platform_device;
MODULE_PARM_DESC(nowayout, "Watchdog cannot be stopped once started."
" (default=" __MODULE_STRING(WATCHDOG_NOWAYOUT) ")");
-static unsigned int force_addr;
-module_param(force_addr, uint, 0);
-MODULE_PARM_DESC(force_addr, "Force the use of specified MMIO address."
- " ONLY USE THIS PARAMETER IF YOU REALLY KNOW"
- " WHAT YOU ARE DOING (default=none)");
-
/*
* Some TCO specific functions
*/
}
}
-static void tco_timer_disable(void)
-{
- int val;
-
- if (sp5100_tco_pci->revision >= 0x40) {
- /* For SB800 or later */
- /* Enable watchdog decode bit and Disable watchdog timer */
- outb(SB800_PM_WATCHDOG_CONTROL, SB800_IO_PM_INDEX_REG);
- val = inb(SB800_IO_PM_DATA_REG);
- val |= SB800_PCI_WATCHDOG_DECODE_EN;
- val |= SB800_PM_WATCHDOG_DISABLE;
- outb(val, SB800_IO_PM_DATA_REG);
- } else {
- /* For SP5100 or SB7x0 */
- /* Enable watchdog decode bit */
- pci_read_config_dword(sp5100_tco_pci,
- SP5100_PCI_WATCHDOG_MISC_REG,
- &val);
-
- val |= SP5100_PCI_WATCHDOG_DECODE_EN;
-
- pci_write_config_dword(sp5100_tco_pci,
- SP5100_PCI_WATCHDOG_MISC_REG,
- val);
-
- /* Disable Watchdog timer */
- outb(SP5100_PM_WATCHDOG_CONTROL, SP5100_IO_PM_INDEX_REG);
- val = inb(SP5100_IO_PM_DATA_REG);
- val |= SP5100_PM_WATCHDOG_DISABLE;
- outb(val, SP5100_IO_PM_DATA_REG);
- }
-}
-
/*
* /dev/watchdog handling
*/
{
struct pci_dev *dev = NULL;
const char *dev_name = NULL;
- u32 val, tmp_val;
+ u32 val;
u32 index_reg, data_reg, base_addr;
/* Match the PCI device */
} else
pr_debug("SBResource_MMIO is disabled(0x%04x)\n", val);
- /*
- * Lastly re-programming the watchdog timer MMIO address,
- * This method is a last resort...
- *
- * Before re-programming, to ensure that the watchdog timer
- * is disabled, disable the watchdog timer.
- */
- tco_timer_disable();
-
- if (force_addr) {
- /*
- * Force the use of watchdog timer MMIO address, and aligned to
- * 8byte boundary.
- */
- force_addr &= ~0x7;
- val = force_addr;
-
- pr_info("Force the use of 0x%04x as MMIO address\n", val);
- } else {
- /*
- * Get empty slot into the resource tree for watchdog timer.
- */
- if (allocate_resource(&iomem_resource,
- &wdt_res,
- SP5100_WDT_MEM_MAP_SIZE,
- 0xf0000000,
- 0xfffffff8,
- 0x8,
- NULL,
- NULL)) {
- pr_err("MMIO allocation failed\n");
- goto unreg_region;
- }
-
- val = resbase_phys = wdt_res.start;
- pr_debug("Got 0x%04x from resource tree\n", val);
- }
-
- /* Restore to the low three bits */
- outb(base_addr+0, index_reg);
- tmp_val = val | (inb(data_reg) & 0x7);
-
- /* Re-programming the watchdog timer base address */
- outb(base_addr+0, index_reg);
- outb((tmp_val >> 0) & 0xff, data_reg);
- outb(base_addr+1, index_reg);
- outb((tmp_val >> 8) & 0xff, data_reg);
- outb(base_addr+2, index_reg);
- outb((tmp_val >> 16) & 0xff, data_reg);
- outb(base_addr+3, index_reg);
- outb((tmp_val >> 24) & 0xff, data_reg);
-
- if (!request_mem_region_exclusive(val, SP5100_WDT_MEM_MAP_SIZE,
- dev_name)) {
- pr_err("MMIO address 0x%04x already in use\n", val);
- goto unreg_resource;
- }
+ pr_notice("failed to find MMIO address, giving up.\n");
+ goto unreg_region;
setup_wdt:
tcobase_phys = val;
unreg_mem_region:
release_mem_region(tcobase_phys, SP5100_WDT_MEM_MAP_SIZE);
-unreg_resource:
- if (resbase_phys)
- release_resource(&wdt_res);
unreg_region:
release_region(pm_iobase, SP5100_PM_IOPORTS_SIZE);
exit:
static int sp5100_tco_init(struct platform_device *dev)
{
int ret;
- char addr_str[16];
/*
* Check whether or not the hardware watchdog is there. If found, then
clear_bit(0, &timer_alive);
/* Show module parameters */
- if (force_addr == tcobase_phys)
- /* The force_addr is vaild */
- sprintf(addr_str, "0x%04x", force_addr);
- else
- strcpy(addr_str, "none");
-
- pr_info("initialized (0x%p). heartbeat=%d sec (nowayout=%d, "
- "force_addr=%s)\n",
- tcobase, heartbeat, nowayout, addr_str);
+ pr_info("initialized (0x%p). heartbeat=%d sec (nowayout=%d)\n",
+ tcobase, heartbeat, nowayout);
return 0;
exit:
iounmap(tcobase);
release_mem_region(tcobase_phys, SP5100_WDT_MEM_MAP_SIZE);
- if (resbase_phys)
- release_resource(&wdt_res);
release_region(pm_iobase, SP5100_PM_IOPORTS_SIZE);
return ret;
}
misc_deregister(&sp5100_tco_miscdev);
iounmap(tcobase);
release_mem_region(tcobase_phys, SP5100_WDT_MEM_MAP_SIZE);
- if (resbase_phys)
- release_resource(&wdt_res);
release_region(pm_iobase, SP5100_PM_IOPORTS_SIZE);
}
#define SB800_PM_WATCHDOG_DISABLE (1 << 2)
#define SB800_PM_WATCHDOG_SECOND_RES (3 << 0)
#define SB800_ACPI_MMIO_DECODE_EN (1 << 0)
-#define SB800_ACPI_MMIO_SEL (1 << 2)
+#define SB800_ACPI_MMIO_SEL (1 << 1)
#define SB800_PM_WDT_MMIO_OFFSET 0xB00
config XEN_STUB
bool "Xen stub drivers"
- depends on XEN && X86_64
+ depends on XEN && X86_64 && BROKEN
default n
help
Allow kernel to install stub drivers, to reserve space for Xen drivers,
if (unlikely((cpu != cpu_from_evtchn(port))))
do_hypercall = 1;
- else
+ else {
+ /*
+ * Need to clear the mask before checking pending to
+ * avoid a race with an event becoming pending.
+ *
+ * EVTCHNOP_unmask will only trigger an upcall if the
+ * mask bit was set, so if a hypercall is needed
+ * remask the event.
+ */
+ sync_clear_bit(port, BM(&s->evtchn_mask[0]));
evtchn_pending = sync_test_bit(port, BM(&s->evtchn_pending[0]));
- if (unlikely(evtchn_pending && xen_hvm_domain()))
- do_hypercall = 1;
+ if (unlikely(evtchn_pending && xen_hvm_domain())) {
+ sync_set_bit(port, BM(&s->evtchn_mask[0]));
+ do_hypercall = 1;
+ }
+ }
/* Slow path (hypercall) if this is a non-local port or if this is
* an hvm domain and an event is pending (hvm domains don't have
} else {
struct vcpu_info *vcpu_info = __this_cpu_read(xen_vcpu);
- sync_clear_bit(port, BM(&s->evtchn_mask[0]));
-
/*
* The following is basically the equivalent of
* 'hw_resend_irq'. Just like a real IO-APIC we 'lose
}
EXPORT_SYMBOL_GPL(xen_event_channel_op_compat);
-int HYPERVISOR_physdev_op_compat(int cmd, void *arg)
+int xen_physdev_op_compat(int cmd, void *arg)
{
struct physdev_op op;
int rc;
return rc;
}
+EXPORT_SYMBOL_GPL(xen_physdev_op_compat);
pr = per_cpu(processors, i);
perf = per_cpu_ptr(acpi_perf_data, i);
+ if (!pr)
+ continue;
+
pr->performance = perf;
rc = acpi_processor_get_performance_info(pr);
if (rc)
#include <xen/events.h>
#include <asm/xen/pci.h>
#include <asm/xen/hypervisor.h>
+#include <xen/interface/physdev.h>
#include "pciback.h"
#include "conf_space.h"
#include "conf_space_quirks.h"
static void pcistub_device_release(struct kref *kref)
{
struct pcistub_device *psdev;
+ struct pci_dev *dev;
struct xen_pcibk_dev_data *dev_data;
psdev = container_of(kref, struct pcistub_device, kref);
- dev_data = pci_get_drvdata(psdev->dev);
+ dev = psdev->dev;
+ dev_data = pci_get_drvdata(dev);
- dev_dbg(&psdev->dev->dev, "pcistub_device_release\n");
+ dev_dbg(&dev->dev, "pcistub_device_release\n");
- xen_unregister_device_domain_owner(psdev->dev);
+ xen_unregister_device_domain_owner(dev);
/* Call the reset function which does not take lock as this
* is called from "unbind" which takes a device_lock mutex.
*/
- __pci_reset_function_locked(psdev->dev);
- if (pci_load_and_free_saved_state(psdev->dev,
- &dev_data->pci_saved_state)) {
- dev_dbg(&psdev->dev->dev, "Could not reload PCI state\n");
- } else
- pci_restore_state(psdev->dev);
+ __pci_reset_function_locked(dev);
+ if (pci_load_and_free_saved_state(dev, &dev_data->pci_saved_state))
+ dev_dbg(&dev->dev, "Could not reload PCI state\n");
+ else
+ pci_restore_state(dev);
+
+ if (pci_find_capability(dev, PCI_CAP_ID_MSIX)) {
+ struct physdev_pci_device ppdev = {
+ .seg = pci_domain_nr(dev->bus),
+ .bus = dev->bus->number,
+ .devfn = dev->devfn
+ };
+ int err = HYPERVISOR_physdev_op(PHYSDEVOP_release_msix,
+ &ppdev);
+
+ if (err)
+ dev_warn(&dev->dev, "MSI-X release failed (%d)\n",
+ err);
+ }
/* Disable the device */
- xen_pcibk_reset_device(psdev->dev);
+ xen_pcibk_reset_device(dev);
kfree(dev_data);
- pci_set_drvdata(psdev->dev, NULL);
+ pci_set_drvdata(dev, NULL);
/* Clean-up the device */
- xen_pcibk_config_free_dyn_fields(psdev->dev);
- xen_pcibk_config_free_dev(psdev->dev);
+ xen_pcibk_config_free_dyn_fields(dev);
+ xen_pcibk_config_free_dev(dev);
- psdev->dev->dev_flags &= ~PCI_DEV_FLAGS_ASSIGNED;
- pci_dev_put(psdev->dev);
+ dev->dev_flags &= ~PCI_DEV_FLAGS_ASSIGNED;
+ pci_dev_put(dev);
kfree(psdev);
}
if (err)
goto config_release;
+ if (pci_find_capability(dev, PCI_CAP_ID_MSIX)) {
+ struct physdev_pci_device ppdev = {
+ .seg = pci_domain_nr(dev->bus),
+ .bus = dev->bus->number,
+ .devfn = dev->devfn
+ };
+
+ err = HYPERVISOR_physdev_op(PHYSDEVOP_prepare_msix, &ppdev);
+ if (err)
+ dev_err(&dev->dev, "MSI-X preparation failed (%d)\n",
+ err);
+ }
+
/* We need the device active to save the state. */
dev_dbg(&dev->dev, "save state of device\n");
pci_save_state(dev);
fw-shipped-$(CONFIG_SCSI_QLOGIC_1280) += qlogic/1040.bin qlogic/1280.bin \
qlogic/12160.bin
fw-shipped-$(CONFIG_SCSI_QLOGICPTI) += qlogic/isp1000.bin
-fw-shipped-$(CONFIG_INFINIBAND_QIB) += qlogic/sd7220.fw
+fw-shipped-$(CONFIG_INFINIBAND_QIB) += intel/sd7220.fw
fw-shipped-$(CONFIG_SND_KORG1212) += korg/k1212.dsp
fw-shipped-$(CONFIG_SND_MAESTRO3) += ess/maestro3_assp_kernel.fw \
ess/maestro3_assp_minisrc.fw
--- /dev/null
+:10000000020A29020A87E5E630E6047F0180027FC2
+:1000100000E5E230E4047E0180027E00EE5F6008CD
+:1000200053F9F7E4F5FE80087F0A121731120EA289
+:1000300075FC08E4F5FDE5E720E70343F908220035
+:1000400001201100042000755101E4F552F553F52B
+:1000500052F57E7F04020438C2360552E552D3942D
+:100060000C4005755201D23690070C7407F0A3744A
+:10007000FFF0E4F50CA3F0900714F0A3F0750B204B
+:10008000F509E4F508E508D39430400302040412AE
+:100090000006150BE50870047F0180027F00E5096A
+:1000A00070047E0180027E00EE5F6005121871D23E
+:1000B0003553E1F7E5084509FFE50B25E025E02488
+:1000C00083F582E43407F583EFF085E220E552D32F
+:1000D0009401400D1219F3E054A064407003020330
+:1000E000FB53F9F8909470E4F0E0F510AF09121E9C
+:1000F000B3AF08EF4408F582758380E0F529EF443B
+:1001000007121A3CF5225440D39400401EE52954AE
+:10011000F070211219F3E04480F0E52254306508B4
+:1001200070091219F3E054BFF080091219F37440FA
+:10013000F00203FB121A127583AE74FFF0AF087E53
+:1001400000EF4407F582E0FDE50B25E025E0248182
+:10015000F582E43407F583EDF090070EE004F0EF4C
+:100160004407F582758398E0F528121A23400C1293
+:1001700019F3E04401121A320203F6AF087E00744C
+:1001800080CDEFCD8D82F583E030E00A1219F3E0E7
+:100190004420F00203FB1219F3E054DFF0EE44AE0A
+:1001A000121A4330E4030203FB749E121A0520E086
+:1001B000030203FB8F828E83E020E0030203FB1225
+:1001C00019F3E04410F0E5E320E708E508121A3AD5
+:1001D0004404F0AF087E00EF121A3A20E2341219FC
+:1001E000F3E04408F0E5E430E6047D0180027D00A0
+:1001F000E57EC3940450047C0180027C00EC4D60D9
+:1002000005C2350203FBEE44D2121A434440F00209
+:1002100003FB1219F3E054F7F0121A127583D2E0BF
+:1002200054BFF0900714E004F0E57E7003757E0182
+:10023000AF087E00121A2340121219F3E044011293
+:1002400019F2E05402121A320203FB1219F3E044CD
+:10025000021219F2E054FEF0C235EE448A8F82F5A4
+:1002600083E0F517548F4440F07490FCE508440790
+:10027000FDF5828C83E0543F900702F0E054C08D7E
+:10028000828C83F07492121A05900703121A197463
+:1002900082121A05900704121A1974B4121A0590E2
+:1002A0000705121A197494FEE5084406121A0AF595
+:1002B0001030E004D2378002C237E510547F8F82BD
+:1002C0008E83F0304430121A035480D394004004DB
+:1002D000D2398002C2398F828E83E04480F0121AB4
+:1002E000035440D394004004D23A8002C23A8F8231
+:1002F0008E83E04440F07492FEE5084406121A0A28
+:1003000030E704D2388002C2388F828E83E0547F77
+:10031000F0121E46E4F50A20030280033043031264
+:1003200019952002028003304203120C8F303006F0
+:10033000121995120C8F120D471219F3E054FBF0AD
+:10034000E50AC39401404643E1081219F3E044046E
+:10035000F0E5E420E72A121A127583D2E05408D39C
+:10036000940040047F0180027F00E50AC3940140AD
+:10037000047E0180027E00EF5E6005121DD78017AB
+:10038000121A127583D2E04408F00203FB121A120B
+:100390007583D2E054F7F0121E467F0812173174AD
+:1003A0008EFE121A128E83E0F51054FEF0E5104412
+:1003B00001FFE508FDED4407F582EFF0E51054FE7E
+:1003C000FFED4407F582EF121A11758386E04410A1
+:1003D000121A11E04410F01219F3E054FD4401FF29
+:1003E0001219F3EF121A3230320CE5084408F58284
+:1003F0007583827405F0AF0B1218D774102508F5B9
+:10040000080200850509E509D3940750030200821C
+:10041000E57ED3940040047F0180027F00E57EC327
+:1004200094FA50047E0180027E00EE5F6002057E39
+:1004300030350B43E1017F0912173102005853E1B7
+:10044000FE0200588E6A8F6B8C6C8D6D756E017517
+:100450006F01757001E4F573F574F57590072FF071
+:10046000F53CF53EF546F547F53DF53FF56FE56F93
+:10047000700FE56B456A12072A758380743AF08025
+:100480000912072A758380741AF0E4F56EC3743F6D
+:10049000956EFF120865758382EFF0121A4D1208EF
+:1004A000C6E533F01208FA1208B140E1E56F700BAF
+:1004B00012072A7583807436F0800912072A758323
+:1004C000807416F0756E0112072A7583B4E56EF01C
+:1004D000121A4D743F256EF582E43400F583E5333E
+:1004E000F074BF256EF582E434001208B140D8E400
+:1004F000F570F546F547F56E1208FAF583E0FE1241
+:1005000008C6E07C002400FFEC3EFEAD3BD3EF9D2F
+:10051000EE9C50047B0180027B00E57070047A0140
+:1005200080027A00EB5A6006856E46757001D3EF43
+:100530009DEE9C50047F0180027F00E570B40104B1
+:100540007E0180027E00EF5E6003856E47056EE5EA
+:100550006E647F70A3E5466005E547B47E0385467B
+:1005600047E56F7008854676854777800EC3747FB0
+:100570009546F578C3747F9547F579E56F7037E553
+:10058000466547700C757301757401F53CF53D8047
+:1005900035E4F54EC3E5479546F53CC313F57125A3
+:1005A00046F572C3943F4005E4F53D8040C3743F77
+:1005B0009572F53D8037E5466547700F7573017597
+:1005C0007501F53EF53F754E018022E4F54EC3E519
+:1005D000479546F53EC313F5712546F572D3943F12
+:1005E0005005E4F53F8006E57224C1F53F056FE54F
+:1005F0006FC39402500302046EE56D456C70028077
+:1006000004E574457590072FF07F01E53E6004E531
+:100610003C7014E4F53CF53DF53EF53F1208D27010
+:1006200004F00206A4807AE53CC3953E4007E53C11
+:10063000953EFF8006C3E53E953CFFE576D3957970
+:10064000400585767A800385797AE577C395785079
+:100650000585777B800385787BE57BD3957A403071
+:10066000E57B957AF53CF53EC3E57B957A900719D5
+:10067000F0E53CC313F571257AF572C3943F40054C
+:10068000E4F53D801FC3743F9572F53DF53F80143E
+:10069000E4F53CF53E900719F01208D27003F080A3
+:1006A000037401F01208657583D0E0540FFEAD3C71
+:1006B00070027E07BE0F027E80EEFBEFD39B74803C
+:1006C000F898401FE4F53CF53E1208D27003F08024
+:1006D000127401F0E508FBEB4407F5827583D2E064
+:1006E0004410F0E508FBEB4409F58275839EEDF0BC
+:1006F000EB4407F5827583CAEDF01208657583CC6B
+:10070000EFF022E5084407F5827583BCE054F0F071
+:10071000E5084407F5827583BEE054F0F0E508442F
+:1007200007F5827583C0E054F0F0E5084407F582D0
+:1007300022F0900728E0FEA3E0F5828E8322854216
+:100740004285414185404074C02FF58274023EF5D8
+:1007500083E542F074E02FF58274023EF58322E5D2
+:100760004229FDE433FCE53CC39DEC6480F87480D1
+:100770009822F583E0900722541FFDE0FAA3E0F5EC
+:10078000828A83EDF022900722E0FCA3E0F5828CC0
+:100790008322900724FFED4407CFF0A3EFF02285DA
+:1007A0003838853939853A3A74C02FF58274023E5B
+:1007B000F58322900726FFED4407CFF0A3EFF02248
+:1007C000F074A02FF58274023EF5832274C02511C7
+:1007D000F582E43401F5832274002511F582E434B6
+:1007E00002F5832274602511F582E43403F5832237
+:1007F00074802511F582E43403F5832274E0251119
+:10080000F582E43403F5832274402511F582E43443
+:1008100006F5832274802FF58274023EF58322AFA1
+:10082000087E00EF4407F58222F583E5824407F550
+:1008300082E540F02274402511F582E43402F5830C
+:100840002274C02511F582E43403F5832274002557
+:1008500011F582E43406F5832274202511F582E433
+:100860003406F58322E508FDED4407F58222E541D3
+:10087000F0E56564014564227E00FB7A00FD7C00A2
+:100880002274202511F582E434022274A02511F58A
+:1008900082E4340322853E42853F418F4022853CDD
+:1008A00042853D418F402275453F900720E4F0A3EB
+:1008B00022F583E532F0056EE56EC3944022F0E543
+:1008C000084406F582227400256EF582E43400F5B2
+:1008D0008322E56D456C90072F22E4F9E53CD39522
+:1008E0003E2274802EF582E43402F583E02274A067
+:1008F0002EF582E43402F583E0227480256EF582C1
+:10090000E43400222542FDE433FC22854242854145
+:100910004185404022ED4C60030209E5EF4E7037FF
+:10092000900726120789E0FD1207CCEDF09007280A
+:10093000120789E0FD1207D8EDF0120786E0541F78
+:10094000FD120881F583EDF0900724120789E05429
+:100950001FFD120835EDF0EF64044E703790072646
+:10096000120789E0FD1207E4EDF0900728120789CD
+:10097000E0FD1207F0EDF0120786E0541FFD1208AB
+:100980008BF583EDF0900724120789E0541FFD12C8
+:100990000841EDF0EF64014E70047D0180027D009E
+:1009A000EF64024E70047F0180027F00EF4D60789B
+:1009B000900726120735E0FF1207FCEF120731E01F
+:1009C000FF120808EFF0900722120735E0541FFFCE
+:1009D00012084DEFF0900724120735E0541FFF1264
+:1009E0000859EFF0221207CCE4F01207D8E4F01215
+:1009F0000881F583E4F01208357414F01207E4E47A
+:100A0000F01207F0E4F012088BF583E4F0120841CD
+:100A10007414F01207FCE4F0120808E4F012084D18
+:100A2000E4F01208597414F02253F9F775FC10E43D
+:100A3000F5FD75FE30F5FFE5E720E70343F908E52E
+:100A4000E620E70B78FFE4F6D8FD53E6FE80097850
+:100A500008E4F6D8FD53E6FE758180E4F5A8D2A837
+:100A6000C2A9D2AFE5E220E50520E602800343E11A
+:100A700002E5E220E00E9000007F007E08E4F0A393
+:100A8000DFFCDEFA020ADB43FA01C0E0C0F0C083FB
+:100A9000C082C0D0121CE7D0D0D082D083D0F0D09A
+:100AA000E053FAFE32021B55E493A3F8E493A3F655
+:100AB00008DFF98029E493A3F85407240CC8C33352
+:100AC000C4540F4420C8834004F456800146F6DF26
+:100AD000E4800B010204081020408090003FE47E77
+:100AE000019360C1A3FF543F30E509541FFEE49316
+:100AF000A360010ECF54C025E060AD40B880FE8CED
+:100B0000648D658A668B67E4F569EF4E7003021D9C
+:100B100055E4F568E5674566703212072A758390DB
+:100B2000E41207297583C2E41207297583C4E4120D
+:100B30000870702912072A758392E41207297583B9
+:100B4000C6E41207297583C8E4F0801190072612C5
+:100B50000735E41208707005120732E4F0121D55D3
+:100B6000121EBFE5674566703312072A758390E54C
+:100B7000411207297583C2E5411207297583C41202
+:100B8000086E702912072A758392E54012072975AD
+:100B900083C6E5401207297583C8800E9007261288
+:100BA000073512086E7006120732E540F0AF697E15
+:100BB00000AD67AC6612044412072A7583CAE0D3FD
+:100BC0009400500C0568E568C394055003020B14AB
+:100BD000228C608D611208DA7420400D2FF582742A
+:100BE000033EF583E53EF0800B2FF58274033EF55E
+:100BF00083E53CF0E53CD3953E403CE561456070C3
+:100C000010E9120904E53E120768403B120895807E
+:100C100018E53EC39538401D853E38E53E600585A4
+:100C20003F3980038539398F3A120814E53E12079F
+:100C3000C0E53FF0228043E5614560701912075F0F
+:100C4000400512089E802712090B120814E5421273
+:100C500007C0E541F022E53CC39538401D853C388E
+:100C6000E53C6005853D3980038539398F3A1208A6
+:100C700014E53C1207C0E53DF02285383885393946
+:100C8000853A3A120814E5381207C0E539F0227F98
+:100C900006121731121D23120E04120E33E0440AFD
+:100CA000F0748EFE120E04120E0BEFF0E52830E504
+:100CB00003D38001C3400575142080037514081206
+:100CC0000E0475838AE514F0B4FF05751280800662
+:100CD000E514C313F512E4F516F57F121936121355
+:100CE000A3E50AC3940150090516E516C394144000
+:100CF000EAE5E420E728120E047583D2E05408D315
+:100D0000940040047F0180027F00E50AC394014003
+:100D1000047E0180027E00EF5E6003121DD7E57F36
+:100D2000C394114014120E047583D2E04480F0E5A0
+:100D3000E420E70F121DD7800A120E047583D2E05B
+:100D4000547FF0121D2322748A850882F583E517EB
+:100D5000F0120E3AE4F0900702E0120E177583903D
+:100D6000EFF07492FEE5084407FFF5828E83E054AD
+:100D7000C0FD900703E0543F4D8F828E83F09007B3
+:100D800004E0120E17758382EFF0900705E0FFED87
+:100D90004407F5827583B4EF120E03758380E05427
+:100DA000BFF030370A120E91758394E04480F03022
+:100DB000380A120E91758392E04480F0E52830E401
+:100DC0001A20390A120E04758388E0547FF0203A05
+:100DD0000A120E04758388E054BFF0748CFE120E64
+:100DE000048E83E0540F120E03758386E054BFF027
+:100DF000E5084406120DFD75838AE4F022F582753C
+:100E00008382E4F0E5084407F582228E83E0F51042
+:100E100054FEF0E5104401FFE508FDED4407F582BE
+:100E200022E515C45407FFE508FDED4408F5827579
+:100E3000838222758380E04440F0E5084408F5820F
+:100E400075838A22E51625E025E024AFF582E43497
+:100E50001AF583E493F50D2243E11043E18053E159
+:100E6000FD85E11022E51625E025E024B2F582E4B7
+:100E7000341AF583E49322855582855483E515F071
+:100E800022E5E25420D3940022E5E25440D39400BA
+:100E900022E5084406F58222FDE508FBEB4407F550
+:100EA000822253F9F775FE3022EF4E70261207CCDE
+:100EB000E0FD90072612077B1207D8E0FD90072877
+:100EC00012077B120881120772120835E09007247E
+:100ED000120778EF64044E70291207E4E0FD9007D2
+:100EE0002612077B1207F0E0FD90072812077B12FD
+:100EF000088B120772120841E0541FFD900724125C
+:100F0000077BEF64014E70047D0180027D00EF6479
+:100F1000024E70047F0180027F00EF4D60351207A2
+:100F2000FCE0FF900726120789EFF0120808E0FFA7
+:100F3000900728120789EFF012084DE0541FFF12A6
+:100F40000786EFF0120859E0541FFF90072412079C
+:100F500089EFF022E4F553120E8140047F018002F4
+:100F60007F00120E8940047E0180027E00EE4F70E9
+:100F700003020FF685E11043E10253E10F85E11012
+:100F8000E4F551E5E3543FF552120E89401DAD5290
+:100F9000AF51121118EF600885E11043E140800B5A
+:100FA00053E1BF120E5812000680FBE5E3543FF5F3
+:100FB00051E5E4543FF552120E81401DAD52AF5140
+:100FC000121118EF600885E11043E120800B53E116
+:100FD000DF120E5812000680FB120E8140047F01C2
+:100FE00080027F00120E8940047E0180027E00EEA6
+:100FF0004F6003120E5B22120E21EFF012109122AD
+:1010000002110002104002109000000000000000D9
+:1010100001200120E4F5571216BD121644E4121007
+:10102000561214B7900726120735E4120731E4F080
+:101030001210561214B7900726120735E541120711
+:1010400031E540F0AF577E00AD567C00120444AF4E
+:10105000567E000211EEFF900720A3E0FDE4F55656
+:10106000F540FEFCAB56FA1211517F0F7D18E4F5E6
+:1010700056F540FEFCAB56FA121541AF567E0012F3
+:101080001AFFE4FFF5567D1FF540FEFCAB56FA2231
+:1010900022E4F555E508FD74A0F556ED4407F55733
+:1010A000E52830E503D38001C340057F28EF8004A5
+:1010B0007F14EFC313F554E4F9120E1875838EE014
+:1010C000F510CEEFCEEED394004026E51054FE127C
+:1010D0000E9875838EEDF0E5104401FDEB4407F5A5
+:1010E00082EDF0855782855683E030E301091E804A
+:1010F000D4C234E9C395544002D2342202000622FD
+:10110000303011901000E493F510901010E493F536
+:101110001012109012115022E4FCC3ED9FFAEFF56B
+:101120008375820079FFE493CC6CCCA3D9F8DAF60E
+:10113000E5E230E4028CE5ED24FFFFEF7582FFF578
+:1011400083E4936C70037F01227F00222211000050
+:10115000228E588F598C5A8D5B8A5C8B5D755E012F
+:10116000E4F55FF560F56212072A7583D0E0FFC4ED
+:10117000540FF561121EA585595ED3E55E955BE5BA
+:101180005A12076B504B1207037583BCE0455E1281
+:1011900007297583BEE0455E1207297583C0E045C7
+:1011A0005EF0AF5FE560120878120AFFAF627E0062
+:1011B000AD5DAC5C120444E561AF5E7E00B4030536
+:1011C000121E218007AD5DAC5C121317055E021183
+:1011D0007A1207037583BCE045401207297583BE68
+:1011E000E045401207297583C0E04540F0228E5843
+:1011F0008F59755A017901755B01E4FB12072A7555
+:1012000083AEE0541AFF120865E0C4135407FEEFE2
+:10121000700CEE6535700790072FE0B4010DAF3507
+:101220007E00120EA9CFEBCF021E60E55964024585
+:101230005870047F0180027F00E559455870047E94
+:101240000180027E00EE4F602385414985404BE5D9
+:10125000594558702CAF5AFECDE9CDFCAB59AA5870
+:10126000120AFFAF5B7E00121E608015AF5B7E002E
+:10127000121E60900726120735E549120731E54B2B
+:10128000F0E4FDAF35FEFC120915228C648D651269
+:1012900008DA403CE56545647010120904C3E53E78
+:1012A000120769403B1208958018E53EC395384007
+:1012B0001D853E38E53E6005853F39800385393917
+:1012C0008F3A1207A8E53E120753E53FF022803B14
+:1012D000E5654564701112075F400512089E801F86
+:1012E00012073EE541F022E53CC39538401D853CA0
+:1012F00038E53C6005853D3980038539398F3A12E0
+:1013000007A8E53C120753E53DF02212079FE53898
+:10131000120753E539F0228C638D641208DA403CE1
+:10132000E56445637010120904C3E53E1207694085
+:101330003B1208958018E53EC39538401D853E3820
+:10134000E53E6005853F3980038539398F3A1207BC
+:10135000A8E53E120753E53FF022803BE564456374
+:10136000701112075F400512089E801F12073EE5AC
+:1013700041F022E53CC39538401D853C38E53C6092
+:1013800005853D3980038539398F3A1207A8E53C38
+:10139000120753E53DF02212079FE538120753E587
+:1013A00039F022E50DFEE5088E544405F555751516
+:1013B0000FF582120E7A1217A320310575150380DE
+:1013C0000375150BE50AC39401503812142020311F
+:1013D0000605150515800415151515E50AC39401B4
+:1013E0005021121420203104051580021515E50A3C
+:1013F000C39401500E120E771217A3203105051564
+:10140000120E77E515B408047F0180027F00E51510
+:10141000B407047E0180027E00EE4F6002057F2249
+:10142000855582855483E515F01217A32212072AE9
+:101430007583AE74FF120729E0541AF534E0C41323
+:101440005407F53524FE602424FE603C24047063B8
+:1014500075312DE508FD74B612079274BC90072211
+:1014600012079574901207B37492803C75313AE577
+:1014700008FD74BA12079274C09007221207B6745E
+:10148000C41207B374C88020753135E508FD74B8FF
+:1014900012079274BEFFED4407900722CFF0A3EF2E
+:1014A000F074C21207B374C6FFED4407A3CFF0A3D4
+:1014B000EFF022753401228E588F598C5A8D5B8A39
+:1014C0005C8B5D755E01E4F55F121EA585595ED3E8
+:1014D000E55E955BE55A12076B5057E55D455C701C
+:1014E0003012072A758392E55E1207297583C6E5D7
+:1014F0005E1207297583C8E55E120729758390E59A
+:101500005E1207297583C2E55E1207297583C480C0
+:1015100003120732E55EF0AF5F7E00AD5DAC5C129A
+:101520000444AF5E7E00AD5DAC5C120BD1055E0283
+:1015300014CFAB5DAA5CAD5BAC5AAF59AE58021B81
+:10154000FB8C5C8D5D8A5E8B5F756001E4F561F5F7
+:1015500062F563121EA58F60D3E560955DE55C12B0
+:10156000076B5061E55F455E702712072A7583B6E9
+:10157000E5601207297583B8E5601207297583BAFB
+:10158000E560F0AF617E00E56212087A120AFF8022
+:1015900019900724120735E56012072975838EE438
+:1015A0001207297401120729E4F0AF637E00AD5FD2
+:1015B000AC5E120444AF607E00AD5FAC5E12128B75
+:1015C00005600215582290114DE49390072EF012F9
+:1015D000081F7583AEE0541AF5347067EF4407F5C1
+:1015E000827583CEE0FF1313135407F536540FD3DF
+:1015F0009400400612142D121BA9E536540F24FE48
+:10160000600C14600C146019240370378010021EE3
+:1016100091121E9112072A7583CEE054EFF0021D3D
+:10162000AE121014E4F555121D850555E555C39409
+:101630000540F412072A7583CEE054C7120729E04B
+:101640004408F022E4F558F559AF08EF4407F58255
+:101650007583D0E0FDC4540FF55AEF4407F5827549
+:1016600083807401F0120821758382E545F0EF4410
+:1016700007F58275838A74FFF0121A4D12072A75D6
+:1016800083BCE054EF1207297583BEE054EF1207C4
+:10169000297583C0E054EF1207297583BCE044101C
+:1016A0001207297583BEE044101207297583C0E034
+:1016B0004410F0AF58E559120878020AFFE4F558D3
+:1016C0007D01F559AF35FEFC12091512072A758305
+:1016D000B674101207297583B87410120729758320
+:1016E000BA74101207297583BC7410120729758308
+:1016F000BE74101207297583C074101207297583F0
+:1017000090E41207297583C2E41207297583C4E4A3
+:10171000120729758392E41207297583C6E412071C
+:10172000297583C8E4F0AF58FEE55912087A020A19
+:10173000FFE5E230E46CE5E754C064407064E5091D
+:10174000C45430FEE50825E025E054C04EFEEF54B9
+:101750003F4EFDE52BAE2A7802C333CE33CED8F907
+:10176000F5828E83EDF0E52BAE2A7802C333CE33BB
+:10177000CED8F9FFF5828E83A3E5FEF08F828E83AB
+:10178000A3A3E5FDF08F828E83A3A3A3E5FCF0C3A2
+:10179000E52B94FAE52A94005008052BE52B7002FE
+:1017A000052A22E4FFE4F558F556F5577482FC1239
+:1017B0000E048C83E0F510547FF0E5104480120E87
+:1017C00098EDF07E0A120E047583A0E020E026DE7C
+:1017D000F40557E55770020556E5142401FDE4337E
+:1017E000FCD3E5579DE5569C40D9E50A942050026C
+:1017F000050A43E108C231120E047583A6E05512B2
+:1018000065127003D23122C23122900726E0FAA37A
+:10181000E0F5828A83E0F541E539C395414026E54C
+:10182000399541C39FEE12076B40047C0180027C16
+:1018300000E541643F60047B0180027B00EC5B605B
+:101840002905418028C3E5419539C39FEE12076BF6
+:1018500040047F0180027F00E54160047E01800238
+:101860007E00EF5E600415418003853941853A4072
+:1018700022E5E230E460E5E130E25BE50970047FF7
+:101880000180027F00E50870047E0180027E00EE88
+:101890005F604353F9F8E5E230E43BE5E130E22EE6
+:1018A00043FA0253FAFBE4F510909470E510F0E56A
+:1018B000E130E2E7909470E06510600343FA0405BC
+:1018C00010909470E510F070E612000680E153FA73
+:1018D000FD53FAFB80C0228F54120006E5E130E090
+:1018E000047F0180027F00E57ED3940540047E01E1
+:1018F00080027E00EE4F603D855411E5E220E1322A
+:1019000074CE121A0530E7047D0180027D008F82BB
+:101910008E83E030E6047F0180027F00EF5D70156A
+:101920001215C674CE121A0530E607E04480F04363
+:10193000F98012187122120E44E51625E025E024E4
+:10194000B0F582E4341AF583E493F50FE51625E04B
+:1019500025E024B1F582E4341AF583E493F50E1200
+:101960000E65F510E50F54F0120E1775838CEFF02D
+:10197000E50F30E00C120E04758386E04440F080E1
+:101980000A120E04758386E054BFF0120E9175831F
+:1019900082E50EF0227F05121731120E04120E336B
+:1019A0007402F0748EFE120E04120E0BEFF0751519
+:1019B00070120FF72034057515108003751550123D
+:1019C0000FF72034047410800274F02515F51512F9
+:1019D0000E21EFF0121091203417E5156430600CE1
+:1019E00074102515F515B48003E4F515120E21EFDA
+:1019F000F022F0E50B25E025E02482F582E43407AF
+:101A0000F583227488FEE5084407FFF5828E83E0A3
+:101A100022F0E5084407F58222F0E054C08F828E60
+:101A200083F022EF4407F582758386E05410D39447
+:101A30000022F0900715E004F0224406F582758339
+:101A40009EE022FEEF4407F5828E83E022E49007B9
+:101A50002AF0A3F012072A758382E0547F12072927
+:101A6000E04480F01210FC12081F7583A0E020E013
+:101A70001A90072BE004F0700690072AE004F0901B
+:101A8000072AE0B410E1A3E0B400DCEE44A6FCEFCA
+:101A90004407F5828C83E0F532EE44A8FEEF44075C
+:101AA000F5828E83E0F5332201201100042000909E
+:101AB00000200F9200210F9400220F9600230F9810
+:101AC00000240F9A00250F9C00260F9E00270FA0D0
+:101AD000012001A2012101A4012201A6012301A8E4
+:101AE000012401AA012501AC012601AE012701B0A4
+:101AF000012801B400280FB640280FB8612801CB97
+:101B0000EFCBCAEECA7F01E4FDEB4A7024E508F58D
+:101B10008274B6120829E508F58274B8120829E51E
+:101B200008F58274BA1208297E007C00120AFF8030
+:101B300012900726120735E541F090072412073569
+:101B4000E540F012072A75838EE41207297401120A
+:101B50000729E4F022E4F526F52753E1FEF52A757E
+:101B60002B01F5087F0112173130301C901AA9E4BF
+:101B700093F510901FF9E493F510900041E493F56C
+:101B800010901ECAE493F5107F02121731120F5401
+:101B90007F03121731120006E5E230E70912100048
+:101BA00030300312110002004712081F7583D0E085
+:101BB000C4540FFD7543017544FF1208AA7404F064
+:101BC000753B01ED14600C14600B14600F2403705E
+:101BD0000B800980001208A704F080061208A77481
+:101BE00004F0EE4482FEEF4407F5828E83E5451251
+:101BF00008BE758382E531F002114C8E608F611250
+:101C00001EA5E4FFCEEDCEEED39561E56012076B25
+:101C1000403974202EF582E43403F583E07003FF2D
+:101C200080261208E2FDC39F401ECFEDCFEB4A7025
+:101C30000B8D421208EEF5418E40800C1208E2F541
+:101C4000381208EEF5398E3A1E80BC22755801E52F
+:101C500035700C1207CCE0F54A1207D8E0F54CE5D8
+:101C600035B4040C1207E4E0F54A1207F0E0F54C35
+:101C7000E535B401047F0180027F00E535B402043C
+:101C80007E0180027E00EE4F600C1207FCE0F54AF8
+:101C9000120808E0F54C85414985404B22755B01EF
+:101CA000900724120735E0541FFFD3940250048F8D
+:101CB000588005EF24FEF558EFC394184005755978
+:101CC000188004EF04F55985435AAF587E00AD598A
+:101CD0007C00AB5B7A00121541AF5A7E0012180AE5
+:101CE000AF5B7E00021AFFE5E230E70E121003C27E
+:101CF000303030031210FF203328E5E730E70512BB
+:101D00000EA2800DE5FEC394205006120EA243F9E8
+:101D100008E5F230E70353F97FE5F15470D39400FE
+:101D200050D822120E04758380E4F0E508440712AF
+:101D30000DFD758384120E02758386120E02758363
+:101D40008CE054F3120E0375838E120E0275839489
+:101D5000E054FBF02212072A75838EE412072974DF
+:101D600001120729E41208BE75838CE04420120892
+:101D7000BEE054DFF07484850882F583E0547FF080
+:101D8000E04480F022755601E4FDF557AF35FEFCC6
+:101D9000120915121C9D121E7A121C4CAF577E00A0
+:101DA000AD567C00120444AF567E000211EE75560B
+:101DB00001E4FDF557AF35FEFC120915121C9D120A
+:101DC0001E7A121C4CAF577E00AD567C00120444A4
+:101DD000AF567E000211EEE4F516120E44FEE50841
+:101DE0004405FF120E658F828E83F00516E516C33B
+:101DF000941440E6E508120E2BE4F022E4F558F5C1
+:101E000059F55AFFFEAD58FC1209157F047E00AD4E
+:101E1000587C001209157F027E00AD587C00020933
+:101E200015E53C253EFCE5422400FBE433FAECC317
+:101E30009BEA12076B400B8C42E53D253FF5418F35
+:101E4000402212090B227484F5188508198519821D
+:101E5000851883E0547FF0E04480F0E04480F02275
+:101E6000EF4E700B12072A7583D2E054DFF0221276
+:101E7000072A7583D2E04420F02275580190072686
+:101E8000120735E0543FF541120732E0543FF54068
+:101E900022755602E4F557121DFCAF577E00AD5671
+:101EA0007C00020444E4F542F541F540F538F5398B
+:101EB000F53A22EF5407FFE5F954F84FF5F9227F80
+:101EC00001E4FE0F0EBEFFFB2201200001042000F2
+:101ED0000000000000000000000000000000000002
+:101EE00000000000000000000000000000000000F2
+:101EF00000000000000000000000000000000000E2
+:101F000000000000000000000000000000000000D1
+:101F100000000000000000000000000000000000C1
+:101F200000000000000000000000000000000000B1
+:101F300000000000000000000000000000000000A1
+:101F40000000000000000000000000000000000091
+:101F50000000000000000000000000000000000081
+:101F60000000000000000000000000000000000071
+:101F70000000000000000000000000000000000061
+:101F80000000000000000000000000000000000051
+:101F90000000000000000000000000000000000041
+:101FA0000000000000000000000000000000000031
+:101FB0000000000000000000000000000000000021
+:101FC0000000000000000000000000000000000011
+:101FD0000000000000000000000000000000000001
+:101FE00000000000000000000000000000000000F1
+:101FF000000000000000000001201100042000810A
+:00000001FF
+++ /dev/null
-:10000000020A29020A87E5E630E6047F0180027FC2
-:1000100000E5E230E4047E0180027E00EE5F6008CD
-:1000200053F9F7E4F5FE80087F0A121731120EA289
-:1000300075FC08E4F5FDE5E720E70343F908220035
-:1000400001201100042000755101E4F552F553F52B
-:1000500052F57E7F04020438C2360552E552D3942D
-:100060000C4005755201D23690070C7407F0A3744A
-:10007000FFF0E4F50CA3F0900714F0A3F0750B204B
-:10008000F509E4F508E508D39430400302040412AE
-:100090000006150BE50870047F0180027F00E5096A
-:1000A00070047E0180027E00EE5F6005121871D23E
-:1000B0003553E1F7E5084509FFE50B25E025E02488
-:1000C00083F582E43407F583EFF085E220E552D32F
-:1000D0009401400D1219F3E054A064407003020330
-:1000E000FB53F9F8909470E4F0E0F510AF09121E9C
-:1000F000B3AF08EF4408F582758380E0F529EF443B
-:1001000007121A3CF5225440D39400401EE52954AE
-:10011000F070211219F3E04480F0E52254306508B4
-:1001200070091219F3E054BFF080091219F37440FA
-:10013000F00203FB121A127583AE74FFF0AF087E53
-:1001400000EF4407F582E0FDE50B25E025E0248182
-:10015000F582E43407F583EDF090070EE004F0EF4C
-:100160004407F582758398E0F528121A23400C1293
-:1001700019F3E04401121A320203F6AF087E00744C
-:1001800080CDEFCD8D82F583E030E00A1219F3E0E7
-:100190004420F00203FB1219F3E054DFF0EE44AE0A
-:1001A000121A4330E4030203FB749E121A0520E086
-:1001B000030203FB8F828E83E020E0030203FB1225
-:1001C00019F3E04410F0E5E320E708E508121A3AD5
-:1001D0004404F0AF087E00EF121A3A20E2341219FC
-:1001E000F3E04408F0E5E430E6047D0180027D00A0
-:1001F000E57EC3940450047C0180027C00EC4D60D9
-:1002000005C2350203FBEE44D2121A434440F00209
-:1002100003FB1219F3E054F7F0121A127583D2E0BF
-:1002200054BFF0900714E004F0E57E7003757E0182
-:10023000AF087E00121A2340121219F3E044011293
-:1002400019F2E05402121A320203FB1219F3E044CD
-:10025000021219F2E054FEF0C235EE448A8F82F5A4
-:1002600083E0F517548F4440F07490FCE508440790
-:10027000FDF5828C83E0543F900702F0E054C08D7E
-:10028000828C83F07492121A05900703121A197463
-:1002900082121A05900704121A1974B4121A0590E2
-:1002A0000705121A197494FEE5084406121A0AF595
-:1002B0001030E004D2378002C237E510547F8F82BD
-:1002C0008E83F0304430121A035480D394004004DB
-:1002D000D2398002C2398F828E83E04480F0121AB4
-:1002E000035440D394004004D23A8002C23A8F8231
-:1002F0008E83E04440F07492FEE5084406121A0A28
-:1003000030E704D2388002C2388F828E83E0547F77
-:10031000F0121E46E4F50A20030280033043031264
-:1003200019952002028003304203120C8F303006F0
-:10033000121995120C8F120D471219F3E054FBF0AD
-:10034000E50AC39401404643E1081219F3E044046E
-:10035000F0E5E420E72A121A127583D2E05408D39C
-:10036000940040047F0180027F00E50AC3940140AD
-:10037000047E0180027E00EF5E6005121DD78017AB
-:10038000121A127583D2E04408F00203FB121A120B
-:100390007583D2E054F7F0121E467F0812173174AD
-:1003A0008EFE121A128E83E0F51054FEF0E5104412
-:1003B00001FFE508FDED4407F582EFF0E51054FE7E
-:1003C000FFED4407F582EF121A11758386E04410A1
-:1003D000121A11E04410F01219F3E054FD4401FF29
-:1003E0001219F3EF121A3230320CE5084408F58284
-:1003F0007583827405F0AF0B1218D774102508F5B9
-:10040000080200850509E509D3940750030200821C
-:10041000E57ED3940040047F0180027F00E57EC327
-:1004200094FA50047E0180027E00EE5F6002057E39
-:1004300030350B43E1017F0912173102005853E1B7
-:10044000FE0200588E6A8F6B8C6C8D6D756E017517
-:100450006F01757001E4F573F574F57590072FF071
-:10046000F53CF53EF546F547F53DF53FF56FE56F93
-:10047000700FE56B456A12072A758380743AF08025
-:100480000912072A758380741AF0E4F56EC3743F6D
-:10049000956EFF120865758382EFF0121A4D1208EF
-:1004A000C6E533F01208FA1208B140E1E56F700BAF
-:1004B00012072A7583807436F0800912072A758323
-:1004C000807416F0756E0112072A7583B4E56EF01C
-:1004D000121A4D743F256EF582E43400F583E5333E
-:1004E000F074BF256EF582E434001208B140D8E400
-:1004F000F570F546F547F56E1208FAF583E0FE1241
-:1005000008C6E07C002400FFEC3EFEAD3BD3EF9D2F
-:10051000EE9C50047B0180027B00E57070047A0140
-:1005200080027A00EB5A6006856E46757001D3EF43
-:100530009DEE9C50047F0180027F00E570B40104B1
-:100540007E0180027E00EF5E6003856E47056EE5EA
-:100550006E647F70A3E5466005E547B47E0385467B
-:1005600047E56F7008854676854777800EC3747FB0
-:100570009546F578C3747F9547F579E56F7037E553
-:10058000466547700C757301757401F53CF53D8047
-:1005900035E4F54EC3E5479546F53CC313F57125A3
-:1005A00046F572C3943F4005E4F53D8040C3743F77
-:1005B0009572F53D8037E5466547700F7573017597
-:1005C0007501F53EF53F754E018022E4F54EC3E519
-:1005D000479546F53EC313F5712546F572D3943F12
-:1005E0005005E4F53F8006E57224C1F53F056FE54F
-:1005F0006FC39402500302046EE56D456C70028077
-:1006000004E574457590072FF07F01E53E6004E531
-:100610003C7014E4F53CF53DF53EF53F1208D27010
-:1006200004F00206A4807AE53CC3953E4007E53C11
-:10063000953EFF8006C3E53E953CFFE576D3957970
-:10064000400585767A800385797AE577C395785079
-:100650000585777B800385787BE57BD3957A403071
-:10066000E57B957AF53CF53EC3E57B957A900719D5
-:10067000F0E53CC313F571257AF572C3943F40054C
-:10068000E4F53D801FC3743F9572F53DF53F80143E
-:10069000E4F53CF53E900719F01208D27003F080A3
-:1006A000037401F01208657583D0E0540FFEAD3C71
-:1006B00070027E07BE0F027E80EEFBEFD39B74803C
-:1006C000F898401FE4F53CF53E1208D27003F08024
-:1006D000127401F0E508FBEB4407F5827583D2E064
-:1006E0004410F0E508FBEB4409F58275839EEDF0BC
-:1006F000EB4407F5827583CAEDF01208657583CC6B
-:10070000EFF022E5084407F5827583BCE054F0F071
-:10071000E5084407F5827583BEE054F0F0E508442F
-:1007200007F5827583C0E054F0F0E5084407F582D0
-:1007300022F0900728E0FEA3E0F5828E8322854216
-:100740004285414185404074C02FF58274023EF5D8
-:1007500083E542F074E02FF58274023EF58322E5D2
-:100760004229FDE433FCE53CC39DEC6480F87480D1
-:100770009822F583E0900722541FFDE0FAA3E0F5EC
-:10078000828A83EDF022900722E0FCA3E0F5828CC0
-:100790008322900724FFED4407CFF0A3EFF02285DA
-:1007A0003838853939853A3A74C02FF58274023E5B
-:1007B000F58322900726FFED4407CFF0A3EFF02248
-:1007C000F074A02FF58274023EF5832274C02511C7
-:1007D000F582E43401F5832274002511F582E434B6
-:1007E00002F5832274602511F582E43403F5832237
-:1007F00074802511F582E43403F5832274E0251119
-:10080000F582E43403F5832274402511F582E43443
-:1008100006F5832274802FF58274023EF58322AFA1
-:10082000087E00EF4407F58222F583E5824407F550
-:1008300082E540F02274402511F582E43402F5830C
-:100840002274C02511F582E43403F5832274002557
-:1008500011F582E43406F5832274202511F582E433
-:100860003406F58322E508FDED4407F58222E541D3
-:10087000F0E56564014564227E00FB7A00FD7C00A2
-:100880002274202511F582E434022274A02511F58A
-:1008900082E4340322853E42853F418F4022853CDD
-:1008A00042853D418F402275453F900720E4F0A3EB
-:1008B00022F583E532F0056EE56EC3944022F0E543
-:1008C000084406F582227400256EF582E43400F5B2
-:1008D0008322E56D456C90072F22E4F9E53CD39522
-:1008E0003E2274802EF582E43402F583E02274A067
-:1008F0002EF582E43402F583E0227480256EF582C1
-:10090000E43400222542FDE433FC22854242854145
-:100910004185404022ED4C60030209E5EF4E7037FF
-:10092000900726120789E0FD1207CCEDF09007280A
-:10093000120789E0FD1207D8EDF0120786E0541F78
-:10094000FD120881F583EDF0900724120789E05429
-:100950001FFD120835EDF0EF64044E703790072646
-:10096000120789E0FD1207E4EDF0900728120789CD
-:10097000E0FD1207F0EDF0120786E0541FFD1208AB
-:100980008BF583EDF0900724120789E0541FFD12C8
-:100990000841EDF0EF64014E70047D0180027D009E
-:1009A000EF64024E70047F0180027F00EF4D60789B
-:1009B000900726120735E0FF1207FCEF120731E01F
-:1009C000FF120808EFF0900722120735E0541FFFCE
-:1009D00012084DEFF0900724120735E0541FFF1264
-:1009E0000859EFF0221207CCE4F01207D8E4F01215
-:1009F0000881F583E4F01208357414F01207E4E47A
-:100A0000F01207F0E4F012088BF583E4F0120841CD
-:100A10007414F01207FCE4F0120808E4F012084D18
-:100A2000E4F01208597414F02253F9F775FC10E43D
-:100A3000F5FD75FE30F5FFE5E720E70343F908E52E
-:100A4000E620E70B78FFE4F6D8FD53E6FE80097850
-:100A500008E4F6D8FD53E6FE758180E4F5A8D2A837
-:100A6000C2A9D2AFE5E220E50520E602800343E11A
-:100A700002E5E220E00E9000007F007E08E4F0A393
-:100A8000DFFCDEFA020ADB43FA01C0E0C0F0C083FB
-:100A9000C082C0D0121CE7D0D0D082D083D0F0D09A
-:100AA000E053FAFE32021B55E493A3F8E493A3F655
-:100AB00008DFF98029E493A3F85407240CC8C33352
-:100AC000C4540F4420C8834004F456800146F6DF26
-:100AD000E4800B010204081020408090003FE47E77
-:100AE000019360C1A3FF543F30E509541FFEE49316
-:100AF000A360010ECF54C025E060AD40B880FE8CED
-:100B0000648D658A668B67E4F569EF4E7003021D9C
-:100B100055E4F568E5674566703212072A758390DB
-:100B2000E41207297583C2E41207297583C4E4120D
-:100B30000870702912072A758392E41207297583B9
-:100B4000C6E41207297583C8E4F0801190072612C5
-:100B50000735E41208707005120732E4F0121D55D3
-:100B6000121EBFE5674566703312072A758390E54C
-:100B7000411207297583C2E5411207297583C41202
-:100B8000086E702912072A758392E54012072975AD
-:100B900083C6E5401207297583C8800E9007261288
-:100BA000073512086E7006120732E540F0AF697E15
-:100BB00000AD67AC6612044412072A7583CAE0D3FD
-:100BC0009400500C0568E568C394055003020B14AB
-:100BD000228C608D611208DA7420400D2FF582742A
-:100BE000033EF583E53EF0800B2FF58274033EF55E
-:100BF00083E53CF0E53CD3953E403CE561456070C3
-:100C000010E9120904E53E120768403B120895807E
-:100C100018E53EC39538401D853E38E53E600585A4
-:100C20003F3980038539398F3A120814E53E12079F
-:100C3000C0E53FF0228043E5614560701912075F0F
-:100C4000400512089E802712090B120814E5421273
-:100C500007C0E541F022E53CC39538401D853C388E
-:100C6000E53C6005853D3980038539398F3A1208A6
-:100C700014E53C1207C0E53DF02285383885393946
-:100C8000853A3A120814E5381207C0E539F0227F98
-:100C900006121731121D23120E04120E33E0440AFD
-:100CA000F0748EFE120E04120E0BEFF0E52830E504
-:100CB00003D38001C3400575142080037514081206
-:100CC0000E0475838AE514F0B4FF05751280800662
-:100CD000E514C313F512E4F516F57F121936121355
-:100CE000A3E50AC3940150090516E516C394144000
-:100CF000EAE5E420E728120E047583D2E05408D315
-:100D0000940040047F0180027F00E50AC394014003
-:100D1000047E0180027E00EF5E6003121DD7E57F36
-:100D2000C394114014120E047583D2E04480F0E5A0
-:100D3000E420E70F121DD7800A120E047583D2E05B
-:100D4000547FF0121D2322748A850882F583E517EB
-:100D5000F0120E3AE4F0900702E0120E177583903D
-:100D6000EFF07492FEE5084407FFF5828E83E054AD
-:100D7000C0FD900703E0543F4D8F828E83F09007B3
-:100D800004E0120E17758382EFF0900705E0FFED87
-:100D90004407F5827583B4EF120E03758380E05427
-:100DA000BFF030370A120E91758394E04480F03022
-:100DB000380A120E91758392E04480F0E52830E401
-:100DC0001A20390A120E04758388E0547FF0203A05
-:100DD0000A120E04758388E054BFF0748CFE120E64
-:100DE000048E83E0540F120E03758386E054BFF027
-:100DF000E5084406120DFD75838AE4F022F582753C
-:100E00008382E4F0E5084407F582228E83E0F51042
-:100E100054FEF0E5104401FFE508FDED4407F582BE
-:100E200022E515C45407FFE508FDED4408F5827579
-:100E3000838222758380E04440F0E5084408F5820F
-:100E400075838A22E51625E025E024AFF582E43497
-:100E50001AF583E493F50D2243E11043E18053E159
-:100E6000FD85E11022E51625E025E024B2F582E4B7
-:100E7000341AF583E49322855582855483E515F071
-:100E800022E5E25420D3940022E5E25440D39400BA
-:100E900022E5084406F58222FDE508FBEB4407F550
-:100EA000822253F9F775FE3022EF4E70261207CCDE
-:100EB000E0FD90072612077B1207D8E0FD90072877
-:100EC00012077B120881120772120835E09007247E
-:100ED000120778EF64044E70291207E4E0FD9007D2
-:100EE0002612077B1207F0E0FD90072812077B12FD
-:100EF000088B120772120841E0541FFD900724125C
-:100F0000077BEF64014E70047D0180027D00EF6479
-:100F1000024E70047F0180027F00EF4D60351207A2
-:100F2000FCE0FF900726120789EFF0120808E0FFA7
-:100F3000900728120789EFF012084DE0541FFF12A6
-:100F40000786EFF0120859E0541FFF90072412079C
-:100F500089EFF022E4F553120E8140047F018002F4
-:100F60007F00120E8940047E0180027E00EE4F70E9
-:100F700003020FF685E11043E10253E10F85E11012
-:100F8000E4F551E5E3543FF552120E89401DAD5290
-:100F9000AF51121118EF600885E11043E140800B5A
-:100FA00053E1BF120E5812000680FBE5E3543FF5F3
-:100FB00051E5E4543FF552120E81401DAD52AF5140
-:100FC000121118EF600885E11043E120800B53E116
-:100FD000DF120E5812000680FB120E8140047F01C2
-:100FE00080027F00120E8940047E0180027E00EEA6
-:100FF0004F6003120E5B22120E21EFF012109122AD
-:1010000002110002104002109000000000000000D9
-:1010100001200120E4F5571216BD121644E4121007
-:10102000561214B7900726120735E4120731E4F080
-:101030001210561214B7900726120735E541120711
-:1010400031E540F0AF577E00AD567C00120444AF4E
-:10105000567E000211EEFF900720A3E0FDE4F55656
-:10106000F540FEFCAB56FA1211517F0F7D18E4F5E6
-:1010700056F540FEFCAB56FA121541AF567E0012F3
-:101080001AFFE4FFF5567D1FF540FEFCAB56FA2231
-:1010900022E4F555E508FD74A0F556ED4407F55733
-:1010A000E52830E503D38001C340057F28EF8004A5
-:1010B0007F14EFC313F554E4F9120E1875838EE014
-:1010C000F510CEEFCEEED394004026E51054FE127C
-:1010D0000E9875838EEDF0E5104401FDEB4407F5A5
-:1010E00082EDF0855782855683E030E301091E804A
-:1010F000D4C234E9C395544002D2342202000622FD
-:10110000303011901000E493F510901010E493F536
-:101110001012109012115022E4FCC3ED9FFAEFF56B
-:101120008375820079FFE493CC6CCCA3D9F8DAF60E
-:10113000E5E230E4028CE5ED24FFFFEF7582FFF578
-:1011400083E4936C70037F01227F00222211000050
-:10115000228E588F598C5A8D5B8A5C8B5D755E012F
-:10116000E4F55FF560F56212072A7583D0E0FFC4ED
-:10117000540FF561121EA585595ED3E55E955BE5BA
-:101180005A12076B504B1207037583BCE0455E1281
-:1011900007297583BEE0455E1207297583C0E045C7
-:1011A0005EF0AF5FE560120878120AFFAF627E0062
-:1011B000AD5DAC5C120444E561AF5E7E00B4030536
-:1011C000121E218007AD5DAC5C121317055E021183
-:1011D0007A1207037583BCE045401207297583BE68
-:1011E000E045401207297583C0E04540F0228E5843
-:1011F0008F59755A017901755B01E4FB12072A7555
-:1012000083AEE0541AFF120865E0C4135407FEEFE2
-:10121000700CEE6535700790072FE0B4010DAF3507
-:101220007E00120EA9CFEBCF021E60E55964024585
-:101230005870047F0180027F00E559455870047E94
-:101240000180027E00EE4F602385414985404BE5D9
-:10125000594558702CAF5AFECDE9CDFCAB59AA5870
-:10126000120AFFAF5B7E00121E608015AF5B7E002E
-:10127000121E60900726120735E549120731E54B2B
-:10128000F0E4FDAF35FEFC120915228C648D651269
-:1012900008DA403CE56545647010120904C3E53E78
-:1012A000120769403B1208958018E53EC395384007
-:1012B0001D853E38E53E6005853F39800385393917
-:1012C0008F3A1207A8E53E120753E53FF022803B14
-:1012D000E5654564701112075F400512089E801F86
-:1012E00012073EE541F022E53CC39538401D853CA0
-:1012F00038E53C6005853D3980038539398F3A12E0
-:1013000007A8E53C120753E53DF02212079FE53898
-:10131000120753E539F0228C638D641208DA403CE1
-:10132000E56445637010120904C3E53E1207694085
-:101330003B1208958018E53EC39538401D853E3820
-:10134000E53E6005853F3980038539398F3A1207BC
-:10135000A8E53E120753E53FF022803BE564456374
-:10136000701112075F400512089E801F12073EE5AC
-:1013700041F022E53CC39538401D853C38E53C6092
-:1013800005853D3980038539398F3A1207A8E53C38
-:10139000120753E53DF02212079FE538120753E587
-:1013A00039F022E50DFEE5088E544405F555751516
-:1013B0000FF582120E7A1217A320310575150380DE
-:1013C0000375150BE50AC39401503812142020311F
-:1013D0000605150515800415151515E50AC39401B4
-:1013E0005021121420203104051580021515E50A3C
-:1013F000C39401500E120E771217A3203105051564
-:10140000120E77E515B408047F0180027F00E51510
-:10141000B407047E0180027E00EE4F6002057F2249
-:10142000855582855483E515F01217A32212072AE9
-:101430007583AE74FF120729E0541AF534E0C41323
-:101440005407F53524FE602424FE603C24047063B8
-:1014500075312DE508FD74B612079274BC90072211
-:1014600012079574901207B37492803C75313AE577
-:1014700008FD74BA12079274C09007221207B6745E
-:10148000C41207B374C88020753135E508FD74B8FF
-:1014900012079274BEFFED4407900722CFF0A3EF2E
-:1014A000F074C21207B374C6FFED4407A3CFF0A3D4
-:1014B000EFF022753401228E588F598C5A8D5B8A39
-:1014C0005C8B5D755E01E4F55F121EA585595ED3E8
-:1014D000E55E955BE55A12076B5057E55D455C701C
-:1014E0003012072A758392E55E1207297583C6E5D7
-:1014F0005E1207297583C8E55E120729758390E59A
-:101500005E1207297583C2E55E1207297583C480C0
-:1015100003120732E55EF0AF5F7E00AD5DAC5C129A
-:101520000444AF5E7E00AD5DAC5C120BD1055E0283
-:1015300014CFAB5DAA5CAD5BAC5AAF59AE58021B81
-:10154000FB8C5C8D5D8A5E8B5F756001E4F561F5F7
-:1015500062F563121EA58F60D3E560955DE55C12B0
-:10156000076B5061E55F455E702712072A7583B6E9
-:10157000E5601207297583B8E5601207297583BAFB
-:10158000E560F0AF617E00E56212087A120AFF8022
-:1015900019900724120735E56012072975838EE438
-:1015A0001207297401120729E4F0AF637E00AD5FD2
-:1015B000AC5E120444AF607E00AD5FAC5E12128B75
-:1015C00005600215582290114DE49390072EF012F9
-:1015D000081F7583AEE0541AF5347067EF4407F5C1
-:1015E000827583CEE0FF1313135407F536540FD3DF
-:1015F0009400400612142D121BA9E536540F24FE48
-:10160000600C14600C146019240370378010021EE3
-:1016100091121E9112072A7583CEE054EFF0021D3D
-:10162000AE121014E4F555121D850555E555C39409
-:101630000540F412072A7583CEE054C7120729E04B
-:101640004408F022E4F558F559AF08EF4407F58255
-:101650007583D0E0FDC4540FF55AEF4407F5827549
-:1016600083807401F0120821758382E545F0EF4410
-:1016700007F58275838A74FFF0121A4D12072A75D6
-:1016800083BCE054EF1207297583BEE054EF1207C4
-:10169000297583C0E054EF1207297583BCE044101C
-:1016A0001207297583BEE044101207297583C0E034
-:1016B0004410F0AF58E559120878020AFFE4F558D3
-:1016C0007D01F559AF35FEFC12091512072A758305
-:1016D000B674101207297583B87410120729758320
-:1016E000BA74101207297583BC7410120729758308
-:1016F000BE74101207297583C074101207297583F0
-:1017000090E41207297583C2E41207297583C4E4A3
-:10171000120729758392E41207297583C6E412071C
-:10172000297583C8E4F0AF58FEE55912087A020A19
-:10173000FFE5E230E46CE5E754C064407064E5091D
-:10174000C45430FEE50825E025E054C04EFEEF54B9
-:101750003F4EFDE52BAE2A7802C333CE33CED8F907
-:10176000F5828E83EDF0E52BAE2A7802C333CE33BB
-:10177000CED8F9FFF5828E83A3E5FEF08F828E83AB
-:10178000A3A3E5FDF08F828E83A3A3A3E5FCF0C3A2
-:10179000E52B94FAE52A94005008052BE52B7002FE
-:1017A000052A22E4FFE4F558F556F5577482FC1239
-:1017B0000E048C83E0F510547FF0E5104480120E87
-:1017C00098EDF07E0A120E047583A0E020E026DE7C
-:1017D000F40557E55770020556E5142401FDE4337E
-:1017E000FCD3E5579DE5569C40D9E50A942050026C
-:1017F000050A43E108C231120E047583A6E05512B2
-:1018000065127003D23122C23122900726E0FAA37A
-:10181000E0F5828A83E0F541E539C395414026E54C
-:10182000399541C39FEE12076B40047C0180027C16
-:1018300000E541643F60047B0180027B00EC5B605B
-:101840002905418028C3E5419539C39FEE12076BF6
-:1018500040047F0180027F00E54160047E01800238
-:101860007E00EF5E600415418003853941853A4072
-:1018700022E5E230E460E5E130E25BE50970047FF7
-:101880000180027F00E50870047E0180027E00EE88
-:101890005F604353F9F8E5E230E43BE5E130E22EE6
-:1018A00043FA0253FAFBE4F510909470E510F0E56A
-:1018B000E130E2E7909470E06510600343FA0405BC
-:1018C00010909470E510F070E612000680E153FA73
-:1018D000FD53FAFB80C0228F54120006E5E130E090
-:1018E000047F0180027F00E57ED3940540047E01E1
-:1018F00080027E00EE4F603D855411E5E220E1322A
-:1019000074CE121A0530E7047D0180027D008F82BB
-:101910008E83E030E6047F0180027F00EF5D70156A
-:101920001215C674CE121A0530E607E04480F04363
-:10193000F98012187122120E44E51625E025E024E4
-:10194000B0F582E4341AF583E493F50FE51625E04B
-:1019500025E024B1F582E4341AF583E493F50E1200
-:101960000E65F510E50F54F0120E1775838CEFF02D
-:10197000E50F30E00C120E04758386E04440F080E1
-:101980000A120E04758386E054BFF0120E9175831F
-:1019900082E50EF0227F05121731120E04120E336B
-:1019A0007402F0748EFE120E04120E0BEFF0751519
-:1019B00070120FF72034057515108003751550123D
-:1019C0000FF72034047410800274F02515F51512F9
-:1019D0000E21EFF0121091203417E5156430600CE1
-:1019E00074102515F515B48003E4F515120E21EFDA
-:1019F000F022F0E50B25E025E02482F582E43407AF
-:101A0000F583227488FEE5084407FFF5828E83E0A3
-:101A100022F0E5084407F58222F0E054C08F828E60
-:101A200083F022EF4407F582758386E05410D39447
-:101A30000022F0900715E004F0224406F582758339
-:101A40009EE022FEEF4407F5828E83E022E49007B9
-:101A50002AF0A3F012072A758382E0547F12072927
-:101A6000E04480F01210FC12081F7583A0E020E013
-:101A70001A90072BE004F0700690072AE004F0901B
-:101A8000072AE0B410E1A3E0B400DCEE44A6FCEFCA
-:101A90004407F5828C83E0F532EE44A8FEEF44075C
-:101AA000F5828E83E0F5332201201100042000909E
-:101AB00000200F9200210F9400220F9600230F9810
-:101AC00000240F9A00250F9C00260F9E00270FA0D0
-:101AD000012001A2012101A4012201A6012301A8E4
-:101AE000012401AA012501AC012601AE012701B0A4
-:101AF000012801B400280FB640280FB8612801CB97
-:101B0000EFCBCAEECA7F01E4FDEB4A7024E508F58D
-:101B10008274B6120829E508F58274B8120829E51E
-:101B200008F58274BA1208297E007C00120AFF8030
-:101B300012900726120735E541F090072412073569
-:101B4000E540F012072A75838EE41207297401120A
-:101B50000729E4F022E4F526F52753E1FEF52A757E
-:101B60002B01F5087F0112173130301C901AA9E4BF
-:101B700093F510901FF9E493F510900041E493F56C
-:101B800010901ECAE493F5107F02121731120F5401
-:101B90007F03121731120006E5E230E70912100048
-:101BA00030300312110002004712081F7583D0E085
-:101BB000C4540FFD7543017544FF1208AA7404F064
-:101BC000753B01ED14600C14600B14600F2403705E
-:101BD0000B800980001208A704F080061208A77481
-:101BE00004F0EE4482FEEF4407F5828E83E5451251
-:101BF00008BE758382E531F002114C8E608F611250
-:101C00001EA5E4FFCEEDCEEED39561E56012076B25
-:101C1000403974202EF582E43403F583E07003FF2D
-:101C200080261208E2FDC39F401ECFEDCFEB4A7025
-:101C30000B8D421208EEF5418E40800C1208E2F541
-:101C4000381208EEF5398E3A1E80BC22755801E52F
-:101C500035700C1207CCE0F54A1207D8E0F54CE5D8
-:101C600035B4040C1207E4E0F54A1207F0E0F54C35
-:101C7000E535B401047F0180027F00E535B402043C
-:101C80007E0180027E00EE4F600C1207FCE0F54AF8
-:101C9000120808E0F54C85414985404B22755B01EF
-:101CA000900724120735E0541FFFD3940250048F8D
-:101CB000588005EF24FEF558EFC394184005755978
-:101CC000188004EF04F55985435AAF587E00AD598A
-:101CD0007C00AB5B7A00121541AF5A7E0012180AE5
-:101CE000AF5B7E00021AFFE5E230E70E121003C27E
-:101CF000303030031210FF203328E5E730E70512BB
-:101D00000EA2800DE5FEC394205006120EA243F9E8
-:101D100008E5F230E70353F97FE5F15470D39400FE
-:101D200050D822120E04758380E4F0E508440712AF
-:101D30000DFD758384120E02758386120E02758363
-:101D40008CE054F3120E0375838E120E0275839489
-:101D5000E054FBF02212072A75838EE412072974DF
-:101D600001120729E41208BE75838CE04420120892
-:101D7000BEE054DFF07484850882F583E0547FF080
-:101D8000E04480F022755601E4FDF557AF35FEFCC6
-:101D9000120915121C9D121E7A121C4CAF577E00A0
-:101DA000AD567C00120444AF567E000211EE75560B
-:101DB00001E4FDF557AF35FEFC120915121C9D120A
-:101DC0001E7A121C4CAF577E00AD567C00120444A4
-:101DD000AF567E000211EEE4F516120E44FEE50841
-:101DE0004405FF120E658F828E83F00516E516C33B
-:101DF000941440E6E508120E2BE4F022E4F558F5C1
-:101E000059F55AFFFEAD58FC1209157F047E00AD4E
-:101E1000587C001209157F027E00AD587C00020933
-:101E200015E53C253EFCE5422400FBE433FAECC317
-:101E30009BEA12076B400B8C42E53D253FF5418F35
-:101E4000402212090B227484F5188508198519821D
-:101E5000851883E0547FF0E04480F0E04480F02275
-:101E6000EF4E700B12072A7583D2E054DFF0221276
-:101E7000072A7583D2E04420F02275580190072686
-:101E8000120735E0543FF541120732E0543FF54068
-:101E900022755602E4F557121DFCAF577E00AD5671
-:101EA0007C00020444E4F542F541F540F538F5398B
-:101EB000F53A22EF5407FFE5F954F84FF5F9227F80
-:101EC00001E4FE0F0EBEFFFB2201200001042000F2
-:101ED0000000000000000000000000000000000002
-:101EE00000000000000000000000000000000000F2
-:101EF00000000000000000000000000000000000E2
-:101F000000000000000000000000000000000000D1
-:101F100000000000000000000000000000000000C1
-:101F200000000000000000000000000000000000B1
-:101F300000000000000000000000000000000000A1
-:101F40000000000000000000000000000000000091
-:101F50000000000000000000000000000000000081
-:101F60000000000000000000000000000000000071
-:101F70000000000000000000000000000000000061
-:101F80000000000000000000000000000000000051
-:101F90000000000000000000000000000000000041
-:101FA0000000000000000000000000000000000031
-:101FB0000000000000000000000000000000000021
-:101FC0000000000000000000000000000000000011
-:101FD0000000000000000000000000000000000001
-:101FE00000000000000000000000000000000000F1
-:101FF000000000000000000001201100042000810A
-:00000001FF
if (tree_mod_dont_log(fs_info, NULL))
return 0;
+ __tree_mod_log_free_eb(fs_info, old_root);
+
ret = tree_mod_alloc(fs_info, flags, &tm);
if (ret < 0)
goto out;
static noinline void
tree_mod_log_eb_copy(struct btrfs_fs_info *fs_info, struct extent_buffer *dst,
struct extent_buffer *src, unsigned long dst_offset,
- unsigned long src_offset, int nr_items)
+ unsigned long src_offset, int nr_items, int log_removal)
{
int ret;
int i;
}
for (i = 0; i < nr_items; i++) {
- ret = tree_mod_log_insert_key_locked(fs_info, src,
- i + src_offset,
- MOD_LOG_KEY_REMOVE);
- BUG_ON(ret < 0);
+ if (log_removal) {
+ ret = tree_mod_log_insert_key_locked(fs_info, src,
+ i + src_offset,
+ MOD_LOG_KEY_REMOVE);
+ BUG_ON(ret < 0);
+ }
ret = tree_mod_log_insert_key_locked(fs_info, dst,
i + dst_offset,
MOD_LOG_KEY_ADD);
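
With the new log_removal flag, callers decide whether MOD_LOG_KEY_REMOVE entries are generated for the source buffer. A sketch of the two call shapes, mirroring the updated callers later in this patch (argument spelling abbreviated, e.g. fs_info for root->fs_info):

	/* ordinary node-to-node push: log removals from the source */
	tree_mod_log_eb_copy(fs_info, dst, src, dst_nritems, 0, push_items, 1);

	/*
	 * splitting right after insert_new_root(): the removals were already
	 * logged via tree_mod_log_set_root_pointer(), so skip them here
	 */
	tree_mod_log_eb_copy(fs_info, split, c, 0, mid, c_nritems - mid, 0);
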
ret = btrfs_dec_ref(trans, root, buf, 1, 1);
BUG_ON(ret); /* -ENOMEM */
}
- tree_mod_log_free_eb(root->fs_info, buf);
clean_tree_block(trans, root, buf);
*last_ref = 1;
}
btrfs_set_node_ptr_generation(parent, parent_slot,
trans->transid);
btrfs_mark_buffer_dirty(parent);
+ tree_mod_log_free_eb(root->fs_info, buf);
btrfs_free_tree_block(trans, root, buf, parent_start,
last_ref);
}
goto enospc;
}
- tree_mod_log_free_eb(root->fs_info, root->node);
tree_mod_log_set_root_pointer(root, child);
rcu_assign_pointer(root->node, child);
push_items = min(src_nritems - 8, push_items);
tree_mod_log_eb_copy(root->fs_info, dst, src, dst_nritems, 0,
- push_items);
+ push_items, 1);
copy_extent_buffer(dst, src,
btrfs_node_key_ptr_offset(dst_nritems),
btrfs_node_key_ptr_offset(0),
sizeof(struct btrfs_key_ptr));
tree_mod_log_eb_copy(root->fs_info, dst, src, 0,
- src_nritems - push_items, push_items);
+ src_nritems - push_items, push_items, 1);
copy_extent_buffer(dst, src,
btrfs_node_key_ptr_offset(0),
btrfs_node_key_ptr_offset(src_nritems - push_items),
int mid;
int ret;
u32 c_nritems;
+ int tree_mod_log_removal = 1;
c = path->nodes[level];
WARN_ON(btrfs_header_generation(c) != trans->transid);
if (c == root->node) {
/* trying to split the root, lets make a new one */
ret = insert_new_root(trans, root, path, level + 1);
+ /*
+ * removal of root nodes has been logged by
+ * tree_mod_log_set_root_pointer due to locking
+ */
+ tree_mod_log_removal = 0;
if (ret)
return ret;
} else {
(unsigned long)btrfs_header_chunk_tree_uuid(split),
BTRFS_UUID_SIZE);
- tree_mod_log_eb_copy(root->fs_info, split, c, 0, mid, c_nritems - mid);
+ tree_mod_log_eb_copy(root->fs_info, split, c, 0, mid, c_nritems - mid,
+ tree_mod_log_removal);
copy_extent_buffer(split, c,
btrfs_node_key_ptr_offset(0),
btrfs_node_key_ptr_offset(mid),
0, objectid, NULL, 0, 0, 0);
if (IS_ERR(leaf)) {
ret = PTR_ERR(leaf);
+ leaf = NULL;
goto fail;
}
btrfs_tree_unlock(leaf);
+ return root;
+
fail:
- if (ret)
- return ERR_PTR(ret);
+ if (leaf) {
+ btrfs_tree_unlock(leaf);
+ free_extent_buffer(leaf);
+ }
+ kfree(root);
- return root;
+ return ERR_PTR(ret);
}
static struct btrfs_root *alloc_log_tree(struct btrfs_trans_handle *trans,
if (btrfs_root_refs(&root->root_item) == 0)
synchronize_srcu(&fs_info->subvol_srcu);
- if (fs_info->fs_state & BTRFS_SUPER_FLAG_ERROR) {
+ if (test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state)) {
btrfs_free_log(NULL, root);
btrfs_free_log_root_tree(NULL, fs_info);
}
cache->bytes_super += stripe_len;
ret = add_excluded_extent(root, cache->key.objectid,
stripe_len);
- BUG_ON(ret); /* -ENOMEM */
+ if (ret)
+ return ret;
}
for (i = 0; i < BTRFS_SUPER_MIRROR_MAX; i++) {
ret = btrfs_rmap_block(&root->fs_info->mapping_tree,
cache->key.objectid, bytenr,
0, &logical, &nr, &stripe_len);
- BUG_ON(ret); /* -ENOMEM */
+ if (ret)
+ return ret;
while (nr--) {
cache->bytes_super += stripe_len;
ret = add_excluded_extent(root, logical[nr],
stripe_len);
- BUG_ON(ret); /* -ENOMEM */
+ if (ret) {
+ kfree(logical);
+ return ret;
+ }
}
kfree(logical);
spin_lock(&sinfo->lock);
spin_lock(&block_rsv->lock);
- block_rsv->size = num_bytes;
+ block_rsv->size = min_t(u64, num_bytes, 512 * 1024 * 1024);
num_bytes = sinfo->bytes_used + sinfo->bytes_pinned +
sinfo->bytes_reserved + sinfo->bytes_readonly +
* If the inodes csum_bytes is the same as the original
* csum_bytes then we know we haven't raced with any free()ers
* so we can just reduce our inodes csum bytes and carry on.
- * Otherwise we have to do the normal free thing to account for
- * the case that the free side didn't free up its reserve
- * because of this outstanding reservation.
*/
- if (BTRFS_I(inode)->csum_bytes == csum_bytes)
+ if (BTRFS_I(inode)->csum_bytes == csum_bytes) {
calc_csum_metadata_size(inode, num_bytes, 0);
- else
- to_free = calc_csum_metadata_size(inode, num_bytes, 0);
+ } else {
+ u64 orig_csum_bytes = BTRFS_I(inode)->csum_bytes;
+ u64 bytes;
+
+ /*
+ * This is tricky, but first we need to figure out how much
+ * was freed by any free-ers that occurred during this
+ * reservation, so we reset ->csum_bytes to the value it had
+ * before we dropped our lock, and then call the free for the
+ * number of bytes that were freed while we were trying our
+ * reservation.
+ */
+ bytes = csum_bytes - BTRFS_I(inode)->csum_bytes;
+ BTRFS_I(inode)->csum_bytes = csum_bytes;
+ to_free = calc_csum_metadata_size(inode, bytes, 0);
+
+ /*
+ * Now we need to see how much we would have freed had we not
+ * been making this reservation and our ->csum_bytes were not
+ * artificially inflated.
+ */
+ BTRFS_I(inode)->csum_bytes = csum_bytes - num_bytes;
+ bytes = csum_bytes - orig_csum_bytes;
+ bytes = calc_csum_metadata_size(inode, bytes, 0);
+
+ /*
+ * Now reset ->csum_bytes to what it should be. If bytes is
+ * more than to_free then we would have freed more space had we
+ * not had an artificially high ->csum_bytes, so we need to free
+ * the remainder. If bytes is the same or less then we don't
+ * need to do anything; the other free-ers did the correct
+ * thing.
+ */
+ BTRFS_I(inode)->csum_bytes = orig_csum_bytes - num_bytes;
+ if (bytes > to_free)
+ to_free = bytes - to_free;
+ else
+ to_free = 0;
+ }
spin_unlock(&BTRFS_I(inode)->lock);
if (dropped)
to_free += btrfs_calc_trans_metadata_size(root, dropped);
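
The three-step settlement above is easier to follow with concrete numbers. Below is a minimal standalone sketch of the same arithmetic; all values are made up, and calc() is a toy stand-in for calc_csum_metadata_size() that simply returns the delta it releases. Because the toy calc() is linear, the remainder comes out to zero; in the kernel the calculation is stepwise (csums are reserved per leaf item), which is where a nonzero remainder can come from.

	#include <stdio.h>

	typedef unsigned long long u64;

	/* toy stand-in: release 'bytes' worth of csums, report them freed */
	static u64 calc(u64 *csum_bytes, u64 bytes)
	{
		*csum_bytes -= bytes;
		return bytes;
	}

	int main(void)
	{
		u64 num_bytes = 4096;	/* our failed reservation */
		u64 csum_bytes = 16384;	/* snapshot before dropping the lock */
		u64 cur = 8192;		/* free-ers raced and shrank it */
		u64 orig = cur, bytes, to_free;

		/* 1) what the racers freed while we were reserving */
		bytes = csum_bytes - cur;
		cur = csum_bytes;
		to_free = calc(&cur, bytes);

		/* 2) what would have been freed without our inflation */
		cur = csum_bytes - num_bytes;
		bytes = csum_bytes - orig;
		bytes = calc(&cur, bytes);

		/* 3) settle: free only the remainder we still over-hold */
		cur = orig - num_bytes;
		to_free = bytes > to_free ? bytes - to_free : 0;

		printf("to_free=%llu csum_bytes=%llu\n", to_free, cur);
		return 0;
	}
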
* info has super bytes accounted for, otherwise we'll think
* we have more space than we actually do.
*/
- exclude_super_stripes(root, cache);
+ ret = exclude_super_stripes(root, cache);
+ if (ret) {
+ /*
+ * We may have excluded something, so call this just in
+ * case.
+ */
+ free_excluded_extents(root, cache);
+ kfree(cache->free_space_ctl);
+ kfree(cache);
+ goto error;
+ }
/*
* check for two cases, either we are full, and therefore
cache->last_byte_to_unpin = (u64)-1;
cache->cached = BTRFS_CACHE_FINISHED;
- exclude_super_stripes(root, cache);
+ ret = exclude_super_stripes(root, cache);
+ if (ret) {
+ /*
+ * We may have excluded something, so call this just in
+ * case.
+ */
+ free_excluded_extents(root, cache);
+ kfree(cache->free_space_ctl);
+ kfree(cache);
+ return ret;
+ }
add_new_free_space(cache, root->fs_info, chunk_offset,
chunk_offset + size);
GFP_NOFS);
}
+int extent_range_clear_dirty_for_io(struct inode *inode, u64 start, u64 end)
+{
+ unsigned long index = start >> PAGE_CACHE_SHIFT;
+ unsigned long end_index = end >> PAGE_CACHE_SHIFT;
+ struct page *page;
+
+ while (index <= end_index) {
+ page = find_get_page(inode->i_mapping, index);
+ BUG_ON(!page); /* Pages should be in the extent_io_tree */
+ clear_page_dirty_for_io(page);
+ page_cache_release(page);
+ index++;
+ }
+ return 0;
+}
+
+int extent_range_redirty_for_io(struct inode *inode, u64 start, u64 end)
+{
+ unsigned long index = start >> PAGE_CACHE_SHIFT;
+ unsigned long end_index = end >> PAGE_CACHE_SHIFT;
+ struct page *page;
+
+ while (index <= end_index) {
+ page = find_get_page(inode->i_mapping, index);
+ BUG_ON(!page); /* Pages should be in the extent_io_tree */
+ account_page_redirty(page);
+ __set_page_dirty_nobuffers(page);
+ page_cache_release(page);
+ index++;
+ }
+ return 0;
+}
+
/*
* helper function to set both pages and extents in the tree writeback
*/
unsigned long *map_len);
int extent_range_uptodate(struct extent_io_tree *tree,
u64 start, u64 end);
+int extent_range_clear_dirty_for_io(struct inode *inode, u64 start, u64 end);
+int extent_range_redirty_for_io(struct inode *inode, u64 start, u64 end);
int extent_clear_unlock_delalloc(struct inode *inode,
struct extent_io_tree *tree,
u64 start, u64 end, struct page *locked_page,
csums_in_item = btrfs_item_size_nr(leaf, path->slots[0]);
csums_in_item /= csum_size;
- if (csum_offset >= csums_in_item) {
+ if (csum_offset == csums_in_item) {
ret = -EFBIG;
goto fail;
+ } else if (csum_offset > csums_in_item) {
+ goto fail;
}
}
item = btrfs_item_ptr(leaf, path->slots[0], struct btrfs_csum_item);
return -ENOMEM;
sector_sum = sums->sums;
- trans->adding_csums = 1;
again:
next_offset = (u64)-1;
found_next = 0;
goto again;
}
out:
- trans->adding_csums = 0;
btrfs_free_path(path);
return ret;
{
struct inode *inode = file_inode(file);
struct extent_state *cached_state = NULL;
+ struct btrfs_root *root = BTRFS_I(inode)->root;
u64 cur_offset;
u64 last_byte;
u64 alloc_start;
ret = btrfs_check_data_free_space(inode, alloc_end - alloc_start);
if (ret)
return ret;
+ if (root->fs_info->quota_enabled) {
+ ret = btrfs_qgroup_reserve(root, alloc_end - alloc_start);
+ if (ret)
+ goto out_reserve_fail;
+ }
/*
* wait for ordered IO before we have any locks. We'll loop again
&cached_state, GFP_NOFS);
out:
mutex_unlock(&inode->i_mutex);
+ if (root->fs_info->quota_enabled)
+ btrfs_qgroup_free(root, alloc_end - alloc_start);
+out_reserve_fail:
/* Let go of our reservation. */
btrfs_free_reserved_data_space(inode, alloc_end - alloc_start);
return ret;
int i;
int will_compress;
int compress_type = root->fs_info->compress_type;
+ int redirty = 0;
/* if this is a small write inside eof, kick off a defrag */
if ((end - start + 1) < 16 * 1024 &&
if (BTRFS_I(inode)->force_compress)
compress_type = BTRFS_I(inode)->force_compress;
+ /*
+ * we need to call clear_page_dirty_for_io on each
+ * page in the range. Otherwise applications with the file
+ * mmap'd can wander in and change the page contents while
+ * we are compressing them.
+ *
+ * If the compression fails for any reason, we set the pages
+ * dirty again later on.
+ */
+ extent_range_clear_dirty_for_io(inode, start, end);
+ redirty = 1;
ret = btrfs_compress_pages(compress_type,
inode->i_mapping, start,
total_compressed, pages,
__set_page_dirty_nobuffers(locked_page);
/* unlocked later on in the async handlers */
}
+ if (redirty)
+ extent_range_redirty_for_io(inode, start, end);
add_async_extent(async_cow, start, end - start + 1,
0, NULL, 0, BTRFS_COMPRESS_NONE);
*num_added += 1;
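
Taken together, the hunks above bracket compression with a clear/redirty pair so mmap writers cannot modify pages mid-compression. As a condensed sketch of the resulting flow (do_compress() is hypothetical shorthand for the btrfs_compress_pages() call and its result checks; the async hand-off is elided):

	static noinline int compress_one_range(struct inode *inode,
					       u64 start, u64 end)
	{
		int ret;

		/* keep mmap writers from touching pages mid-compression */
		extent_range_clear_dirty_for_io(inode, start, end);

		ret = do_compress(inode, start, end);	/* hypothetical */
		if (ret)
			/* failed: let writeback see the pages again */
			extent_range_redirty_for_io(inode, start, end);
		return ret;
	}
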
struct btrfs_ordered_sum *sum;
list_for_each_entry(sum, list, list) {
+ trans->adding_csums = 1;
btrfs_csum_file_blocks(trans,
BTRFS_I(inode)->root->fs_info->csum_root, sum);
+ trans->adding_csums = 0;
}
return 0;
}
* 1 for the dir item
* 1 for the dir index
* 1 for the inode ref
- * 1 for the inode ref in the tree log
- * 2 for the dir entries in the log
* 1 for the inode
*/
- trans = btrfs_start_transaction(root, 8);
+ trans = btrfs_start_transaction(root, 5);
if (!IS_ERR(trans) || PTR_ERR(trans) != -ENOSPC)
return trans;
* inodes. So 5 * 2 is 10, plus 1 for the new link, so 11 total items
* should cover the worst case number of items we'll modify.
*/
- trans = btrfs_start_transaction(root, 20);
+ trans = btrfs_start_transaction(root, 11);
if (IS_ERR(trans)) {
ret = PTR_ERR(trans);
goto out_notrans;
INIT_LIST_HEAD(&splice);
INIT_LIST_HEAD(&works);
+ mutex_lock(&root->fs_info->ordered_operations_mutex);
spin_lock(&root->fs_info->ordered_extent_lock);
list_splice_init(&root->fs_info->ordered_extents, &splice);
while (!list_empty(&splice)) {
cond_resched();
}
+ mutex_unlock(&root->fs_info->ordered_operations_mutex);
}
/*
ret = btrfs_find_all_roots(trans, fs_info, node->bytenr,
sgn > 0 ? node->seq - 1 : node->seq, &roots);
if (ret < 0)
- goto out;
+ return ret;
spin_lock(&fs_info->qgroup_lock);
quota_root = fs_info->quota_root;
ret = 0;
unlock:
spin_unlock(&fs_info->qgroup_lock);
-out:
ulist_free(roots);
ulist_free(tmp);
eb = path->nodes[0];
ei = btrfs_item_ptr(eb, path->slots[0], struct btrfs_extent_item);
item_size = btrfs_item_size_nr(eb, path->slots[0]);
- btrfs_release_path(path);
if (flags & BTRFS_EXTENT_FLAG_TREE_BLOCK) {
do {
ret < 0 ? -1 : ref_level,
ret < 0 ? -1 : ref_root);
} while (ret != 1);
+ btrfs_release_path(path);
} else {
+ btrfs_release_path(path);
swarn.path = path;
swarn.dev = dev;
iterate_extent_inodes(fs_info, found_key.objectid,
found_key.type != key.type) {
key.offset += right_len;
break;
- } else {
- if (found_key.offset != key.offset + right_len) {
- /* Should really not happen */
- ret = -EIO;
- goto out;
- }
+ }
+ if (found_key.offset != key.offset + right_len) {
+ ret = 0;
+ goto out;
}
key = found_key;
}
em = lookup_extent_mapping(em_tree, chunk_start, 1);
read_unlock(&em_tree->lock);
- BUG_ON(!em || em->start != chunk_start);
+ if (!em) {
+ printk(KERN_ERR "btrfs: couldn't find em for chunk %Lu\n",
+ chunk_start);
+ return -EIO;
+ }
+
+ if (em->start != chunk_start) {
+ printk(KERN_ERR "btrfs: bad chunk start, em=%Lu, wanted=%Lu\n",
+ em->start, chunk_start);
+ free_extent_map(em);
+ return -EIO;
+ }
map = (struct map_lookup *)em->bdev;
length = em->len;
}
}
- /* mechlistMIC */
- if (asn1_header_decode(&ctx, &end, &cls, &con, &tag) == 0) {
- /* Check if we have reached the end of the blob, but with
- no mechListMic (e.g. NTLMSSP instead of KRB5) */
- if (ctx.error == ASN1_ERR_DEC_EMPTY)
- goto decode_negtoken_exit;
- cFYI(1, "Error decoding last part negTokenInit exit3");
- return 0;
- } else if ((cls != ASN1_CTX) || (con != ASN1_CON)) {
- /* tag = 3 indicating mechListMIC */
- cFYI(1, "Exit 4 cls = %d con = %d tag = %d end = %p (%d)",
- cls, con, tag, end, *end);
- return 0;
- }
-
- /* sequence */
- if (asn1_header_decode(&ctx, &end, &cls, &con, &tag) == 0) {
- cFYI(1, "Error decoding last part negTokenInit exit5");
- return 0;
- } else if ((cls != ASN1_UNI) || (con != ASN1_CON)
- || (tag != ASN1_SEQ)) {
- cFYI(1, "cls = %d con = %d tag = %d end = %p (%d)",
- cls, con, tag, end, *end);
- }
-
- /* sequence of */
- if (asn1_header_decode(&ctx, &end, &cls, &con, &tag) == 0) {
- cFYI(1, "Error decoding last part negTokenInit exit 7");
- return 0;
- } else if ((cls != ASN1_CTX) || (con != ASN1_CON)) {
- cFYI(1, "Exit 8 cls = %d con = %d tag = %d end = %p (%d)",
- cls, con, tag, end, *end);
- return 0;
- }
-
- /* general string */
- if (asn1_header_decode(&ctx, &end, &cls, &con, &tag) == 0) {
- cFYI(1, "Error decoding last part negTokenInit exit9");
- return 0;
- } else if ((cls != ASN1_UNI) || (con != ASN1_PRI)
- || (tag != ASN1_GENSTR)) {
- cFYI(1, "Exit10 cls = %d con = %d tag = %d end = %p (%d)",
- cls, con, tag, end, *end);
- return 0;
- }
- cFYI(1, "Need to call asn1_octets_decode() function for %s",
- ctx.pointer); /* is this UTF-8 or ASCII? */
-decode_negtoken_exit:
+ /*
+ * We currently ignore anything at the end of the SPNEGO blob after
+ * the mechTypes have been parsed, since none of that info is
+ * used at the moment.
+ */
return 1;
}
__u8 cifs_client_guid[SMB2_CLIENT_GUID_SIZE];
#endif
+/*
+ * Bumps refcount for cifs super block.
+ * Note that it should only be called if a reference to the VFS super block is
+ * already held, e.g. in open-type syscall context. Otherwise it can race with
+ * atomic_dec_and_test in deactivate_locked_super.
+ */
+void
+cifs_sb_active(struct super_block *sb)
+{
+ struct cifs_sb_info *server = CIFS_SB(sb);
+
+ if (atomic_inc_return(&server->active) == 1)
+ atomic_inc(&sb->s_active);
+}
+
+void
+cifs_sb_deactive(struct super_block *sb)
+{
+ struct cifs_sb_info *server = CIFS_SB(sb);
+
+ if (atomic_dec_and_test(&server->active))
+ deactivate_super(sb);
+}
+
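
A sketch of the intended pairing (example_hold() is hypothetical; the real users added by this patch are the cifsFileInfo init and release paths further below):

	void example_hold(struct super_block *sb)
	{
		cifs_sb_active(sb);	/* may bump sb->s_active */
		/* ... hold state that must outlive a racing umount ... */
		cifs_sb_deactive(sb);	/* may call deactivate_super() */
	}
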
static int
cifs_read_super(struct super_block *sb)
{
extern const struct address_space_operations cifs_addr_ops;
extern const struct address_space_operations cifs_addr_ops_smallbuf;
+/* Functions related to super block operations */
+extern void cifs_sb_active(struct super_block *sb);
+extern void cifs_sb_deactive(struct super_block *sb);
+
/* Functions related to inodes */
extern const struct inode_operations cifs_dir_inode_ops;
extern struct inode *cifs_root_iget(struct super_block *);
INIT_WORK(&cfile->oplock_break, cifs_oplock_break);
mutex_init(&cfile->fh_mutex);
+ cifs_sb_active(inode->i_sb);
+
/*
* If the server returned a read oplock and we have mandatory brlocks,
* set oplock level to None.
struct cifs_tcon *tcon = tlink_tcon(cifs_file->tlink);
struct TCP_Server_Info *server = tcon->ses->server;
struct cifsInodeInfo *cifsi = CIFS_I(inode);
- struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);
+ struct super_block *sb = inode->i_sb;
+ struct cifs_sb_info *cifs_sb = CIFS_SB(sb);
struct cifsLockInfo *li, *tmp;
struct cifs_fid fid;
struct cifs_pending_open open;
cifs_put_tlink(cifs_file->tlink);
dput(cifs_file->dentry);
+ cifs_sb_deactive(sb);
kfree(cifs_file);
}
cifs_sb->mnt_cifs_flags &
CIFS_MOUNT_MAP_SPECIAL_CHR);
if (rc != 0) {
- rc = -ETXTBSY;
+ rc = -EBUSY;
goto undo_setattr;
}
if (rc == -ENOENT)
rc = 0;
else if (rc != 0) {
- rc = -ETXTBSY;
+ rc = -EBUSY;
goto undo_rename;
}
cifsInode->delete_pending = true;
cifs_drop_nlink(inode);
} else if (rc == -ENOENT) {
d_drop(dentry);
- } else if (rc == -ETXTBSY) {
+ } else if (rc == -EBUSY) {
if (server->ops->rename_pending_delete) {
rc = server->ops->rename_pending_delete(full_path,
dentry, xid);
if (rc == 0)
cifs_drop_nlink(inode);
}
- if (rc == -ETXTBSY)
- rc = -EBUSY;
} else if ((rc == -EACCES) && (dosattr == 0) && inode) {
attrs = kzalloc(sizeof(*attrs), GFP_KERNEL);
if (attrs == NULL) {
* source. Note that cross directory moves do not work with
* rename by filehandle to various Windows servers.
*/
- if (rc == 0 || rc != -ETXTBSY)
+ if (rc == 0 || rc != -EBUSY)
goto do_rename_exit;
/* open-file renames don't work across directories */
{ERRdiffdevice, -EXDEV},
{ERRnofiles, -ENOENT},
{ERRwriteprot, -EROFS},
- {ERRbadshare, -ETXTBSY},
+ {ERRbadshare, -EBUSY},
{ERRlock, -EACCES},
{ERRunsup, -EINVAL},
{ERRnosuchshare, -ENXIO},
bool slash = false;
int error = 0;
- br_read_lock(&vfsmount_lock);
while (dentry != root->dentry || vfsmnt != root->mnt) {
struct dentry * parent;
if (!error && !slash)
error = prepend(buffer, buflen, "/", 1);
-out:
- br_read_unlock(&vfsmount_lock);
return error;
global_root:
error = prepend(buffer, buflen, "/", 1);
if (!error)
error = is_mounted(vfsmnt) ? 1 : 2;
- goto out;
+ return error;
}
/**
int error;
prepend(&res, &buflen, "\0", 1);
+ br_read_lock(&vfsmount_lock);
write_seqlock(&rename_lock);
error = prepend_path(path, root, &res, &buflen);
write_sequnlock(&rename_lock);
+ br_read_unlock(&vfsmount_lock);
if (error < 0)
return ERR_PTR(error);
int error;
prepend(&res, &buflen, "\0", 1);
+ br_read_lock(&vfsmount_lock);
write_seqlock(&rename_lock);
error = prepend_path(path, &root, &res, &buflen);
write_sequnlock(&rename_lock);
+ br_read_unlock(&vfsmount_lock);
if (error > 1)
error = -EINVAL;
return path->dentry->d_op->d_dname(path->dentry, buf, buflen);
get_fs_root(current->fs, &root);
+ br_read_lock(&vfsmount_lock);
write_seqlock(&rename_lock);
error = path_with_deleted(path, &root, &res, &buflen);
+ write_sequnlock(&rename_lock);
+ br_read_unlock(&vfsmount_lock);
if (error < 0)
res = ERR_PTR(error);
- write_sequnlock(&rename_lock);
path_put(&root);
return res;
}
get_fs_root_and_pwd(current->fs, &root, &pwd);
error = -ENOENT;
+ br_read_lock(&vfsmount_lock);
write_seqlock(&rename_lock);
if (!d_unlinked(pwd.dentry)) {
unsigned long len;
prepend(&cwd, &buflen, "\0", 1);
error = prepend_path(&pwd, &root, &cwd, &buflen);
write_sequnlock(&rename_lock);
+ br_read_unlock(&vfsmount_lock);
if (error < 0)
goto out;
}
} else {
write_sequnlock(&rename_lock);
+ br_read_unlock(&vfsmount_lock);
}
out:
*/
struct flex_groups {
- atomic_t free_inodes;
- atomic_t free_clusters;
- atomic_t used_dirs;
+ atomic64_t free_clusters;
+ atomic_t free_inodes;
+ atomic_t used_dirs;
};
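
free_clusters is widened to atomic64_t because a 32-bit counter can overflow once a flex group covers enough clusters. A quick standalone illustration of the truncation (toy numbers, assuming 4 KiB clusters):

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		/* ~20 TiB of free space in 4 KiB clusters: > 2^32 */
		uint64_t free_clusters = (20ULL << 40) >> 12;
		printf("64-bit count: %llu\n",
		       (unsigned long long)free_clusters);
		printf("32-bit truncation: %u\n", (uint32_t)free_clusters);
		return 0;
	}
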
#define EXT4_BG_INODE_UNINIT 0x0001 /* Inode table/bitmap not in use */
extern int __init ext4_init_pageio(void);
extern void ext4_add_complete_io(ext4_io_end_t *io_end);
extern void ext4_exit_pageio(void);
-extern void ext4_ioend_wait(struct inode *);
+extern void ext4_ioend_shutdown(struct inode *);
extern void ext4_free_io_end(ext4_io_end_t *io);
extern ext4_io_end_t *ext4_init_io_end(struct inode *inode, gfp_t flags);
extern void ext4_end_io_work(struct work_struct *work);
unsigned short ext1_ee_len, ext2_ee_len, max_len;
/*
- * Make sure that either both extents are uninitialized, or
- * both are _not_.
+ * Make sure that both extents are initialized. We don't merge
+ * uninitialized extents so that we can be sure that end_io code has
+ * the extent that was written properly split out and conversion to
+ * initialized is trivial.
*/
- if (ext4_ext_is_uninitialized(ex1) ^ ext4_ext_is_uninitialized(ex2))
+ if (ext4_ext_is_uninitialized(ex1) || ext4_ext_is_uninitialized(ex2))
return 0;
if (ext4_ext_is_uninitialized(ex1))
{
ext4_fsblk_t newblock;
ext4_lblk_t ee_block;
- struct ext4_extent *ex, newex, orig_ex;
+ struct ext4_extent *ex, newex, orig_ex, zero_ex;
struct ext4_extent *ex2 = NULL;
unsigned int ee_len, depth;
int err = 0;
newblock = split - ee_block + ext4_ext_pblock(ex);
BUG_ON(split < ee_block || split >= (ee_block + ee_len));
+ BUG_ON(!ext4_ext_is_uninitialized(ex) &&
+ split_flag & (EXT4_EXT_MAY_ZEROOUT |
+ EXT4_EXT_MARK_UNINIT1 |
+ EXT4_EXT_MARK_UNINIT2));
err = ext4_ext_get_access(handle, inode, path + depth);
if (err)
err = ext4_ext_insert_extent(handle, inode, path, &newex, flags);
if (err == -ENOSPC && (EXT4_EXT_MAY_ZEROOUT & split_flag)) {
if (split_flag & (EXT4_EXT_DATA_VALID1|EXT4_EXT_DATA_VALID2)) {
- if (split_flag & EXT4_EXT_DATA_VALID1)
+ if (split_flag & EXT4_EXT_DATA_VALID1) {
err = ext4_ext_zeroout(inode, ex2);
- else
+ zero_ex.ee_block = ex2->ee_block;
+ zero_ex.ee_len = ext4_ext_get_actual_len(ex2);
+ ext4_ext_store_pblock(&zero_ex,
+ ext4_ext_pblock(ex2));
+ } else {
err = ext4_ext_zeroout(inode, ex);
- } else
+ zero_ex.ee_block = ex->ee_block;
+ zero_ex.ee_len = ext4_ext_get_actual_len(ex);
+ ext4_ext_store_pblock(&zero_ex,
+ ext4_ext_pblock(ex));
+ }
+ } else {
err = ext4_ext_zeroout(inode, &orig_ex);
+ zero_ex.ee_block = orig_ex.ee_block;
+ zero_ex.ee_len = ext4_ext_get_actual_len(&orig_ex);
+ ext4_ext_store_pblock(&zero_ex,
+ ext4_ext_pblock(&orig_ex));
+ }
if (err)
goto fix_extent_len;
ex->ee_len = cpu_to_le16(ee_len);
ext4_ext_try_to_merge(handle, inode, path, ex);
err = ext4_ext_dirty(handle, inode, path + path->p_depth);
+ if (err)
+ goto fix_extent_len;
+
+ /* update extent status tree */
+ err = ext4_es_zeroout(inode, &zero_ex);
+
goto out;
} else if (err)
goto fix_extent_len;
int err = 0;
int uninitialized;
int split_flag1, flags1;
+ int allocated = map->m_len;
depth = ext_depth(inode);
ex = path[depth].p_ext;
map->m_lblk + map->m_len, split_flag1, flags1);
if (err)
goto out;
+ } else {
+ allocated = ee_len - (map->m_lblk - ee_block);
}
-
+ /*
+ * An updated path is required because the previous ext4_split_extent_at()
+ * may result in a split of the original leaf or an extent zeroout.
+ */
ext4_ext_drop_refs(path);
path = ext4_ext_find_extent(inode, map->m_lblk, path);
if (IS_ERR(path))
return PTR_ERR(path);
+ depth = ext_depth(inode);
+ ex = path[depth].p_ext;
+ uninitialized = ext4_ext_is_uninitialized(ex);
+ split_flag1 = 0;
if (map->m_lblk >= ee_block) {
- split_flag1 = split_flag & (EXT4_EXT_MAY_ZEROOUT |
- EXT4_EXT_DATA_VALID2);
- if (uninitialized)
+ split_flag1 = split_flag & EXT4_EXT_DATA_VALID2;
+ if (uninitialized) {
split_flag1 |= EXT4_EXT_MARK_UNINIT1;
- if (split_flag & EXT4_EXT_MARK_UNINIT2)
- split_flag1 |= EXT4_EXT_MARK_UNINIT2;
+ split_flag1 |= split_flag & (EXT4_EXT_MAY_ZEROOUT |
+ EXT4_EXT_MARK_UNINIT2);
+ }
err = ext4_split_extent_at(handle, inode, path,
map->m_lblk, split_flag1, flags);
if (err)
ext4_ext_show_leaf(inode, path);
out:
- return err ? err : map->m_len;
+ return err ? err : allocated;
}
/*
ee_block = le32_to_cpu(ex->ee_block);
ee_len = ext4_ext_get_actual_len(ex);
allocated = ee_len - (map->m_lblk - ee_block);
+ zero_ex.ee_len = 0;
trace_ext4_ext_convert_to_initialized_enter(inode, map, ex);
if (EXT4_EXT_MAY_ZEROOUT & split_flag)
max_zeroout = sbi->s_extent_max_zeroout_kb >>
- inode->i_sb->s_blocksize_bits;
+ (inode->i_sb->s_blocksize_bits - 10);
/* If extent is less than s_max_zeroout_kb, zeroout directly */
if (max_zeroout && (ee_len <= max_zeroout)) {
err = ext4_ext_zeroout(inode, ex);
if (err)
goto out;
+ zero_ex.ee_block = ex->ee_block;
+ zero_ex.ee_len = ext4_ext_get_actual_len(ex);
+ ext4_ext_store_pblock(&zero_ex, ext4_ext_pblock(ex));
err = ext4_ext_get_access(handle, inode, path + depth);
if (err)
err = allocated;
out:
+ /* If we have gotten a failure, don't zero out the status tree */
+ if (!err)
+ err = ext4_es_zeroout(inode, &zero_ex);
return err ? err : allocated;
}
"block %llu, max_blocks %u\n", inode->i_ino,
(unsigned long long)ee_block, ee_len);
- /* If extent is larger than requested then split is required */
+ /* If the extent is larger than requested, it is a clear sign that we
+ * still have some extent state machine issues left. So extent_split is
+ * still required.
+ * TODO: Once all related issues are fixed, this situation should be
+ * illegal.
+ */
if (ee_block != map->m_lblk || ee_len > map->m_len) {
+#ifdef EXT4_DEBUG
+ ext4_warning("Inode (%ld) finished: extent logical block %llu,"
+ " len %u; IO logical block %llu, len %u\n",
+ inode->i_ino, (unsigned long long)ee_block, ee_len,
+ (unsigned long long)map->m_lblk, map->m_len);
+#endif
err = ext4_split_unwritten_extents(handle, inode, map, path,
EXT4_GET_BLOCKS_CONVERT);
if (err < 0)
path, map->m_len);
} else
err = ret;
+ map->m_flags |= EXT4_MAP_MAPPED;
+ if (allocated > map->m_len)
+ allocated = map->m_len;
+ map->m_len = allocated;
goto out2;
}
/* buffered IO case */
allocated - map->m_len);
allocated = map->m_len;
}
+ map->m_len = allocated;
/*
* If we have done fallocate with the offset that is already
}
} else {
BUG_ON(allocated_clusters < reserved_clusters);
- /* We will claim quota for all newly allocated blocks.*/
- ext4_da_update_reserve_space(inode, allocated_clusters,
- 1);
if (reserved_clusters < allocated_clusters) {
struct ext4_inode_info *ei = EXT4_I(inode);
int reservation = allocated_clusters -
ei->i_reserved_data_blocks += reservation;
spin_unlock(&ei->i_block_reservation_lock);
}
+ /*
+ * We will claim quota for all newly allocated blocks.
+ * We're updating the reserved space *after* the
+ * correction above so we do not accidentally free
+ * all the metadata reservation because we might
+ * actually need it later on.
+ */
+ ext4_da_update_reserve_space(inode, allocated_clusters,
+ 1);
}
}
if (len <= EXT_UNINIT_MAX_LEN << blkbits)
flags |= EXT4_GET_BLOCKS_NO_NORMALIZE;
- /* Prevent race condition between unwritten */
- ext4_flush_unwritten_io(inode);
retry:
while (ret >= 0 && ret < max_blocks) {
map.m_lblk = map.m_lblk + ret;
static int ext4_es_can_be_merged(struct extent_status *es1,
struct extent_status *es2)
{
- if (es1->es_lblk + es1->es_len != es2->es_lblk)
+ if (ext4_es_status(es1) != ext4_es_status(es2))
return 0;
- if (ext4_es_status(es1) != ext4_es_status(es2))
+ if (((__u64) es1->es_len) + es2->es_len > 0xFFFFFFFFULL)
return 0;
- if ((ext4_es_is_written(es1) || ext4_es_is_unwritten(es1)) &&
- (ext4_es_pblock(es1) + es1->es_len != ext4_es_pblock(es2)))
+ if (((__u64) es1->es_lblk) + es1->es_len != es2->es_lblk)
return 0;
- return 1;
+ if ((ext4_es_is_written(es1) || ext4_es_is_unwritten(es1)) &&
+ (ext4_es_pblock(es1) + es1->es_len == ext4_es_pblock(es2)))
+ return 1;
+
+ if (ext4_es_is_hole(es1))
+ return 1;
+
+ /* we need to check that a delayed extent has no unwritten status */
+ if (ext4_es_is_delayed(es1) && !ext4_es_is_unwritten(es1))
+ return 1;
+
+ return 0;
}
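
The reworked predicate admits merges for holes and for delayed extents that carry no unwritten status, while written/unwritten extents must also be physically contiguous. A standalone sketch of the same rules (toy types; the kernel keeps the status as flag bits in es_pblk, and a delayed extent can additionally carry the unwritten bit, which the enum below cannot express):

	#include <stdio.h>
	#include <stdint.h>

	enum status { WRITTEN, UNWRITTEN, HOLE, DELAYED };

	struct es { uint32_t lblk, len; uint64_t pblk; enum status st; };

	static int can_merge(const struct es *a, const struct es *b)
	{
		if (a->st != b->st)
			return 0;
		if ((uint64_t)a->len + b->len > 0xFFFFFFFFULL)
			return 0;	/* merged length must fit es_len */
		if ((uint64_t)a->lblk + a->len != b->lblk)
			return 0;	/* must be logically contiguous */
		if ((a->st == WRITTEN || a->st == UNWRITTEN) &&
		    a->pblk + a->len == b->pblk)
			return 1;	/* and physically contiguous */
		if (a->st == HOLE || a->st == DELAYED)
			return 1;
		return 0;
	}

	int main(void)
	{
		struct es a = { 0, 8, 100, WRITTEN };
		struct es b = { 8, 8, 108, WRITTEN };	/* merges */
		struct es c = { 8, 8, 200, WRITTEN };	/* phys. gap */
		printf("%d %d\n", can_merge(&a, &b), can_merge(&a, &c));
		return 0;
	}
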
static struct extent_status *
return es;
}
+#ifdef ES_AGGRESSIVE_TEST
+static void ext4_es_insert_extent_ext_check(struct inode *inode,
+ struct extent_status *es)
+{
+ struct ext4_ext_path *path = NULL;
+ struct ext4_extent *ex;
+ ext4_lblk_t ee_block;
+ ext4_fsblk_t ee_start;
+ unsigned short ee_len;
+ int depth, ee_status, es_status;
+
+ path = ext4_ext_find_extent(inode, es->es_lblk, NULL);
+ if (IS_ERR(path))
+ return;
+
+ depth = ext_depth(inode);
+ ex = path[depth].p_ext;
+
+ if (ex) {
+ ee_block = le32_to_cpu(ex->ee_block);
+ ee_start = ext4_ext_pblock(ex);
+ ee_len = ext4_ext_get_actual_len(ex);
+
+ ee_status = ext4_ext_is_uninitialized(ex) ? 1 : 0;
+ es_status = ext4_es_is_unwritten(es) ? 1 : 0;
+
+ /*
+ * Make sure ex and es do not overlap when we try to insert
+ * a delayed/hole extent.
+ */
+ if (!ext4_es_is_written(es) && !ext4_es_is_unwritten(es)) {
+ if (in_range(es->es_lblk, ee_block, ee_len)) {
+ pr_warn("ES insert assertation failed for "
+ "inode: %lu we can find an extent "
+ "at block [%d/%d/%llu/%c], but we "
+ "want to add an delayed/hole extent "
+ "[%d/%d/%llu/%llx]\n",
+ inode->i_ino, ee_block, ee_len,
+ ee_start, ee_status ? 'u' : 'w',
+ es->es_lblk, es->es_len,
+ ext4_es_pblock(es), ext4_es_status(es));
+ }
+ goto out;
+ }
+
+ /*
+ * We don't check ee_block == es->es_lblk, etc. because es
+ * might be a part of a whole extent, and vice versa.
+ */
+ if (es->es_lblk < ee_block ||
+ ext4_es_pblock(es) != ee_start + es->es_lblk - ee_block) {
+ pr_warn("ES insert assertation failed for inode: %lu "
+ "ex_status [%d/%d/%llu/%c] != "
+ "es_status [%d/%d/%llu/%c]\n", inode->i_ino,
+ ee_block, ee_len, ee_start,
+ ee_status ? 'u' : 'w', es->es_lblk, es->es_len,
+ ext4_es_pblock(es), es_status ? 'u' : 'w');
+ goto out;
+ }
+
+ if (ee_status ^ es_status) {
+ pr_warn("ES insert assertation failed for inode: %lu "
+ "ex_status [%d/%d/%llu/%c] != "
+ "es_status [%d/%d/%llu/%c]\n", inode->i_ino,
+ ee_block, ee_len, ee_start,
+ ee_status ? 'u' : 'w', es->es_lblk, es->es_len,
+ ext4_es_pblock(es), es_status ? 'u' : 'w');
+ }
+ } else {
+ /*
+ * We can't find an extent on disk. So we need to make sure
+ * that we aren't trying to add a written/unwritten extent.
+ */
+ if (!ext4_es_is_delayed(es) && !ext4_es_is_hole(es)) {
+ pr_warn("ES insert assertation failed for inode: %lu "
+ "can't find an extent at block %d but we want "
+ "to add an written/unwritten extent "
+ "[%d/%d/%llu/%llx]\n", inode->i_ino,
+ es->es_lblk, es->es_lblk, es->es_len,
+ ext4_es_pblock(es), ext4_es_status(es));
+ }
+ }
+out:
+ if (path) {
+ ext4_ext_drop_refs(path);
+ kfree(path);
+ }
+}
+
+static void ext4_es_insert_extent_ind_check(struct inode *inode,
+ struct extent_status *es)
+{
+ struct ext4_map_blocks map;
+ int retval;
+
+ /*
+ * Here we call ext4_ind_map_blocks to look up a block mapping because
+ * the 'Indirect' structure is defined in indirect.c, so we cannot
+ * access the direct/indirect tree from outside it. It would be too
+ * ugly to define this function in indirect.c itself.
+ */
+
+ map.m_lblk = es->es_lblk;
+ map.m_len = es->es_len;
+
+ retval = ext4_ind_map_blocks(NULL, inode, &map, 0);
+ if (retval > 0) {
+ if (ext4_es_is_delayed(es) || ext4_es_is_hole(es)) {
+ /*
+ * We want to add a delayed/hole extent but this
+ * block has been allocated.
+ */
+ pr_warn("ES insert assertation failed for inode: %lu "
+ "We can find blocks but we want to add a "
+ "delayed/hole extent [%d/%d/%llu/%llx]\n",
+ inode->i_ino, es->es_lblk, es->es_len,
+ ext4_es_pblock(es), ext4_es_status(es));
+ return;
+ } else if (ext4_es_is_written(es)) {
+ if (retval != es->es_len) {
+ pr_warn("ES insert assertation failed for "
+ "inode: %lu retval %d != es_len %d\n",
+ inode->i_ino, retval, es->es_len);
+ return;
+ }
+ if (map.m_pblk != ext4_es_pblock(es)) {
+ pr_warn("ES insert assertation failed for "
+ "inode: %lu m_pblk %llu != "
+ "es_pblk %llu\n",
+ inode->i_ino, map.m_pblk,
+ ext4_es_pblock(es));
+ return;
+ }
+ } else {
+ /*
+ * We don't need to check for unwritten extents because
+ * indirect-based files don't have them.
+ */
+ BUG_ON(1);
+ }
+ } else if (retval == 0) {
+ if (ext4_es_is_written(es)) {
+ pr_warn("ES insert assertation failed for inode: %lu "
+ "We can't find the block but we want to add "
+ "an written extent [%d/%d/%llu/%llx]\n",
+ inode->i_ino, es->es_lblk, es->es_len,
+ ext4_es_pblock(es), ext4_es_status(es));
+ return;
+ }
+ }
+}
+
+static inline void ext4_es_insert_extent_check(struct inode *inode,
+ struct extent_status *es)
+{
+ /*
+ * We don't need to worry about the race condition because
+ * the caller holds i_data_sem.
+ */
+ BUG_ON(!rwsem_is_locked(&EXT4_I(inode)->i_data_sem));
+ if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))
+ ext4_es_insert_extent_ext_check(inode, es);
+ else
+ ext4_es_insert_extent_ind_check(inode, es);
+}
+#else
+static inline void ext4_es_insert_extent_check(struct inode *inode,
+ struct extent_status *es)
+{
+}
+#endif
+
static int __es_insert_extent(struct inode *inode, struct extent_status *newes)
{
struct ext4_es_tree *tree = &EXT4_I(inode)->i_es_tree;
ext4_es_store_status(&newes, status);
trace_ext4_es_insert_extent(inode, &newes);
+ ext4_es_insert_extent_check(inode, &newes);
+
write_lock(&EXT4_I(inode)->i_es_lock);
err = __es_remove_extent(inode, lblk, end);
if (err != 0)
return err;
}
+int ext4_es_zeroout(struct inode *inode, struct ext4_extent *ex)
+{
+ ext4_lblk_t ee_block;
+ ext4_fsblk_t ee_pblock;
+ unsigned int ee_len;
+
+ ee_block = le32_to_cpu(ex->ee_block);
+ ee_len = ext4_ext_get_actual_len(ex);
+ ee_pblock = ext4_ext_pblock(ex);
+
+ if (ee_len == 0)
+ return 0;
+
+ return ext4_es_insert_extent(inode, ee_block, ee_len, ee_pblock,
+ EXTENT_STATUS_WRITTEN);
+}
+
static int ext4_es_shrink(struct shrinker *shrink, struct shrink_control *sc)
{
struct ext4_sb_info *sbi = container_of(shrink,
#define es_debug(fmt, ...) no_printk(fmt, ##__VA_ARGS__)
#endif
+/*
+ * With ES_AGGRESSIVE_TEST defined, the result of es caching will be
+ * checked against the old map_blocks result.
+ */
+#define ES_AGGRESSIVE_TEST__
+
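
As shipped, the trailing underscores keep the checks compiled out: the #ifdef guards test ES_AGGRESSIVE_TEST, and ES_AGGRESSIVE_TEST__ is a different name. To enable the self-checks, the define has to match the guard:

	#define ES_AGGRESSIVE_TEST	/* turns on the ES self-checks */
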
/*
* These flags live in the high bits of extent_status.es_pblk
*/
EXTENT_STATUS_DELAYED | \
EXTENT_STATUS_HOLE)
+struct ext4_extent;
+
struct extent_status {
struct rb_node rb_node;
ext4_lblk_t es_lblk; /* first logical block extent covers */
struct extent_status *es);
extern int ext4_es_lookup_extent(struct inode *inode, ext4_lblk_t lblk,
struct extent_status *es);
+extern int ext4_es_zeroout(struct inode *inode, struct ext4_extent *ex);
static inline int ext4_es_is_written(struct extent_status *es)
{
}
struct orlov_stats {
+ __u64 free_clusters;
__u32 free_inodes;
- __u32 free_clusters;
__u32 used_dirs;
};
if (flex_size > 1) {
stats->free_inodes = atomic_read(&flex_group[g].free_inodes);
- stats->free_clusters = atomic_read(&flex_group[g].free_clusters);
+ stats->free_clusters = atomic64_read(&flex_group[g].free_clusters);
stats->used_dirs = atomic_read(&flex_group[g].used_dirs);
return;
}
trace_ext4_evict_inode(inode);
- ext4_ioend_wait(inode);
-
if (inode->i_nlink) {
/*
* When journalling data dirty buffers are tracked only in the
* don't use page cache.
*/
if (ext4_should_journal_data(inode) &&
- (S_ISLNK(inode->i_mode) || S_ISREG(inode->i_mode))) {
+ (S_ISLNK(inode->i_mode) || S_ISREG(inode->i_mode)) &&
+ inode->i_ino != EXT4_JOURNAL_INO) {
journal_t *journal = EXT4_SB(inode->i_sb)->s_journal;
tid_t commit_tid = EXT4_I(inode)->i_datasync_tid;
filemap_write_and_wait(&inode->i_data);
}
truncate_inode_pages(&inode->i_data, 0);
+ ext4_ioend_shutdown(inode);
goto no_delete;
}
if (ext4_should_order_data(inode))
ext4_begin_ordered_truncate(inode, 0);
truncate_inode_pages(&inode->i_data, 0);
+ ext4_ioend_shutdown(inode);
if (is_bad_inode(inode))
goto no_delete;
return num;
}
+#ifdef ES_AGGRESSIVE_TEST
+static void ext4_map_blocks_es_recheck(handle_t *handle,
+ struct inode *inode,
+ struct ext4_map_blocks *es_map,
+ struct ext4_map_blocks *map,
+ int flags)
+{
+ int retval;
+
+ map->m_flags = 0;
+ /*
+ * There is a race window in which the result is not the same,
+ * e.g. xfstests #223 when dioread_nolock is enabled. The reason
+ * is that we look up a block mapping in the extent status tree
+ * without taking i_data_sem, so in the meantime the unwritten
+ * extent could be converted.
+ */
+ if (!(flags & EXT4_GET_BLOCKS_NO_LOCK))
+ down_read((&EXT4_I(inode)->i_data_sem));
+ if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)) {
+ retval = ext4_ext_map_blocks(handle, inode, map, flags &
+ EXT4_GET_BLOCKS_KEEP_SIZE);
+ } else {
+ retval = ext4_ind_map_blocks(handle, inode, map, flags &
+ EXT4_GET_BLOCKS_KEEP_SIZE);
+ }
+ if (!(flags & EXT4_GET_BLOCKS_NO_LOCK))
+ up_read((&EXT4_I(inode)->i_data_sem));
+ /*
+ * Clear the EXT4_MAP_FROM_CLUSTER and EXT4_MAP_BOUNDARY flags
+ * because they shouldn't be marked in es_map->m_flags.
+ */
+ map->m_flags &= ~(EXT4_MAP_FROM_CLUSTER | EXT4_MAP_BOUNDARY);
+
+ /*
+ * We don't check m_len because the extent will be collapsed in the
+ * status tree, so the m_len values might not be equal.
+ */
+ if (es_map->m_lblk != map->m_lblk ||
+ es_map->m_flags != map->m_flags ||
+ es_map->m_pblk != map->m_pblk) {
+ printk("ES cache assertation failed for inode: %lu "
+ "es_cached ex [%d/%d/%llu/%x] != "
+ "found ex [%d/%d/%llu/%x] retval %d flags %x\n",
+ inode->i_ino, es_map->m_lblk, es_map->m_len,
+ es_map->m_pblk, es_map->m_flags, map->m_lblk,
+ map->m_len, map->m_pblk, map->m_flags,
+ retval, flags);
+ }
+}
+#endif /* ES_AGGRESSIVE_TEST */
+
/*
* The ext4_map_blocks() function tries to look up the requested blocks,
* and returns if the blocks are already mapped.
{
struct extent_status es;
int retval;
+#ifdef ES_AGGRESSIVE_TEST
+ struct ext4_map_blocks orig_map;
+
+ memcpy(&orig_map, map, sizeof(*map));
+#endif
map->m_flags = 0;
ext_debug("ext4_map_blocks(): inode %lu, flag %d, max_blocks %u,"
} else {
BUG_ON(1);
}
+#ifdef ES_AGGRESSIVE_TEST
+ ext4_map_blocks_es_recheck(handle, inode, map,
+ &orig_map, flags);
+#endif
goto found;
}
int ret;
unsigned long long status;
+#ifdef ES_AGGRESSIVE_TEST
+ if (retval != map->m_len) {
+ printk("ES len assertation failed for inode: %lu "
+ "retval %d != map->m_len %d "
+ "in %s (lookup)\n", inode->i_ino, retval,
+ map->m_len, __func__);
+ }
+#endif
+
status = map->m_flags & EXT4_MAP_UNWRITTEN ?
EXTENT_STATUS_UNWRITTEN : EXTENT_STATUS_WRITTEN;
if (!(flags & EXT4_GET_BLOCKS_DELALLOC_RESERVE) &&
int ret;
unsigned long long status;
+#ifdef ES_AGGRESSIVE_TEST
+ if (retval != map->m_len) {
+ printk("ES len assertation failed for inode: %lu "
+ "retval %d != map->m_len %d "
+ "in %s (allocation)\n", inode->i_ino, retval,
+ map->m_len, __func__);
+ }
+#endif
+
+ /*
+ * If the extent has been zeroed out, we don't need to update
+ * the extent status tree.
+ */
+ if ((flags & EXT4_GET_BLOCKS_PRE_IO) &&
+ ext4_es_lookup_extent(inode, map->m_lblk, &es)) {
+ if (ext4_es_is_written(&es))
+ goto has_zeroout;
+ }
status = map->m_flags & EXT4_MAP_UNWRITTEN ?
EXTENT_STATUS_UNWRITTEN : EXTENT_STATUS_WRITTEN;
if (!(flags & EXT4_GET_BLOCKS_DELALLOC_RESERVE) &&
retval = ret;
}
+has_zeroout:
up_write((&EXT4_I(inode)->i_data_sem));
if (retval > 0 && map->m_flags & EXT4_MAP_MAPPED) {
int ret = check_block_validity(inode, map);
return ret ? ret : copied;
}
+/*
+ * Reserve metadata for a single block located at lblock
+ */
+static int ext4_da_reserve_metadata(struct inode *inode, ext4_lblk_t lblock)
+{
+ int retries = 0;
+ struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
+ struct ext4_inode_info *ei = EXT4_I(inode);
+ unsigned int md_needed;
+ ext4_lblk_t save_last_lblock;
+ int save_len;
+
+ /*
+ * recalculate the number of metadata blocks to reserve
+ * in order to allocate nrblocks;
+ * worst case is one extent per block
+ */
+repeat:
+ spin_lock(&ei->i_block_reservation_lock);
+ /*
+ * ext4_calc_metadata_amount() has side effects, which we have
+ * to be prepared to undo if we fail to claim space.
+ */
+ save_len = ei->i_da_metadata_calc_len;
+ save_last_lblock = ei->i_da_metadata_calc_last_lblock;
+ md_needed = EXT4_NUM_B2C(sbi,
+ ext4_calc_metadata_amount(inode, lblock));
+ trace_ext4_da_reserve_space(inode, md_needed);
+
+ /*
+ * We do still charge estimated metadata to the sb though;
+ * we cannot afford to run out of free blocks.
+ */
+ if (ext4_claim_free_clusters(sbi, md_needed, 0)) {
+ ei->i_da_metadata_calc_len = save_len;
+ ei->i_da_metadata_calc_last_lblock = save_last_lblock;
+ spin_unlock(&ei->i_block_reservation_lock);
+ if (ext4_should_retry_alloc(inode->i_sb, &retries)) {
+ cond_resched();
+ goto repeat;
+ }
+ return -ENOSPC;
+ }
+ ei->i_reserved_meta_blocks += md_needed;
+ spin_unlock(&ei->i_block_reservation_lock);
+
+ return 0; /* success */
+}
+
/*
* Reserve a single cluster located at lblock
*/
ei->i_da_metadata_calc_last_lblock = save_last_lblock;
spin_unlock(&ei->i_block_reservation_lock);
if (ext4_should_retry_alloc(inode->i_sb, &retries)) {
- yield();
+ cond_resched();
goto repeat;
}
dquot_release_reservation_block(inode, EXT4_C2B(sbi, 1));
struct extent_status es;
int retval;
sector_t invalid_block = ~((sector_t) 0xffff);
+#ifdef ES_AGGRESSIVE_TEST
+ struct ext4_map_blocks orig_map;
+
+ memcpy(&orig_map, map, sizeof(*map));
+#endif
if (invalid_block < ext4_blocks_count(EXT4_SB(inode->i_sb)->s_es))
invalid_block = ~0;
else
BUG_ON(1);
+#ifdef ES_AGGRESSIVE_TEST
+ ext4_map_blocks_es_recheck(NULL, inode, map, &orig_map, 0);
+#endif
return retval;
}
* XXX: __block_prepare_write() unmaps passed block,
* is it OK?
*/
- /* If the block was allocated from previously allocated cluster,
- * then we dont need to reserve it again. */
+ /*
+ * If the block was allocated from a previously allocated cluster,
+ * then we don't need to reserve it again. However we still need
+ * to reserve metadata for every block we're going to write.
+ */
if (!(map->m_flags & EXT4_MAP_FROM_CLUSTER)) {
ret = ext4_da_reserve_space(inode, iblock);
if (ret) {
retval = ret;
goto out_unlock;
}
+ } else {
+ ret = ext4_da_reserve_metadata(inode, iblock);
+ if (ret) {
+ /* not enough space to reserve */
+ retval = ret;
+ goto out_unlock;
+ }
}
ret = ext4_es_insert_extent(inode, map->m_lblk, map->m_len,
int ret;
unsigned long long status;
+#ifdef ES_AGGRESSIVE_TEST
+ if (retval != map->m_len) {
+ printk("ES len assertation failed for inode: %lu "
+ "retval %d != map->m_len %d "
+ "in %s (lookup)\n", inode->i_ino, retval,
+ map->m_len, __func__);
+ }
+#endif
+
status = map->m_flags & EXT4_MAP_UNWRITTEN ?
EXTENT_STATUS_UNWRITTEN : EXTENT_STATUS_WRITTEN;
ret = ext4_es_insert_extent(inode, map->m_lblk, map->m_len,
trace_ext4_releasepage(page);
- WARN_ON(PageChecked(page));
- if (!page_has_buffers(page))
+ /* Page has dirty journalled data -> cannot release */
+ if (PageChecked(page))
return 0;
if (journal)
return jbd2_journal_try_to_free_buffers(journal, page, wait);
if (sbi->s_log_groups_per_flex) {
ext4_group_t flex_group = ext4_flex_group(sbi,
ac->ac_b_ex.fe_group);
- atomic_sub(ac->ac_b_ex.fe_len,
- &sbi->s_flex_groups[flex_group].free_clusters);
+ atomic64_sub(ac->ac_b_ex.fe_len,
+ &sbi->s_flex_groups[flex_group].free_clusters);
}
err = ext4_handle_dirty_metadata(handle, NULL, bitmap_bh);
if (free < needed && busy) {
busy = 0;
ext4_unlock_group(sb, group);
- /*
- * Yield the CPU here so that we don't get soft lockup
- * in non preempt case.
- */
- yield();
+ cond_resched();
goto repeat;
}
ext4_claim_free_clusters(sbi, ar->len, ar->flags)) {
/* let others to free the space */
- yield();
+ cond_resched();
ar->len = ar->len >> 1;
}
if (!ar->len) {
struct buffer_head *bitmap_bh = NULL;
struct super_block *sb = inode->i_sb;
struct ext4_group_desc *gdp;
- unsigned long freed = 0;
unsigned int overflow;
ext4_grpblk_t bit;
struct buffer_head *gd_bh;
if (sbi->s_log_groups_per_flex) {
ext4_group_t flex_group = ext4_flex_group(sbi, block_group);
- atomic_add(count_clusters,
- &sbi->s_flex_groups[flex_group].free_clusters);
+ atomic64_add(count_clusters,
+ &sbi->s_flex_groups[flex_group].free_clusters);
}
ext4_mb_unload_buddy(&e4b);
- freed += count;
-
if (!(flags & EXT4_FREE_BLOCKS_NO_QUOT_UPDATE))
dquot_free_block(inode, EXT4_C2B(sbi, count_clusters));
if (sbi->s_log_groups_per_flex) {
ext4_group_t flex_group = ext4_flex_group(sbi, block_group);
- atomic_add(EXT4_NUM_B2C(sbi, blocks_freed),
- &sbi->s_flex_groups[flex_group].free_clusters);
+ atomic64_add(EXT4_NUM_B2C(sbi, blocks_freed),
+ &sbi->s_flex_groups[flex_group].free_clusters);
}
ext4_mb_unload_buddy(&e4b);
*/
static inline int
get_ext_path(struct inode *inode, ext4_lblk_t lblock,
- struct ext4_ext_path **path)
+ struct ext4_ext_path **orig_path)
{
int ret = 0;
+ struct ext4_ext_path *path;
- *path = ext4_ext_find_extent(inode, lblock, *path);
- if (IS_ERR(*path)) {
- ret = PTR_ERR(*path);
- *path = NULL;
- } else if ((*path)[ext_depth(inode)].p_ext == NULL)
+ path = ext4_ext_find_extent(inode, lblock, *orig_path);
+ if (IS_ERR(path))
+ ret = PTR_ERR(path);
+ else if (path[ext_depth(inode)].p_ext == NULL)
ret = -ENODATA;
+ else
+ *orig_path = path;
return ret;
}
{
struct ext4_ext_path *path = NULL;
struct ext4_extent *ext;
+ int ret = 0;
ext4_lblk_t last = from + count;
while (from < last) {
*err = get_ext_path(inode, from, &path);
if (*err)
- return 0;
+ goto out;
ext = path[ext_depth(inode)].p_ext;
- if (!ext) {
- ext4_ext_drop_refs(path);
- return 0;
- }
- if (uninit != ext4_ext_is_uninitialized(ext)) {
- ext4_ext_drop_refs(path);
- return 0;
- }
+ if (uninit != ext4_ext_is_uninitialized(ext))
+ goto out;
from += ext4_ext_get_actual_len(ext);
ext4_ext_drop_refs(path);
}
- return 1;
+ ret = 1;
+out:
+ if (path) {
+ ext4_ext_drop_refs(path);
+ kfree(path);
+ }
+ return ret;
}
/**
int replaced_count = 0;
int dext_alen;
+ *err = ext4_es_remove_extent(orig_inode, from, count);
+ if (*err)
+ goto out;
+
+ *err = ext4_es_remove_extent(donor_inode, from, count);
+ if (*err)
+ goto out;
+
/* Get the original extent for the block "orig_off" */
*err = get_ext_path(orig_inode, orig_off, &orig_path);
if (*err)
kmem_cache_destroy(io_page_cachep);
}
-void ext4_ioend_wait(struct inode *inode)
+/*
+ * This function is called by ext4_evict_inode() to make sure there is
+ * no more pending I/O completion work left to do.
+ */
+void ext4_ioend_shutdown(struct inode *inode)
{
wait_queue_head_t *wq = ext4_ioend_wq(inode);
wait_event(*wq, (atomic_read(&EXT4_I(inode)->i_ioend_count) == 0));
+ /*
+ * We need to make sure the work structure is finished being
+ * used before we let the inode get destroyed.
+ */
+ if (work_pending(&EXT4_I(inode)->i_unwritten_work))
+ cancel_work_sync(&EXT4_I(inode)->i_unwritten_work);
}
static void put_io_page(struct ext4_io_page *io_page)
sbi->s_log_groups_per_flex) {
ext4_group_t flex_group;
flex_group = ext4_flex_group(sbi, group_data[0].group);
- atomic_add(EXT4_NUM_B2C(sbi, free_blocks),
- &sbi->s_flex_groups[flex_group].free_clusters);
+ atomic64_add(EXT4_NUM_B2C(sbi, free_blocks),
+ &sbi->s_flex_groups[flex_group].free_clusters);
atomic_add(EXT4_INODES_PER_GROUP(sb) * flex_gd->count,
&sbi->s_flex_groups[flex_group].free_inodes);
}
flex_group = ext4_flex_group(sbi, i);
atomic_add(ext4_free_inodes_count(sb, gdp),
&sbi->s_flex_groups[flex_group].free_inodes);
- atomic_add(ext4_free_group_clusters(sb, gdp),
- &sbi->s_flex_groups[flex_group].free_clusters);
+ atomic64_add(ext4_free_group_clusters(sb, gdp),
+ &sbi->s_flex_groups[flex_group].free_clusters);
atomic_add(ext4_used_dirs_count(sb, gdp),
&sbi->s_flex_groups[flex_group].used_dirs);
}
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/kthread.h>
-#include <linux/freezer.h>
#include <linux/writeback.h>
#include <linux/blkdev.h>
#include <linux/backing-dev.h>
#define CREATE_TRACE_POINTS
#include <trace/events/writeback.h>
-/* Wakeup flusher thread or forker thread to fork it. Requires bdi->wb_lock. */
-static void bdi_wakeup_flusher(struct backing_dev_info *bdi)
-{
- if (bdi->wb.task) {
- wake_up_process(bdi->wb.task);
- } else {
- /*
- * The bdi thread isn't there, wake up the forker thread which
- * will create and run it.
- */
- wake_up_process(default_backing_dev_info.wb.task);
- }
-}
-
static void bdi_queue_work(struct backing_dev_info *bdi,
struct wb_writeback_work *work)
{
spin_lock_bh(&bdi->wb_lock);
list_add_tail(&work->list, &bdi->work_list);
- if (!bdi->wb.task)
- trace_writeback_nothread(bdi, work);
- bdi_wakeup_flusher(bdi);
spin_unlock_bh(&bdi->wb_lock);
+
+ mod_delayed_work(bdi_wq, &bdi->wb.dwork, 0);
}
static void
*/
work = kzalloc(sizeof(*work), GFP_ATOMIC);
if (!work) {
- if (bdi->wb.task) {
- trace_writeback_nowork(bdi);
- wake_up_process(bdi->wb.task);
- }
+ trace_writeback_nowork(bdi);
+ mod_delayed_work(bdi_wq, &bdi->wb.dwork, 0);
return;
}
* writeback as soon as there is no other work to do.
*/
trace_writeback_wake_background(bdi);
- spin_lock_bh(&bdi->wb_lock);
- bdi_wakeup_flusher(bdi);
- spin_unlock_bh(&bdi->wb_lock);
+ mod_delayed_work(bdi_wq, &bdi->wb.dwork, 0);
}
/*
/*
* Handle writeback of dirty data for the device backed by this bdi. Also
- * wakes up periodically and does kupdated style flushing.
+ * reschedules periodically and does kupdated style flushing.
*/
-int bdi_writeback_thread(void *data)
+void bdi_writeback_workfn(struct work_struct *work)
{
- struct bdi_writeback *wb = data;
+ struct bdi_writeback *wb = container_of(to_delayed_work(work),
+ struct bdi_writeback, dwork);
struct backing_dev_info *bdi = wb->bdi;
long pages_written;
current->flags |= PF_SWAPWRITE;
- set_freezable();
- wb->last_active = jiffies;
-
- /*
- * Our parent may run at a different priority, just set us to normal
- */
- set_user_nice(current, 0);
-
- trace_writeback_thread_start(bdi);
- while (!kthread_freezable_should_stop(NULL)) {
+ if (likely(!current_is_workqueue_rescuer() ||
+ list_empty(&bdi->bdi_list))) {
/*
- * Remove own delayed wake-up timer, since we are already awake
- * and we'll take care of the periodic write-back.
+ * The normal path. Keep writing back @bdi until its
+ * work_list is empty. Note that this path is also taken
+ * if @bdi is shutting down even when we're running off the
+ * rescuer as work_list needs to be drained.
*/
- del_timer(&wb->wakeup_timer);
-
- pages_written = wb_do_writeback(wb, 0);
-
+ do {
+ pages_written = wb_do_writeback(wb, 0);
+ trace_writeback_pages_written(pages_written);
+ } while (!list_empty(&bdi->work_list));
+ } else {
+ /*
+ * bdi_wq can't get enough workers and we're running off
+ * the emergency worker. Don't hog it. Hopefully, 1024 is
+ * enough for efficient IO.
+ */
+ pages_written = writeback_inodes_wb(&bdi->wb, 1024,
+ WB_REASON_FORKER_THREAD);
trace_writeback_pages_written(pages_written);
-
- if (pages_written)
- wb->last_active = jiffies;
-
- set_current_state(TASK_INTERRUPTIBLE);
- if (!list_empty(&bdi->work_list) || kthread_should_stop()) {
- __set_current_state(TASK_RUNNING);
- continue;
- }
-
- if (wb_has_dirty_io(wb) && dirty_writeback_interval)
- schedule_timeout(msecs_to_jiffies(dirty_writeback_interval * 10));
- else {
- /*
- * We have nothing to do, so can go sleep without any
- * timeout and save power. When a work is queued or
- * something is made dirty - we will be woken up.
- */
- schedule();
- }
}
- /* Flush any work that raced with us exiting */
- if (!list_empty(&bdi->work_list))
- wb_do_writeback(wb, 1);
+ if (!list_empty(&bdi->work_list) ||
+ (wb_has_dirty_io(wb) && dirty_writeback_interval))
+ queue_delayed_work(bdi_wq, &wb->dwork,
+ msecs_to_jiffies(dirty_writeback_interval * 10));
- trace_writeback_thread_stop(bdi);
- return 0;
+ current->flags &= ~PF_SWAPWRITE;
}
-
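With bdi_writeback_thread() gone, each bdi's flushing is driven by a delayed work item on the shared bdi_wq instead of a dedicated kthread; mod_delayed_work(..., 0) replaces the old explicit thread wakeups, and the workfn re-queues itself for the periodic kupdated-style pass. A minimal sketch of the same pattern outside writeback, with hypothetical names:

    static struct workqueue_struct *my_wq;
    static struct delayed_work my_dwork;

    static void my_workfn(struct work_struct *work)
    {
            do_flush();     /* hypothetical periodic action */
            /* re-arm ourselves instead of sleeping in a thread loop */
            queue_delayed_work(my_wq, &my_dwork, msecs_to_jiffies(5000));
    }

    /* setup: */
    my_wq = alloc_workqueue("my_wq", WQ_MEM_RECLAIM, 0);
    INIT_DELAYED_WORK(&my_dwork, my_workfn);
    mod_delayed_work(my_wq, &my_dwork, 0);  /* kick the first pass now */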
/*
* Start writeback of `nr_pages' pages. If `nr_pages' is zero, write back
* the whole world.
* dcache.c
*/
extern struct dentry *__d_alloc(struct super_block *, const struct qstr *);
+
+/*
+ * read_write.c
+ */
+extern ssize_t __kernel_write(struct file *, const char *, size_t, loff_t *);
void jbd2_journal_set_triggers(struct buffer_head *bh,
struct jbd2_buffer_trigger_type *type)
{
- struct journal_head *jh = bh2jh(bh);
+ struct journal_head *jh = jbd2_journal_grab_journal_head(bh);
+ if (WARN_ON(!jh))
+ return;
jh->b_triggers = type;
+ jbd2_journal_put_journal_head(jh);
}
void jbd2_buffer_frozen_trigger(struct journal_head *jh, void *mapped_data,
{
transaction_t *transaction = handle->h_transaction;
journal_t *journal = transaction->t_journal;
- struct journal_head *jh = bh2jh(bh);
+ struct journal_head *jh;
int ret = 0;
- jbd_debug(5, "journal_head %p\n", jh);
- JBUFFER_TRACE(jh, "entry");
if (is_handle_aborted(handle))
goto out;
- if (!buffer_jbd(bh)) {
+ jh = jbd2_journal_grab_journal_head(bh);
+ if (!jh) {
ret = -EUCLEAN;
goto out;
}
+ jbd_debug(5, "journal_head %p\n", jh);
+ JBUFFER_TRACE(jh, "entry");
jbd_lock_bh_state(bh);
spin_unlock(&journal->j_list_lock);
out_unlock_bh:
jbd_unlock_bh_state(bh);
+ jbd2_journal_put_journal_head(jh);
out:
JBUFFER_TRACE(jh, "exit");
WARN_ON(ret); /* All errors are bugs, so dump the stack */
}
mnt->mnt.mnt_flags = old->mnt.mnt_flags & ~MNT_WRITE_HOLD;
+ /* Don't allow unprivileged users to change mount flags */
+ if ((flag & CL_UNPRIVILEGED) && (mnt->mnt.mnt_flags & MNT_READONLY))
+ mnt->mnt.mnt_flags |= MNT_LOCK_READONLY;
+
atomic_inc(&sb->s_active);
mnt->mnt.mnt_sb = sb;
mnt->mnt.mnt_root = dget(root);
if (readonly_request == __mnt_is_readonly(mnt))
return 0;
+ if (mnt->mnt_flags & MNT_LOCK_READONLY)
+ return -EPERM;
+
if (readonly_request)
error = mnt_make_readonly(real_mount(mnt));
else
/* First pass: copy the tree topology */
copy_flags = CL_COPY_ALL | CL_EXPIRE;
if (user_ns != mnt_ns->user_ns)
- copy_flags |= CL_SHARED_TO_SLAVE;
+ copy_flags |= CL_SHARED_TO_SLAVE | CL_UNPRIVILEGED;
new = copy_tree(old, old->mnt.mnt_root, copy_flags);
if (IS_ERR(new)) {
up_write(&namespace_sem);
return check_mnt(real_mount(mnt));
}
+bool current_chrooted(void)
+{
+ /* Does the current process have a non-standard root */
+ struct path ns_root;
+ struct path fs_root;
+ bool chrooted;
+
+ /* Find the namespace root */
+ ns_root.mnt = &current->nsproxy->mnt_ns->root->mnt;
+ ns_root.dentry = ns_root.mnt->mnt_root;
+ path_get(&ns_root);
+ while (d_mountpoint(ns_root.dentry) && follow_down_one(&ns_root))
+ ;
+
+ get_fs_root(current->fs, &fs_root);
+
+ chrooted = !path_equal(&fs_root, &ns_root);
+
+ path_put(&fs_root);
+ path_put(&ns_root);
+
+ return chrooted;
+}
+
+void update_mnt_policy(struct user_namespace *userns)
+{
+ struct mnt_namespace *ns = current->nsproxy->mnt_ns;
+ struct mount *mnt;
+
+ down_read(&namespace_sem);
+ list_for_each_entry(mnt, &ns->list, mnt_list) {
+ switch (mnt->mnt.mnt_sb->s_magic) {
+ case SYSFS_MAGIC:
+ userns->may_mount_sysfs = true;
+ break;
+ case PROC_SUPER_MAGIC:
+ userns->may_mount_proc = true;
+ break;
+ }
+ if (userns->may_mount_sysfs && userns->may_mount_proc)
+ break;
+ }
+ up_read(&namespace_sem);
+}
+
static void *mntns_get(struct task_struct *task)
{
struct mnt_namespace *ns = NULL;
bl_pipe_msg.bl_wq = &nn->bl_wq;
memset(msg, 0, sizeof(*msg));
- msg->data = kzalloc(1 + sizeof(bl_umount_request), GFP_NOFS);
+ msg->len = sizeof(bl_msg) + bl_msg.totallen;
+ msg->data = kzalloc(msg->len, GFP_NOFS);
if (!msg->data)
goto out;
memcpy(msg->data, &bl_msg, sizeof(bl_msg));
dataptr = (uint8_t *) msg->data;
memcpy(&dataptr[sizeof(bl_msg)], &bl_umount_request, sizeof(bl_umount_request));
- msg->len = sizeof(bl_msg) + bl_msg.totallen;
add_wait_queue(&nn->bl_wq, &wq);
if (rpc_queue_upcall(nn->bl_device_pipe, msg) < 0) {
return ret;
}
-static int nfs_idmap_instantiate(struct key *key, struct key *authkey, char *data)
+static int nfs_idmap_instantiate(struct key *key, struct key *authkey, char *data, size_t datalen)
{
- return key_instantiate_and_link(key, data, strlen(data) + 1,
+ return key_instantiate_and_link(key, data, datalen,
id_resolver_cache->thread_keyring,
authkey);
}
struct key *key, struct key *authkey)
{
char id_str[NFS_UINT_MAXLEN];
+ size_t len;
int ret = -ENOKEY;
/* ret = -ENOKEY */
case IDMAP_CONV_NAMETOID:
if (strcmp(upcall->im_name, im->im_name) != 0)
break;
- sprintf(id_str, "%d", im->im_id);
- ret = nfs_idmap_instantiate(key, authkey, id_str);
+ /* Note: here we store the NUL terminator too */
+ len = sprintf(id_str, "%d", im->im_id) + 1;
+ ret = nfs_idmap_instantiate(key, authkey, id_str, len);
break;
case IDMAP_CONV_IDTONAME:
if (upcall->im_id != im->im_id)
break;
- ret = nfs_idmap_instantiate(key, authkey, im->im_name);
+ len = strlen(im->im_name);
+ ret = nfs_idmap_instantiate(key, authkey, im->im_name, len);
break;
default:
ret = -EINVAL;
{
if (!test_and_clear_bit(NFS_LAYOUT_RETURN, &lo->plh_flags))
return;
- clear_bit(NFS_INO_LAYOUTCOMMIT, &NFS_I(inode)->flags);
pnfs_return_layout(inode);
}
int status;
if (pnfs_ld_layoutret_on_setattr(inode))
- pnfs_return_layout(inode);
+ pnfs_commit_and_return_layout(inode);
nfs_fattr_init(fattr);
static void nfs4_layoutcommit_release(void *calldata)
{
struct nfs4_layoutcommit_data *data = calldata;
- struct pnfs_layout_segment *lseg, *tmp;
- unsigned long *bitlock = &NFS_I(data->args.inode)->flags;
pnfs_cleanup_layoutcommit(data);
- /* Matched by references in pnfs_set_layoutcommit */
- list_for_each_entry_safe(lseg, tmp, &data->lseg_list, pls_lc_list) {
- list_del_init(&lseg->pls_lc_list);
- if (test_and_clear_bit(NFS_LSEG_LAYOUTCOMMIT,
- &lseg->pls_flags))
- pnfs_put_lseg(lseg);
- }
-
- clear_bit_unlock(NFS_INO_LAYOUTCOMMITTING, bitlock);
- smp_mb__after_clear_bit();
- wake_up_bit(bitlock, NFS_INO_LAYOUTCOMMITTING);
-
put_rpccred(data->cred);
kfree(data);
}
lo_seg_intersecting(lseg_range, recall_range);
}
+static bool pnfs_lseg_dec_and_remove_zero(struct pnfs_layout_segment *lseg,
+ struct list_head *tmp_list)
+{
+ if (!atomic_dec_and_test(&lseg->pls_refcount))
+ return false;
+ pnfs_layout_remove_lseg(lseg->pls_layout, lseg);
+ list_add(&lseg->pls_list, tmp_list);
+ return true;
+}
+
/* Returns 1 if lseg is removed from list, 0 otherwise */
static int mark_lseg_invalid(struct pnfs_layout_segment *lseg,
struct list_head *tmp_list)
*/
dprintk("%s: lseg %p ref %d\n", __func__, lseg,
atomic_read(&lseg->pls_refcount));
- if (atomic_dec_and_test(&lseg->pls_refcount)) {
- pnfs_layout_remove_lseg(lseg->pls_layout, lseg);
- list_add(&lseg->pls_list, tmp_list);
+ if (pnfs_lseg_dec_and_remove_zero(lseg, tmp_list))
rv = 1;
- }
}
return rv;
}
return lseg;
}
+static void pnfs_clear_layoutcommit(struct inode *inode,
+ struct list_head *head)
+{
+ struct nfs_inode *nfsi = NFS_I(inode);
+ struct pnfs_layout_segment *lseg, *tmp;
+
+ if (!test_and_clear_bit(NFS_INO_LAYOUTCOMMIT, &nfsi->flags))
+ return;
+ list_for_each_entry_safe(lseg, tmp, &nfsi->layout->plh_segs, pls_list) {
+ if (!test_and_clear_bit(NFS_LSEG_LAYOUTCOMMIT, &lseg->pls_flags))
+ continue;
+ pnfs_lseg_dec_and_remove_zero(lseg, head);
+ }
+}
+
/*
* Initiates a LAYOUTRETURN(FILE), and removes the pnfs_layout_hdr
* when the layout segment list is empty.
/* Reference matched in nfs4_layoutreturn_release */
pnfs_get_layout_hdr(lo);
empty = list_empty(&lo->plh_segs);
+ pnfs_clear_layoutcommit(ino, &tmp_list);
pnfs_mark_matching_lsegs_invalid(lo, &tmp_list, NULL);
/* Don't send a LAYOUTRETURN if list was initially empty */
if (empty) {
spin_unlock(&ino->i_lock);
pnfs_free_lseg_list(&tmp_list);
- WARN_ON(test_bit(NFS_INO_LAYOUTCOMMIT, &nfsi->flags));
-
lrp = kzalloc(sizeof(*lrp), GFP_KERNEL);
if (unlikely(lrp == NULL)) {
status = -ENOMEM;
}
EXPORT_SYMBOL_GPL(_pnfs_return_layout);
+int
+pnfs_commit_and_return_layout(struct inode *inode)
+{
+ struct pnfs_layout_hdr *lo;
+ int ret;
+
+ spin_lock(&inode->i_lock);
+ lo = NFS_I(inode)->layout;
+ if (lo == NULL) {
+ spin_unlock(&inode->i_lock);
+ return 0;
+ }
+ pnfs_get_layout_hdr(lo);
+ /* Block new layoutgets and read/write to ds */
+ lo->plh_block_lgets++;
+ spin_unlock(&inode->i_lock);
+ filemap_fdatawait(inode->i_mapping);
+ ret = pnfs_layoutcommit_inode(inode, true);
+ if (ret == 0)
+ ret = _pnfs_return_layout(inode);
+ spin_lock(&inode->i_lock);
+ lo->plh_block_lgets--;
+ spin_unlock(&inode->i_lock);
+ pnfs_put_layout_hdr(lo);
+ return ret;
+}
+
bool pnfs_roc(struct inode *ino)
{
struct pnfs_layout_hdr *lo;
dprintk("pnfs write error = %d\n", hdr->pnfs_error);
if (NFS_SERVER(hdr->inode)->pnfs_curr_ld->flags &
PNFS_LAYOUTRET_ON_ERROR) {
- clear_bit(NFS_INO_LAYOUTCOMMIT, &NFS_I(hdr->inode)->flags);
pnfs_return_layout(hdr->inode);
}
if (!test_and_set_bit(NFS_IOHDR_REDO, &hdr->flags))
dprintk("pnfs read error = %d\n", hdr->pnfs_error);
if (NFS_SERVER(hdr->inode)->pnfs_curr_ld->flags &
PNFS_LAYOUTRET_ON_ERROR) {
- clear_bit(NFS_INO_LAYOUTCOMMIT, &NFS_I(hdr->inode)->flags);
pnfs_return_layout(hdr->inode);
}
if (!test_and_set_bit(NFS_IOHDR_REDO, &hdr->flags))
list_for_each_entry(lseg, &NFS_I(inode)->layout->plh_segs, pls_list) {
if (lseg->pls_range.iomode == IOMODE_RW &&
- test_bit(NFS_LSEG_LAYOUTCOMMIT, &lseg->pls_flags))
+ test_and_clear_bit(NFS_LSEG_LAYOUTCOMMIT, &lseg->pls_flags))
list_add(&lseg->pls_lc_list, listp);
}
}
+static void pnfs_list_write_lseg_done(struct inode *inode, struct list_head *listp)
+{
+ struct pnfs_layout_segment *lseg, *tmp;
+ unsigned long *bitlock = &NFS_I(inode)->flags;
+
+ /* Matched by references in pnfs_set_layoutcommit */
+ list_for_each_entry_safe(lseg, tmp, listp, pls_lc_list) {
+ list_del_init(&lseg->pls_lc_list);
+ pnfs_put_lseg(lseg);
+ }
+
+ clear_bit_unlock(NFS_INO_LAYOUTCOMMITTING, bitlock);
+ smp_mb__after_clear_bit();
+ wake_up_bit(bitlock, NFS_INO_LAYOUTCOMMITTING);
+}
+
void pnfs_set_lo_fail(struct pnfs_layout_segment *lseg)
{
pnfs_layout_io_set_failed(lseg->pls_layout, lseg->pls_range.iomode);
if (nfss->pnfs_curr_ld->cleanup_layoutcommit)
nfss->pnfs_curr_ld->cleanup_layoutcommit(data);
+ pnfs_list_write_lseg_done(data->args.inode, &data->lseg_list);
}
/*
void pnfs_cleanup_layoutcommit(struct nfs4_layoutcommit_data *data);
int pnfs_layoutcommit_inode(struct inode *inode, bool sync);
int _pnfs_return_layout(struct inode *);
+int pnfs_commit_and_return_layout(struct inode *);
void pnfs_ld_write_done(struct nfs_write_data *);
void pnfs_ld_read_done(struct nfs_read_data *);
struct pnfs_layout_segment *pnfs_update_layout(struct inode *ino,
return 0;
}
+static inline int pnfs_commit_and_return_layout(struct inode *inode)
+{
+ return 0;
+}
+
static inline bool
pnfs_ld_layoutret_on_setattr(struct inode *inode)
{
{
if (rp->c_type == RC_REPLBUFF)
kfree(rp->c_replvec.iov_base);
- hlist_del(&rp->c_hash);
+ if (!hlist_unhashed(&rp->c_hash))
+ hlist_del(&rp->c_hash);
list_del(&rp->c_lru);
--num_drc_entries;
kmem_cache_free(drc_slab, rp);
int nfsd_reply_cache_init(void)
{
+ INIT_LIST_HEAD(&lru_head);
+ max_drc_entries = nfsd_cache_size_limit();
+ num_drc_entries = 0;
+
register_shrinker(&nfsd_reply_cache_shrinker);
drc_slab = kmem_cache_create("nfsd_drc", sizeof(struct svc_cacherep),
0, 0, NULL);
if (!cache_hash)
goto out_nomem;
- INIT_LIST_HEAD(&lru_head);
- max_drc_entries = nfsd_cache_size_limit();
- num_drc_entries = 0;
-
return 0;
out_nomem:
printk(KERN_ERR "nfsd: failed to allocate reply cache\n");
int host_err;
int stable = *stablep;
int use_wgather;
+ loff_t pos = offset;
dentry = file->f_path.dentry;
inode = dentry->d_inode;
/* Write the data. */
oldfs = get_fs(); set_fs(KERNEL_DS);
- host_err = vfs_writev(file, (struct iovec __user *)vec, vlen, &offset);
+ host_err = vfs_writev(file, (struct iovec __user *)vec, vlen, &pos);
set_fs(oldfs);
if (host_err < 0)
goto out_nfserr;
#include <linux/mnt_namespace.h>
#include <linux/mount.h>
#include <linux/fs.h>
+#include <linux/nsproxy.h>
#include "internal.h"
#include "pnode.h"
int propagate_mnt(struct mount *dest_mnt, struct dentry *dest_dentry,
struct mount *source_mnt, struct list_head *tree_list)
{
+ struct user_namespace *user_ns = current->nsproxy->mnt_ns->user_ns;
struct mount *m, *child;
int ret = 0;
struct mount *prev_dest_mnt = dest_mnt;
source = get_source(m, prev_dest_mnt, prev_src_mnt, &type);
+ /* Notice when we are propagating across user namespaces */
+ if (m->mnt_ns->user_ns != user_ns)
+ type |= CL_UNPRIVILEGED;
+
child = copy_tree(source, source->mnt.mnt_root, type);
if (IS_ERR(child)) {
ret = PTR_ERR(child);
#define CL_MAKE_SHARED 0x08
#define CL_PRIVATE 0x10
#define CL_SHARED_TO_SLAVE 0x20
+#define CL_UNPRIVILEGED 0x40
static inline void set_mnt_shared(struct mount *mnt)
{
struct inode *proc_get_inode(struct super_block *sb, struct proc_dir_entry *de)
{
- struct inode *inode = iget_locked(sb, de->low_ino);
+ struct inode *inode = new_inode_pseudo(sb);
- if (inode && (inode->i_state & I_NEW)) {
+ if (inode) {
+ inode->i_ino = de->low_ino;
inode->i_mtime = inode->i_atime = inode->i_ctime = CURRENT_TIME;
PROC_I(inode)->pde = de;
inode->i_fop = de->proc_fops;
}
}
- unlock_new_inode(inode);
} else
pde_put(de);
return inode;
#include <linux/sched.h>
#include <linux/module.h>
#include <linux/bitops.h>
+#include <linux/user_namespace.h>
#include <linux/mount.h>
#include <linux/pid_namespace.h>
#include <linux/parser.h>
} else {
ns = task_active_pid_ns(current);
options = data;
+
+ if (!current_user_ns()->may_mount_proc)
+ return ERR_PTR(-EPERM);
}
sb = sget(fs_type, proc_test_super, proc_set_super, flags, ns);
#include <linux/splice.h>
#include <linux/compat.h>
#include "read_write.h"
+#include "internal.h"
#include <asm/uaccess.h>
#include <asm/unistd.h>
EXPORT_SYMBOL(do_sync_write);
+ssize_t __kernel_write(struct file *file, const char *buf, size_t count, loff_t *pos)
+{
+ mm_segment_t old_fs;
+ const char __user *p;
+ ssize_t ret;
+
+ if (!file->f_op || (!file->f_op->write && !file->f_op->aio_write))
+ return -EINVAL;
+
+ old_fs = get_fs();
+ set_fs(get_ds());
+ p = (__force const char __user *)buf;
+ if (count > MAX_RW_COUNT)
+ count = MAX_RW_COUNT;
+ if (file->f_op->write)
+ ret = file->f_op->write(file, p, count, pos);
+ else
+ ret = do_sync_write(file, p, count, pos);
+ set_fs(old_fs);
+ if (ret > 0) {
+ fsnotify_modify(file);
+ add_wchar(current, ret);
+ }
+ inc_syscw(current);
+ return ret;
+}
+
ssize_t vfs_write(struct file *file, const char __user *buf, size_t count, loff_t *pos)
{
ssize_t ret;
#include <linux/security.h>
#include <linux/gfp.h>
#include <linux/socket.h>
+#include "internal.h"
/*
* Attempt to steal a page from a pipe buffer. This should perhaps go into
{
int ret;
void *data;
+ loff_t tmp = sd->pos;
data = buf->ops->map(pipe, buf, 0);
- ret = kernel_write(sd->u.file, data + buf->offset, sd->len, sd->pos);
+ ret = __kernel_write(sd->u.file, data + buf->offset, sd->len, &tmp);
buf->ops->unmap(pipe, buf, data);
return ret;
ino = parent_sd->s_ino;
if (filldir(dirent, ".", 1, filp->f_pos, ino, DT_DIR) == 0)
filp->f_pos++;
+ else
+ return 0;
}
if (filp->f_pos == 1) {
if (parent_sd->s_parent)
ino = parent_sd->s_ino;
if (filldir(dirent, "..", 2, filp->f_pos, ino, DT_DIR) == 0)
filp->f_pos++;
+ else
+ return 0;
}
mutex_lock(&sysfs_mutex);
for (pos = sysfs_dir_pos(ns, parent_sd, filp->f_pos, pos);
return 0;
}
+static loff_t sysfs_dir_llseek(struct file *file, loff_t offset, int whence)
+{
+ struct inode *inode = file_inode(file);
+ loff_t ret;
+
+ mutex_lock(&inode->i_mutex);
+ ret = generic_file_llseek(file, offset, whence);
+ mutex_unlock(&inode->i_mutex);
+
+ return ret;
+}
const struct file_operations sysfs_dir_operations = {
.read = generic_read_dir,
.readdir = sysfs_readdir,
.release = sysfs_dir_release,
- .llseek = generic_file_llseek,
+ .llseek = sysfs_dir_llseek,
};
#include <linux/module.h>
#include <linux/magic.h>
#include <linux/slab.h>
+#include <linux/user_namespace.h>
#include "sysfs.h"
struct super_block *sb;
int error;
+ if (!(flags & MS_KERNMOUNT) && !current_user_ns()->may_mount_sysfs)
+ return ERR_PTR(-EPERM);
+
info = kzalloc(sizeof(*info), GFP_KERNEL);
if (!info)
return ERR_PTR(-ENOMEM);
int size;
int i;
+ /*
+ * Make sure we capture only current IO errors rather than stale errors
+ * left over from previous use of the buffer (e.g. failed readahead).
+ */
+ bp->b_error = 0;
+
if (bp->b_flags & XBF_WRITE) {
if (bp->b_flags & XBF_SYNCIO)
rw = WRITE_SYNC;
* rather than falling short due to things like stripe unit/width alignment of
* real extents.
*/
-STATIC int
+STATIC xfs_fsblock_t
xfs_iomap_eof_prealloc_initial_size(
struct xfs_mount *mp,
struct xfs_inode *ip,
* have a large file on a small filesystem and the above
* lowspace thresholds are smaller than MAXEXTLEN.
*/
- while (alloc_blocks >= freesp)
+ while (alloc_blocks && alloc_blocks >= freesp)
alloc_blocks >>= 4;
}
{0x1002, 0x9908, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x9909, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x990A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
- {0x1002, 0x990F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+ {0x1002, 0x990B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+ {0x1002, 0x990C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+ {0x1002, 0x990D, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+ {0x1002, 0x990E, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+ {0x1002, 0x990F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x9910, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x9913, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x9917, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x9992, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x9993, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x9994, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+ {0x1002, 0x9995, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+ {0x1002, 0x9996, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+ {0x1002, 0x9997, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+ {0x1002, 0x9998, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+ {0x1002, 0x9999, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+ {0x1002, 0x999A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+ {0x1002, 0x999B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x99A0, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x99A2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x99A4, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
#include <linux/writeback.h>
#include <linux/atomic.h>
#include <linux/sysctl.h>
+#include <linux/workqueue.h>
struct page;
struct device;
* Bits in backing_dev_info.state
*/
enum bdi_state {
- BDI_pending, /* On its way to being activated */
BDI_wb_alloc, /* Default embedded wb allocated */
BDI_async_congested, /* The async (write) queue is getting full */
BDI_sync_congested, /* The sync queue is getting full */
unsigned int nr;
unsigned long last_old_flush; /* last old data flush */
- unsigned long last_active; /* last time bdi thread was active */
- struct task_struct *task; /* writeback thread */
- struct timer_list wakeup_timer; /* used for delayed bdi thread wakeup */
+ struct delayed_work dwork; /* work item used for writeback */
struct list_head b_dirty; /* dirty inodes */
struct list_head b_io; /* parked for writeback */
struct list_head b_more_io; /* parked for more writeback */
void bdi_start_writeback(struct backing_dev_info *bdi, long nr_pages,
enum wb_reason reason);
void bdi_start_background_writeback(struct backing_dev_info *bdi);
-int bdi_writeback_thread(void *data);
+void bdi_writeback_workfn(struct work_struct *work);
int bdi_has_dirty_io(struct backing_dev_info *bdi);
void bdi_wakeup_thread_delayed(struct backing_dev_info *bdi);
void bdi_lock_two(struct bdi_writeback *wb1, struct bdi_writeback *wb2);
extern spinlock_t bdi_lock;
extern struct list_head bdi_list;
-extern struct list_head bdi_pending_list;
+
+extern struct workqueue_struct *bdi_wq;
static inline int wb_has_dirty_io(struct bdi_writeback *wb)
{
return bdi->capabilities & BDI_CAP_SWAP_BACKED;
}
-static inline bool bdi_cap_flush_forker(struct backing_dev_info *bdi)
-{
- return bdi == &default_backing_dev_info;
-}
-
static inline bool mapping_cap_writeback_dirty(struct address_space *mapping)
{
return bdi_cap_writeback_dirty(mapping->backing_dev_info);
nr_cpumask_bits);
}
+/**
+ * cpumask_parse - extract a cpumask from from a string
+ * @buf: the buffer to extract from
+ * @dstp: the cpumask to set.
+ *
+ * Returns -errno, or 0 for success.
+ */
+static inline int cpumask_parse(const char *buf, struct cpumask *dstp)
+{
+ char *nl = strchr(buf, '\n');
+ int len = nl ? nl - buf : strlen(buf);
+
+ return bitmap_parse(buf, len, cpumask_bits(dstp), nr_cpumask_bits);
+}
+
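cpumask_parse() accepts the hex bitmap format (the form sysfs masks are printed in), while the existing cpulist_parse() below takes range lists like "0-3,8". A minimal usage sketch; the literal "f0" is illustrative:

    cpumask_var_t mask;

    if (!alloc_cpumask_var(&mask, GFP_KERNEL))
            return -ENOMEM;
    if (cpumask_parse("f0", mask) == 0)     /* CPUs 4-7 */
            pr_info("first cpu: %d\n", cpumask_first(mask));
    free_cpumask_var(mask);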
/**
* cpulist_parse - extract a cpumask from a user string of ranges
* @buf: the buffer to extract from
extern void debug_show_all_locks(void);
extern void debug_show_held_locks(struct task_struct *task);
extern void debug_check_no_locks_freed(const void *from, unsigned long len);
-extern void debug_check_no_locks_held(void);
+extern void debug_check_no_locks_held(struct task_struct *task);
#else
static inline void debug_show_all_locks(void)
{
}
static inline void
-debug_check_no_locks_held(void)
+debug_check_no_locks_held(struct task_struct *task)
{
}
#endif
int subsys_system_register(struct bus_type *subsys,
const struct attribute_group **groups);
+int subsys_virtual_register(struct bus_type *subsys,
+ const struct attribute_group **groups);
/**
* struct class - device classes
u32 ue_count; /* Uncorrectable Errors for this csrow */
u32 ce_count; /* Correctable Errors for this csrow */
- u32 nr_pages; /* combined pages count of all channels */
struct mem_ctl_info *mci; /* the parent */
* see memory sticks ("dimms"), and the ones that see memory ranks.
* All old memory controllers enumerate memories per rank, but most
* of the recent drivers enumerate memories per DIMM, instead.
- * When the memory controller is per rank, mem_is_per_rank is true.
+ * When the memory controller is per rank, csbased is true.
*/
unsigned n_layers;
struct edac_mc_layer *layers;
- bool mem_is_per_rank;
+ bool csbased;
/*
* DIMM info. Will eventually remove the entire csrows_info some day
u32 fake_inject_ue;
u16 fake_inject_count;
#endif
- __u8 csbased : 1, /* csrow-based memory controller */
- __resv : 7;
};
#endif
#ifndef FREEZER_H_INCLUDED
#define FREEZER_H_INCLUDED
-#include <linux/debug_locks.h>
#include <linux/sched.h>
#include <linux/wait.h>
#include <linux/atomic.h>
static inline bool try_to_freeze(void)
{
- if (!(current->flags & PF_NOFREEZE))
- debug_check_no_locks_held();
might_sleep();
if (likely(!freezing(current)))
return false;
spin_unlock(&fs->lock);
}
+extern bool current_chrooted(void);
+
#endif /* _LINUX_FS_STRUCT_H */
*/
#include <asm/types.h>
+#include <linux/compiler.h>
/* 2^31 + 2^29 - 2^25 + 2^22 - 2^19 - 2^16 + 1 */
#define GOLDEN_RATIO_PRIME_32 0x9e370001UL
#error Wordsize not 32 or 64
#endif
-static inline u64 hash_64(u64 val, unsigned int bits)
+static __always_inline u64 hash_64(u64 val, unsigned int bits)
{
u64 hash = val;
#ifdef CONFIG_IRQ_WORK
bool irq_work_needs_cpu(void);
#else
-static bool irq_work_needs_cpu(void) { return false; }
+static inline bool irq_work_needs_cpu(void) { return false; }
#endif
#endif /* _LINUX_IRQ_WORK_H */
unsigned long int_sqrt(unsigned long);
extern void bust_spinlocks(int yes);
-extern void wake_up_klogd(void);
extern int oops_in_progress; /* If set, an oops, panic(), BUG() or die() is in progress */
extern int panic_timeout;
extern int panic_on_oops;
MAX77693_MUIC_REG_END,
};
+/* MAX77693 INTMASK1~2 Register */
+#define INTMASK1_ADC1K_SHIFT 3
+#define INTMASK1_ADCERR_SHIFT 2
+#define INTMASK1_ADCLOW_SHIFT 1
+#define INTMASK1_ADC_SHIFT 0
+#define INTMASK1_ADC1K_MASK (1 << INTMASK1_ADC1K_SHIFT)
+#define INTMASK1_ADCERR_MASK (1 << INTMASK1_ADCERR_SHIFT)
+#define INTMASK1_ADCLOW_MASK (1 << INTMASK1_ADCLOW_SHIFT)
+#define INTMASK1_ADC_MASK (1 << INTMASK1_ADC_SHIFT)
+
+#define INTMASK2_VIDRM_SHIFT 5
+#define INTMASK2_VBVOLT_SHIFT 4
+#define INTMASK2_DXOVP_SHIFT 3
+#define INTMASK2_DCDTMR_SHIFT 2
+#define INTMASK2_CHGDETRUN_SHIFT 1
+#define INTMASK2_CHGTYP_SHIFT 0
+#define INTMASK2_VIDRM_MASK (1 << INTMASK2_VIDRM_SHIFT)
+#define INTMASK2_VBVOLT_MASK (1 << INTMASK2_VBVOLT_SHIFT)
+#define INTMASK2_DXOVP_MASK (1 << INTMASK2_DXOVP_SHIFT)
+#define INTMASK2_DCDTMR_MASK (1 << INTMASK2_DCDTMR_SHIFT)
+#define INTMASK2_CHGDETRUN_MASK (1 << INTMASK2_CHGDETRUN_SHIFT)
+#define INTMASK2_CHGTYP_MASK (1 << INTMASK2_CHGTYP_SHIFT)
+
/* MAX77693 MUIC - STATUS1~3 Register */
#define STATUS1_ADC_SHIFT (0)
#define STATUS1_ADCLOW_SHIFT (5)
#define VM_PFNMAP 0x00000400 /* Page-ranges managed without "struct page", just pure PFN */
#define VM_DENYWRITE 0x00000800 /* ETXTBSY on write attempts.. */
-#define VM_POPULATE 0x00001000
#define VM_LOCKED 0x00002000
#define VM_IO 0x00004000 /* Memory mapped I/O or similar */
{
return _calc_vm_trans(flags, MAP_GROWSDOWN, VM_GROWSDOWN ) |
_calc_vm_trans(flags, MAP_DENYWRITE, VM_DENYWRITE ) |
- ((flags & MAP_LOCKED) ? (VM_LOCKED | VM_POPULATE) : 0) |
- (((flags & (MAP_POPULATE | MAP_NONBLOCK)) == MAP_POPULATE) ?
- VM_POPULATE : 0);
+ _calc_vm_trans(flags, MAP_LOCKED, VM_LOCKED );
}
#endif /* _LINUX_MMAN_H */
return test_bit(ZONE_OOM_LOCKED, &zone->flags);
}
-static inline unsigned zone_end_pfn(const struct zone *zone)
+static inline unsigned long zone_end_pfn(const struct zone *zone)
{
return zone->zone_start_pfn + zone->spanned_pages;
}
#define MNT_INTERNAL 0x4000
+#define MNT_LOCK_READONLY 0x400000
+
struct vfsmount {
struct dentry *mnt_root; /* root of the mounted tree */
struct super_block *mnt_sb; /* pointer to superblock */
* This happens with the Renesas AG-AND chips, possibly others.
*/
#define BBT_AUTO_REFRESH 0x00000080
+/*
+ * Chip requires ready check on read (for auto-incremented sequential read).
+ * True only for small page devices; large page devices do not support
+ * autoincrement.
+ */
+#define NAND_NEED_READRDY 0x00000100
+
/* Chip does not allow subpage writes */
#define NAND_NO_SUBPAGE_WRITE 0x00000200
#define STMLCDIF_18BIT 2 /** pixel data bus to the display is of 18 bit width */
#define STMLCDIF_24BIT 3 /** pixel data bus to the display is of 24 bit width */
-#define FB_SYNC_DATA_ENABLE_HIGH_ACT (1 << 6)
-#define FB_SYNC_DOTCLK_FAILING_ACT (1 << 7) /* falling/negative edge sampling */
+#define MXSFB_SYNC_DATA_ENABLE_HIGH_ACT (1 << 6)
+#define MXSFB_SYNC_DOTCLK_FAILING_ACT (1 << 7) /* falling/negative edge sampling */
struct mxsfb_platform_data {
struct fb_videomode *mode_list;
* allocated. If specified,fb_size must also be specified.
* fb_phys must be unused by Linux.
*/
+ u32 sync; /* sync mask, contains MXSFB specifics not
+ * carried in fb_info->var.sync
+ */
};
#endif /* __LINUX_MXSFB_H */
NVME_LBAF_RP_DEGRADED = 3,
};
+struct nvme_smart_log {
+ __u8 critical_warning;
+ __u8 temperature[2];
+ __u8 avail_spare;
+ __u8 spare_thresh;
+ __u8 percent_used;
+ __u8 rsvd6[26];
+ __u8 data_units_read[16];
+ __u8 data_units_written[16];
+ __u8 host_reads[16];
+ __u8 host_writes[16];
+ __u8 ctrl_busy_time[16];
+ __u8 power_cycles[16];
+ __u8 power_on_hours[16];
+ __u8 unsafe_shutdowns[16];
+ __u8 media_errors[16];
+ __u8 num_err_log_entries[16];
+ __u8 rsvd192[320];
+};
+
+enum {
+ NVME_SMART_CRIT_SPARE = 1 << 0,
+ NVME_SMART_CRIT_TEMPERATURE = 1 << 1,
+ NVME_SMART_CRIT_RELIABILITY = 1 << 2,
+ NVME_SMART_CRIT_MEDIA = 1 << 3,
+ NVME_SMART_CRIT_VOLATILE_MEMORY = 1 << 4,
+};
+
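A minimal sketch of how a consumer might test the critical_warning bits defined above (the helper and its caller are hypothetical, not part of this patch):

    static void nvme_check_smart(const struct nvme_smart_log *log)
    {
            if (log->critical_warning & NVME_SMART_CRIT_TEMPERATURE)
                    pr_warn("nvme: temperature over threshold\n");
            if (log->critical_warning & NVME_SMART_CRIT_SPARE)
                    pr_warn("nvme: available spare below threshold\n");
    }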
struct nvme_lba_range_type {
__u8 type;
__u8 attributes;
extern int dmesg_restrict;
extern int kptr_restrict;
+extern void wake_up_klogd(void);
+
void log_buf_kexec_setup(void);
void __init setup_log_buf(int early);
#else
return false;
}
+static inline void wake_up_klogd(void)
+{
+}
+
static inline void log_buf_kexec_setup(void)
{
}
#define PF_SWAPWRITE 0x00800000 /* Allowed to write to swap */
#define PF_SPREAD_PAGE 0x01000000 /* Spread page cache over cpuset */
#define PF_SPREAD_SLAB 0x02000000 /* Spread some slab caches over cpuset */
-#define PF_THREAD_BOUND 0x04000000 /* Thread bound to specific cpu */
+#define PF_NO_SETAFFINITY 0x04000000 /* Userland is not allowed to meddle with cpus_allowed */
#define PF_MCE_EARLY 0x08000000 /* Early kill for mce process policy */
#define PF_MEMPOLICY 0x10000000 /* Non-default NUMA mempolicy */
#define PF_MUTEX_TESTER 0x20000000 /* Thread belongs to the rt mutex tester */
union {
__u32 mark;
__u32 dropcount;
- __u32 avail_size;
+ __u32 reserved_tailroom;
};
sk_buff_data_t inner_transport_header;
* do not lose pfmemalloc information as the pages would not be
* allocated using __GFP_MEMALLOC.
*/
- if (page->pfmemalloc && !page->mapping)
- skb->pfmemalloc = true;
frag->page.p = page;
frag->page_offset = off;
skb_frag_size_set(frag, size);
+
+ page = compound_head(page);
+ if (page->pfmemalloc && !page->mapping)
+ skb->pfmemalloc = true;
}
/**
*/
static inline int skb_availroom(const struct sk_buff *skb)
{
- return skb_is_nonlinear(skb) ? 0 : skb->avail_size - skb->len;
+ if (skb_is_nonlinear(skb))
+ return 0;
+
+ return skb->end - skb->tail - skb->reserved_tailroom;
}
/**
/* Adding event notification support elements */
#define THERMAL_GENL_FAMILY_NAME "thermal_event"
#define THERMAL_GENL_VERSION 0x01
-#define THERMAL_GENL_MCAST_GROUP_NAME "thermal_mc_group"
+#define THERMAL_GENL_MCAST_GROUP_NAME "thermal_mc_grp"
/* Default Thermal Governor */
#if defined(CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE)
* For encapsulation sockets.
*/
int (*encap_rcv)(struct sock *sk, struct sk_buff *skb);
+ void (*encap_destroy)(struct sock *sk);
};
static inline struct udp_sock *udp_sk(const struct sock *sk)
u16 connected;
};
+extern u8 cdc_ncm_select_altsetting(struct usbnet *dev, struct usb_interface *intf);
extern int cdc_ncm_bind_common(struct usbnet *dev, struct usb_interface *intf, u8 data_altsetting);
extern void cdc_ncm_unbind(struct usbnet *dev, struct usb_interface *intf);
extern struct sk_buff *cdc_ncm_fill_tx_frame(struct cdc_ncm_ctx *ctx, struct sk_buff *skb, __le32 sign);
*/
int (*disable_usb3_lpm_timeout)(struct usb_hcd *,
struct usb_device *, enum usb3_link_state state);
+ int (*find_raw_port_number)(struct usb_hcd *, int);
};
extern int usb_hcd_link_urb_to_ep(struct usb_hcd *hcd, struct urb *urb);
extern int usb_add_hcd(struct usb_hcd *hcd,
unsigned int irqnum, unsigned long irqflags);
extern void usb_remove_hcd(struct usb_hcd *hcd);
+extern int usb_hcd_find_raw_port_number(struct usb_hcd *hcd, int port1);
struct platform_device;
extern void usb_hcd_platform_shutdown(struct platform_device *dev);
* port.
* @flags: usb serial port flags
* @write_wait: a wait_queue_head_t used by the port.
+ * @delta_msr_wait: modem-status-change wait queue
* @work: work queue entry for the line discipline waking up.
* @throttled: nonzero if the read urb is inactive to throttle the device
* @throttle_req: nonzero if the tty wants to throttle us
unsigned long flags;
wait_queue_head_t write_wait;
+ wait_queue_head_t delta_msr_wait;
struct work_struct work;
char throttled;
char throttle_req;
/*-------------------------------------------------------------------------*/
+#if IS_ENABLED(CONFIG_USB_ULPI)
struct usb_phy *otg_ulpi_create(struct usb_phy_io_ops *ops,
unsigned int flags);
+#else
+static inline struct usb_phy *otg_ulpi_create(struct usb_phy_io_ops *ops,
+ unsigned int flags)
+{
+ return NULL;
+}
+#endif
#ifdef CONFIG_USB_ULPI_VIEWPORT
/* access ops for controllers with a viewport register */
kuid_t owner;
kgid_t group;
unsigned int proc_inum;
+ bool may_mount_sysfs;
+ bool may_mount_proc;
};
extern struct user_namespace init_user_ns;
#endif
+void update_mnt_policy(struct user_namespace *userns);
+
#endif /* _LINUX_USER_H */
#include <linux/lockdep.h>
#include <linux/threads.h>
#include <linux/atomic.h>
+#include <linux/cpumask.h>
struct workqueue_struct;
WORK_STRUCT_COLOR_BITS,
/* data contains off-queue information when !WORK_STRUCT_PWQ */
- WORK_OFFQ_FLAG_BASE = WORK_STRUCT_FLAG_BITS,
+ WORK_OFFQ_FLAG_BASE = WORK_STRUCT_COLOR_SHIFT,
WORK_OFFQ_CANCELING = (1 << WORK_OFFQ_FLAG_BASE),
int cpu;
};
+/*
+ * A struct for workqueue attributes. This can be used to change
+ * attributes of an unbound workqueue.
+ *
+ * Unlike other fields, ->no_numa isn't a property of a worker_pool. It
+ * only modifies how apply_workqueue_attrs() selects pools and thus doesn't
+ * participate in pool hash calculations or equality comparisons.
+ */
+struct workqueue_attrs {
+ int nice; /* nice level */
+ cpumask_var_t cpumask; /* allowed CPUs */
+ bool no_numa; /* disable NUMA affinity */
+};
+
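A minimal sketch of driving the attrs API around this struct (the prototypes appear further down), assuming my_wq is an existing WQ_UNBOUND workqueue:

    struct workqueue_attrs *attrs;
    int ret = -ENOMEM;

    attrs = alloc_workqueue_attrs(GFP_KERNEL);
    if (!attrs)
            return ret;
    attrs->nice = -5;                               /* raise priority */
    cpumask_copy(attrs->cpumask, cpumask_of(0));    /* restrict to CPU 0 */
    ret = apply_workqueue_attrs(my_wq, attrs);
    free_workqueue_attrs(attrs);
    return ret;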
static inline struct delayed_work *to_delayed_work(struct work_struct *work)
{
return container_of(work, struct delayed_work, work);
WQ_MEM_RECLAIM = 1 << 3, /* may be used for memory reclaim */
WQ_HIGHPRI = 1 << 4, /* high priority */
WQ_CPU_INTENSIVE = 1 << 5, /* cpu intensive workqueue */
+ WQ_SYSFS = 1 << 6, /* visible in sysfs, see wq_sysfs_register() */
- WQ_DRAINING = 1 << 6, /* internal: workqueue is draining */
- WQ_RESCUER = 1 << 7, /* internal: workqueue has rescuer */
+ __WQ_DRAINING = 1 << 16, /* internal: workqueue is draining */
+ __WQ_ORDERED = 1 << 17, /* internal: workqueue is ordered */
WQ_MAX_ACTIVE = 512, /* I like 512, better ideas? */
WQ_MAX_UNBOUND_PER_CPU = 4, /* 4 * #cpus for unbound wq */
* Pointer to the allocated workqueue on success, %NULL on failure.
*/
#define alloc_ordered_workqueue(fmt, flags, args...) \
- alloc_workqueue(fmt, WQ_UNBOUND | (flags), 1, ##args)
+ alloc_workqueue(fmt, WQ_UNBOUND | __WQ_ORDERED | (flags), 1, ##args)
#define create_workqueue(name) \
alloc_workqueue((name), WQ_MEM_RECLAIM, 1)
extern void destroy_workqueue(struct workqueue_struct *wq);
+struct workqueue_attrs *alloc_workqueue_attrs(gfp_t gfp_mask);
+void free_workqueue_attrs(struct workqueue_attrs *attrs);
+int apply_workqueue_attrs(struct workqueue_struct *wq,
+ const struct workqueue_attrs *attrs);
+
extern bool queue_work_on(int cpu, struct workqueue_struct *wq,
struct work_struct *work);
-extern bool queue_work(struct workqueue_struct *wq, struct work_struct *work);
extern bool queue_delayed_work_on(int cpu, struct workqueue_struct *wq,
struct delayed_work *work, unsigned long delay);
-extern bool queue_delayed_work(struct workqueue_struct *wq,
- struct delayed_work *work, unsigned long delay);
extern bool mod_delayed_work_on(int cpu, struct workqueue_struct *wq,
struct delayed_work *dwork, unsigned long delay);
-extern bool mod_delayed_work(struct workqueue_struct *wq,
- struct delayed_work *dwork, unsigned long delay);
extern void flush_workqueue(struct workqueue_struct *wq);
extern void drain_workqueue(struct workqueue_struct *wq);
extern void flush_scheduled_work(void);
-extern bool schedule_work_on(int cpu, struct work_struct *work);
-extern bool schedule_work(struct work_struct *work);
-extern bool schedule_delayed_work_on(int cpu, struct delayed_work *work,
- unsigned long delay);
-extern bool schedule_delayed_work(struct delayed_work *work,
- unsigned long delay);
extern int schedule_on_each_cpu(work_func_t func);
-extern int keventd_up(void);
int execute_in_process_context(work_func_t fn, struct execute_work *);
extern void workqueue_set_max_active(struct workqueue_struct *wq,
int max_active);
-extern bool workqueue_congested(unsigned int cpu, struct workqueue_struct *wq);
+extern bool current_is_workqueue_rescuer(void);
+extern bool workqueue_congested(int cpu, struct workqueue_struct *wq);
extern unsigned int work_busy(struct work_struct *work);
+/**
+ * queue_work - queue work on a workqueue
+ * @wq: workqueue to use
+ * @work: work to queue
+ *
+ * Returns %false if @work was already on a queue, %true otherwise.
+ *
+ * We queue the work to the CPU on which it was submitted, but if the CPU dies
+ * it can be processed by another CPU.
+ */
+static inline bool queue_work(struct workqueue_struct *wq,
+ struct work_struct *work)
+{
+ return queue_work_on(WORK_CPU_UNBOUND, wq, work);
+}
+
+/**
+ * queue_delayed_work - queue work on a workqueue after delay
+ * @wq: workqueue to use
+ * @dwork: delayable work to queue
+ * @delay: number of jiffies to wait before queueing
+ *
+ * Equivalent to queue_delayed_work_on() but tries to use the local CPU.
+ */
+static inline bool queue_delayed_work(struct workqueue_struct *wq,
+ struct delayed_work *dwork,
+ unsigned long delay)
+{
+ return queue_delayed_work_on(WORK_CPU_UNBOUND, wq, dwork, delay);
+}
+
+/**
+ * mod_delayed_work - modify delay of or queue a delayed work
+ * @wq: workqueue to use
+ * @dwork: work to queue
+ * @delay: number of jiffies to wait before queueing
+ *
+ * mod_delayed_work_on() on local CPU.
+ */
+static inline bool mod_delayed_work(struct workqueue_struct *wq,
+ struct delayed_work *dwork,
+ unsigned long delay)
+{
+ return mod_delayed_work_on(WORK_CPU_UNBOUND, wq, dwork, delay);
+}
+
+/**
+ * schedule_work_on - put work task on a specific cpu
+ * @cpu: cpu to put the work task on
+ * @work: job to be done
+ *
+ * This puts a job on a specific cpu
+ */
+static inline bool schedule_work_on(int cpu, struct work_struct *work)
+{
+ return queue_work_on(cpu, system_wq, work);
+}
+
+/**
+ * schedule_work - put work task in global workqueue
+ * @work: job to be done
+ *
+ * Returns %false if @work was already on the kernel-global workqueue and
+ * %true otherwise.
+ *
+ * This puts a job in the kernel-global workqueue if it was not already
+ * queued and leaves it in the same position on the kernel-global
+ * workqueue otherwise.
+ */
+static inline bool schedule_work(struct work_struct *work)
+{
+ return queue_work(system_wq, work);
+}
+
+/**
+ * schedule_delayed_work_on - queue work in global workqueue on CPU after delay
+ * @cpu: cpu to use
+ * @dwork: job to be done
+ * @delay: number of jiffies to wait
+ *
+ * After waiting for a given time this puts a job in the kernel-global
+ * workqueue on the specified CPU.
+ */
+static inline bool schedule_delayed_work_on(int cpu, struct delayed_work *dwork,
+ unsigned long delay)
+{
+ return queue_delayed_work_on(cpu, system_wq, dwork, delay);
+}
+
+/**
+ * schedule_delayed_work - put work task in global workqueue after delay
+ * @dwork: job to be done
+ * @delay: number of jiffies to wait or 0 for immediate execution
+ *
+ * After waiting for a given time this puts a job in the kernel-global
+ * workqueue.
+ */
+static inline bool schedule_delayed_work(struct delayed_work *dwork,
+ unsigned long delay)
+{
+ return queue_delayed_work(system_wq, dwork, delay);
+}
+
+/**
+ * keventd_up - is workqueue initialized yet?
+ */
+static inline bool keventd_up(void)
+{
+ return system_wq != NULL;
+}
+
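The schedule_*() and queue_*() wrappers above are now inline shims over the *_on() primitives with WORK_CPU_UNBOUND. Typical usage, sketched with a hypothetical handler:

    static void my_handler(struct work_struct *work)
    {
            pr_info("deferred work ran\n");     /* hypothetical action */
    }
    static DECLARE_WORK(my_work, my_handler);

    /* from atomic or latency-sensitive context: */
    schedule_work(&my_work);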
/*
* Like above, but uses del_timer() instead of del_timer_sync(). This means,
* if it returns 0 the timer function may be running and the queueing is in
}
#ifndef CONFIG_SMP
-static inline long work_on_cpu(unsigned int cpu, long (*fn)(void *), void *arg)
+static inline long work_on_cpu(int cpu, long (*fn)(void *), void *arg)
{
return fn(arg);
}
#else
-long work_on_cpu(unsigned int cpu, long (*fn)(void *), void *arg);
+long work_on_cpu(int cpu, long (*fn)(void *), void *arg);
#endif /* CONFIG_SMP */
#ifdef CONFIG_FREEZER
extern void thaw_workqueues(void);
#endif /* CONFIG_FREEZER */
+#ifdef CONFIG_SYSFS
+int workqueue_sysfs_register(struct workqueue_struct *wq);
+#else /* CONFIG_SYSFS */
+static inline int workqueue_sysfs_register(struct workqueue_struct *wq)
+{ return 0; }
+#endif /* CONFIG_SYSFS */
+
#endif
static inline struct neighbour *dst_neigh_lookup(const struct dst_entry *dst, const void *daddr)
{
- return dst->ops->neigh_lookup(dst, NULL, daddr);
+ struct neighbour *n = dst->ops->neigh_lookup(dst, NULL, daddr);
+ return IS_ERR(n) ? NULL : n;
}
static inline struct neighbour *dst_neigh_lookup_skb(const struct dst_entry *dst,
struct sk_buff *skb)
{
- return dst->ops->neigh_lookup(dst, skb, NULL);
+ struct neighbour *n = dst->ops->neigh_lookup(dst, skb, NULL);
+ return IS_ERR(n) ? NULL : n;
}
static inline void dst_link_failure(struct sk_buff *skb)
__be32 ports;
__be16 port16[2];
};
+ u16 thoff;
u8 ip_proto;
};
#define INETFRAGS_HASHSZ 64
+/* averaged:
+ * max_depth = default ipfrag_high_thresh / INETFRAGS_HASHSZ /
+ * rounded up (SKB_TRUELEN(0) + sizeof(struct ipq or
+ * struct frag_queue))
+ */
+#define INETFRAGS_MAXDEPTH 128
+
struct inet_frags {
struct hlist_head hash[INETFRAGS_HASHSZ];
/* This rwlock is a global lock (separate per IPv4, IPv6 and
struct inet_frag_queue *inet_frag_find(struct netns_frags *nf,
struct inet_frags *f, void *key, unsigned int hash)
__releases(&f->lock);
+void inet_frag_maybe_warn_overflow(struct inet_frag_queue *q,
+ const char *prefix);
static inline void inet_frag_put(struct inet_frag_queue *q, struct inet_frags *f)
{
};
#ifdef CONFIG_IP_ROUTE_MULTIPATH
-
#define FIB_RES_NH(res) ((res).fi->fib_nh[(res).nh_sel])
-
-#define FIB_TABLE_HASHSZ 2
-
#else /* CONFIG_IP_ROUTE_MULTIPATH */
-
#define FIB_RES_NH(res) ((res).fi->fib_nh[0])
+#endif /* CONFIG_IP_ROUTE_MULTIPATH */
+#ifdef CONFIG_IP_MULTIPLE_TABLES
#define FIB_TABLE_HASHSZ 256
-
-#endif /* CONFIG_IP_ROUTE_MULTIPATH */
+#else
+#define FIB_TABLE_HASHSZ 2
+#endif
extern __be32 fib_info_update_nh_saddr(struct net *net, struct fib_nh *nh);
int sysctl_sync_retries;
int sysctl_nat_icmp_send;
int sysctl_pmtu_disc;
+ int sysctl_backup_only;
/* ip_vs_lblc */
int sysctl_lblc_expiration;
return ipvs->sysctl_pmtu_disc;
}
+static inline int sysctl_backup_only(struct netns_ipvs *ipvs)
+{
+ return ipvs->sync_state & IP_VS_STATE_BACKUP &&
+ ipvs->sysctl_backup_only;
+}
+
#else
static inline int sysctl_sync_threshold(struct netns_ipvs *ipvs)
return 1;
}
+static inline int sysctl_backup_only(struct netns_ipvs *ipvs)
+{
+ return 0;
+}
+
#endif
/*
{
struct iphdr *iph = ip_hdr(skb);
- if (iph->frag_off & htons(IP_DF))
- iph->id = 0;
- else {
- /* Use inner packet iph-id if possible. */
- if (skb->protocol == htons(ETH_P_IP) && old_iph->id)
- iph->id = old_iph->id;
- else
- __ip_select_ident(iph, dst,
- (skb_shinfo(skb)->gso_segs ?: 1) - 1);
- }
+ /* Use inner packet iph-id if possible. */
+ if (skb->protocol == htons(ETH_P_IP) && old_iph->id)
+ iph->id = old_iph->id;
+ else
+ __ip_select_ident(iph, dst,
+ (skb_shinfo(skb)->gso_segs ?: 1) - 1);
}
#endif
DEFINE_EVENT(writeback_work_class, name, \
TP_PROTO(struct backing_dev_info *bdi, struct wb_writeback_work *work), \
TP_ARGS(bdi, work))
-DEFINE_WRITEBACK_WORK_EVENT(writeback_nothread);
DEFINE_WRITEBACK_WORK_EVENT(writeback_queue);
DEFINE_WRITEBACK_WORK_EVENT(writeback_exec);
DEFINE_WRITEBACK_WORK_EVENT(writeback_start);
DEFINE_WRITEBACK_EVENT(writeback_nowork);
DEFINE_WRITEBACK_EVENT(writeback_wake_background);
-DEFINE_WRITEBACK_EVENT(writeback_wake_thread);
-DEFINE_WRITEBACK_EVENT(writeback_wake_forker_thread);
DEFINE_WRITEBACK_EVENT(writeback_bdi_register);
DEFINE_WRITEBACK_EVENT(writeback_bdi_unregister);
-DEFINE_WRITEBACK_EVENT(writeback_thread_start);
-DEFINE_WRITEBACK_EVENT(writeback_thread_stop);
DECLARE_EVENT_CLASS(wbc_class,
TP_PROTO(struct writeback_control *wbc, struct backing_dev_info *bdi),
PACKET_DIAG_TX_RING,
PACKET_DIAG_FANOUT,
- PACKET_DIAG_MAX,
+ __PACKET_DIAG_MAX,
};
+#define PACKET_DIAG_MAX (__PACKET_DIAG_MAX - 1)
+
struct packet_diag_info {
__u32 pdi_index;
__u32 pdi_version;
UNIX_DIAG_MEMINFO,
UNIX_DIAG_SHUTDOWN,
- UNIX_DIAG_MAX,
+ __UNIX_DIAG_MAX,
};
+#define UNIX_DIAG_MAX (__UNIX_DIAG_MAX - 1)
+
struct unix_diag_vfs {
__u32 udiag_vfs_ino;
__u32 udiag_vfs_dev;
*/
#define ATMEL_LCDC_WIRING_BGR 0
#define ATMEL_LCDC_WIRING_RGB 1
-#define ATMEL_LCDC_WIRING_RGB555 2
/* LCD Controller info data structure, stored in device platform_data */
void (*atmel_lcdfb_power_control)(int on);
struct fb_monspecs *default_monspecs;
u32 pseudo_palette[16];
+ bool have_intensity_bit;
};
#define ATMEL_LCDC_DMABADDR1 0x00
uint8_t _pad3;
} __attribute__((__packed__));
+struct blkif_request_other {
+ uint8_t _pad1;
+ blkif_vdev_t _pad2; /* only for read/write requests */
+#ifdef CONFIG_X86_64
+ uint32_t _pad3; /* offsetof(blkif_req..,u.other.id)==8*/
+#endif
+ uint64_t id; /* private guest value, echoed in resp */
+} __attribute__((__packed__));
+
struct blkif_request {
uint8_t operation; /* BLKIF_OP_??? */
union {
struct blkif_request_rw rw;
struct blkif_request_discard discard;
+ struct blkif_request_other other;
} u;
} __attribute__((__packed__));
#define PHYSDEVOP_pci_device_remove 26
#define PHYSDEVOP_restore_msi_ext 27
+/*
+ * Dom0 should use these two to announce MMIO resources assigned to
+ * MSI-X capable devices won't (prepare) or may (release) change.
+ */
+#define PHYSDEVOP_prepare_msix 30
+#define PHYSDEVOP_release_msix 31
struct physdev_pci_device {
/* IN */
uint16_t seg;
int flags, const char *dev_name,
void *data)
{
- if (!(flags & MS_KERNMOUNT))
- data = current->nsproxy->ipc_ns;
+ if (!(flags & MS_KERNMOUNT)) {
+ struct ipc_namespace *ns = current->nsproxy->ipc_ns;
+ /* Don't allow mounting unless the caller has CAP_SYS_ADMIN
+ * over the ipc namespace.
+ */
+ if (!ns_capable(ns->user_ns, CAP_SYS_ADMIN))
+ return ERR_PTR(-EPERM);
+
+ data = ns;
+ }
return mount_ns(fs_type, flags, data, mqueue_fill_super);
}
fd = error;
}
mutex_unlock(&root->d_inode->i_mutex);
- mnt_drop_write(mnt);
+ if (!ro)
+ mnt_drop_write(mnt);
out_putname:
putname(name);
return fd;
tsk = tsk->group_leader;
/*
- * Workqueue threads may acquire PF_THREAD_BOUND and become
+ * Workqueue threads may acquire PF_NO_SETAFFINITY and become
* trapped in a cpuset, or RT worker may be born in a cgroup
* with no rt_runtime allocated. Just say no.
*/
- if (tsk == kthreadd_task || (tsk->flags & PF_THREAD_BOUND)) {
+ if (tsk == kthreadd_task || (tsk->flags & PF_NO_SETAFFINITY)) {
ret = -EINVAL;
rcu_read_unlock();
goto out_unlock_cgroup;
cgroup_taskset_for_each(task, cgrp, tset) {
/*
- * Kthreads bound to specific cpus cannot be moved to a new
- * cpuset; we cannot change their cpu affinity and
- * isolating such threads by their set of allowed nodes is
- * unnecessary. Thus, cpusets are not applicable for such
- * threads. This prevents checking for success of
- * set_cpus_allowed_ptr() on all attached tasks before
- * cpus_allowed may be changed.
+ * Kthreads which disallow setaffinity shouldn't be moved
+ * to a new cpuset; we don't want to change their cpu
+ * affinity and isolating such threads by their set of
+ * allowed nodes is unnecessary. Thus, cpusets are not
+ * applicable for such threads. This prevents checking for
+ * success of set_cpus_allowed_ptr() on all attached tasks
+ * before cpus_allowed may be changed.
*/
ret = -EINVAL;
- if (task->flags & PF_THREAD_BOUND)
+ if (task->flags & PF_NO_SETAFFINITY)
goto out_unlock;
ret = security_task_setscheduler(task);
if (ret)
if (ctxn < 0)
goto next;
ctx = rcu_dereference(current->perf_event_ctxp[ctxn]);
+ if (ctx)
+ perf_event_task_ctx(ctx, task_event);
}
- if (ctx)
- perf_event_task_ctx(ctx, task_event);
next:
put_cpu_ptr(pmu->pmu_cpu_context);
}
+ if (task_event->task_ctx)
+ perf_event_task_ctx(task_event->task_ctx, task_event);
+
rcu_read_unlock();
}
event->attr.sample_period = NSEC_PER_SEC / freq;
hwc->sample_period = event->attr.sample_period;
local64_set(&hwc->period_left, hwc->sample_period);
+ hwc->last_period = hwc->sample_period;
event->attr.freq = 0;
}
}
/*
* Make sure we are holding no locks:
*/
- debug_check_no_locks_held();
+ debug_check_no_locks_held(tsk);
/*
* We can do this unlocked here. The futex code uses this flag
* just to verify whether the pi state cleanup has been done
{
/* It's safe because the task is inactive. */
do_set_cpus_allowed(p, cpumask_of(cpu));
- p->flags |= PF_THREAD_BOUND;
+ p->flags |= PF_NO_SETAFFINITY;
}
/**
}
EXPORT_SYMBOL_GPL(debug_check_no_locks_freed);
-static void print_held_locks_bug(void)
+static void print_held_locks_bug(struct task_struct *curr)
{
if (!debug_locks_off())
return;
printk("\n");
printk("=====================================\n");
- printk("[ BUG: %s/%d still has locks held! ]\n",
- current->comm, task_pid_nr(current));
+ printk("[ BUG: lock held at task exit time! ]\n");
print_kernel_ident();
printk("-------------------------------------\n");
- lockdep_print_held_locks(current);
+ printk("%s/%d is exiting with locks still held!\n",
+ curr->comm, task_pid_nr(curr));
+ lockdep_print_held_locks(curr);
+
printk("\nstack backtrace:\n");
dump_stack();
}
-void debug_check_no_locks_held(void)
+void debug_check_no_locks_held(struct task_struct *task)
{
- if (unlikely(current->lockdep_depth > 0))
- print_held_locks_bug();
+ if (unlikely(task->lockdep_depth > 0))
+ print_held_locks_bug(task);
}
-EXPORT_SYMBOL_GPL(debug_check_no_locks_held);
void debug_show_all_locks(void)
{
int nr;
int rc;
struct task_struct *task, *me = current;
+ int init_pids = thread_group_leader(me) ? 1 : 2;
/* Don't allow any more processes into the pid namespace */
disable_pid_allocation(pid_ns);
*/
for (;;) {
set_current_state(TASK_UNINTERRUPTIBLE);
- if (pid_ns->nr_hashed == 1)
+ if (pid_ns->nr_hashed == init_pids)
break;
schedule();
}
#define MINIMUM_CONSOLE_LOGLEVEL 1 /* Minimum loglevel we let people use */
#define DEFAULT_CONSOLE_LOGLEVEL 7 /* anything MORE serious than KERN_DEBUG */
-DECLARE_WAIT_QUEUE_HEAD(log_wait);
-
int console_printk[4] = {
DEFAULT_CONSOLE_LOGLEVEL, /* console_loglevel */
DEFAULT_MESSAGE_LOGLEVEL, /* default_message_loglevel */
static DEFINE_RAW_SPINLOCK(logbuf_lock);
#ifdef CONFIG_PRINTK
+DECLARE_WAIT_QUEUE_HEAD(log_wait);
/* the next printk record to read by syslog(READ) or /proc/kmsg */
static u64 syslog_seq;
static u32 syslog_idx;
return console_locked;
}
-/*
- * Delayed printk version, for scheduler-internal messages:
- */
-#define PRINTK_BUF_SIZE 512
-
-#define PRINTK_PENDING_WAKEUP 0x01
-#define PRINTK_PENDING_SCHED 0x02
-
-static DEFINE_PER_CPU(int, printk_pending);
-static DEFINE_PER_CPU(char [PRINTK_BUF_SIZE], printk_sched_buf);
-
-static void wake_up_klogd_work_func(struct irq_work *irq_work)
-{
- int pending = __this_cpu_xchg(printk_pending, 0);
-
- if (pending & PRINTK_PENDING_SCHED) {
- char *buf = __get_cpu_var(printk_sched_buf);
- printk(KERN_WARNING "[sched_delayed] %s", buf);
- }
-
- if (pending & PRINTK_PENDING_WAKEUP)
- wake_up_interruptible(&log_wait);
-}
-
-static DEFINE_PER_CPU(struct irq_work, wake_up_klogd_work) = {
- .func = wake_up_klogd_work_func,
- .flags = IRQ_WORK_LAZY,
-};
-
-void wake_up_klogd(void)
-{
- preempt_disable();
- if (waitqueue_active(&log_wait)) {
- this_cpu_or(printk_pending, PRINTK_PENDING_WAKEUP);
- irq_work_queue(&__get_cpu_var(wake_up_klogd_work));
- }
- preempt_enable();
-}
-
static void console_cont_flush(char *text, size_t size)
{
unsigned long flags;
late_initcall(printk_late_init);
#if defined CONFIG_PRINTK
+/*
+ * Delayed printk version, for scheduler-internal messages:
+ */
+#define PRINTK_BUF_SIZE 512
+
+#define PRINTK_PENDING_WAKEUP 0x01
+#define PRINTK_PENDING_SCHED 0x02
+
+static DEFINE_PER_CPU(int, printk_pending);
+static DEFINE_PER_CPU(char [PRINTK_BUF_SIZE], printk_sched_buf);
+
+static void wake_up_klogd_work_func(struct irq_work *irq_work)
+{
+ int pending = __this_cpu_xchg(printk_pending, 0);
+
+ if (pending & PRINTK_PENDING_SCHED) {
+ char *buf = __get_cpu_var(printk_sched_buf);
+ printk(KERN_WARNING "[sched_delayed] %s", buf);
+ }
+
+ if (pending & PRINTK_PENDING_WAKEUP)
+ wake_up_interruptible(&log_wait);
+}
+
+static DEFINE_PER_CPU(struct irq_work, wake_up_klogd_work) = {
+ .func = wake_up_klogd_work_func,
+ .flags = IRQ_WORK_LAZY,
+};
+
+void wake_up_klogd(void)
+{
+ preempt_disable();
+ if (waitqueue_active(&log_wait)) {
+ this_cpu_or(printk_pending, PRINTK_PENDING_WAKEUP);
+ irq_work_queue(&__get_cpu_var(wake_up_klogd_work));
+ }
+ preempt_enable();
+}
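
Taken together with printk_sched() just below, the relocated block forms the complete deferred path: the message is formatted into the per-CPU printk_sched_buf, PRINTK_PENDING_SCHED is set, and the lazy irq_work emits it later, so nothing holding rq->lock ever touches logbuf_lock. A minimal sketch of a caller, assuming only the printk_sched() interface shown in this hunk (the helper name is hypothetical):

	/*
	 * Sketch: a scheduler-internal diagnostic that must not take
	 * logbuf_lock while rq->lock is held.  printk_sched() stashes
	 * the text in the per-CPU buffer; wake_up_klogd_work_func()
	 * prints it with a "[sched_delayed]" prefix once it is safe.
	 */
	static void report_rt_throttling(int cpu)	/* hypothetical */
	{
		printk_sched("CPU%d: RT throttling activated\n", cpu);
	}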
int printk_sched(const char *fmt, ...)
{
get_task_struct(p);
rcu_read_unlock();
+ if (p->flags & PF_NO_SETAFFINITY) {
+ retval = -EINVAL;
+ goto out_put_task;
+ }
if (!alloc_cpumask_var(&cpus_allowed, GFP_KERNEL)) {
retval = -ENOMEM;
goto out_put_task;
goto out;
}
- if (unlikely((p->flags & PF_THREAD_BOUND) && p != current)) {
- ret = -EINVAL;
- goto out;
- }
-
do_set_cpus_allowed(p, new_mask);
/* Can the task run on the task's current CPU? If so, we're done */
char poweroff_cmd[POWEROFF_CMD_PATH_LEN] = "/sbin/poweroff";
-static int __orderly_poweroff(void)
+static int __orderly_poweroff(bool force)
{
- int argc;
char **argv;
static char *envp[] = {
"HOME=/",
};
int ret;
- argv = argv_split(GFP_ATOMIC, poweroff_cmd, &argc);
- if (argv == NULL) {
+ argv = argv_split(GFP_KERNEL, poweroff_cmd, NULL);
+ if (argv) {
+ ret = call_usermodehelper(argv[0], argv, envp, UMH_WAIT_EXEC);
+ argv_free(argv);
+ } else {
printk(KERN_WARNING "%s failed to allocate memory for \"%s\"\n",
- __func__, poweroff_cmd);
- return -ENOMEM;
+ __func__, poweroff_cmd);
+ ret = -ENOMEM;
}
- ret = call_usermodehelper_fns(argv[0], argv, envp, UMH_WAIT_EXEC,
- NULL, NULL, NULL);
- argv_free(argv);
+ if (ret && force) {
+ printk(KERN_WARNING "Failed to start orderly shutdown: "
+ "forcing the issue\n");
+ /*
+ * I guess this should try to kick off some daemon to sync and
+ * poweroff asap. Or not even bother syncing if we're doing an
+ * emergency shutdown?
+ */
+ emergency_sync();
+ kernel_power_off();
+ }
return ret;
}
+static bool poweroff_force;
+
+static void poweroff_work_func(struct work_struct *work)
+{
+ __orderly_poweroff(poweroff_force);
+}
+
+static DECLARE_WORK(poweroff_work, poweroff_work_func);
+
/**
* orderly_poweroff - Trigger an orderly system poweroff
* @force: force poweroff if command execution fails
*/
int orderly_poweroff(bool force)
{
- int ret = __orderly_poweroff();
-
- if (ret && force) {
- printk(KERN_WARNING "Failed to start orderly shutdown: "
- "forcing the issue\n");
-
- /*
- * I guess this should try to kick off some daemon to sync and
- * poweroff asap. Or not even bother syncing if we're doing an
- * emergency shutdown?
- */
- emergency_sync();
- kernel_power_off();
- }
-
- return ret;
+ if (force) /* do not override the pending "true" */
+ poweroff_force = true;
+ schedule_work(&poweroff_work);
+ return 0;
}
EXPORT_SYMBOL_GPL(orderly_poweroff);
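
With the handler deferred to a work item, orderly_poweroff() itself only latches poweroff_force and schedules poweroff_work, which makes it callable from atomic context while __orderly_poweroff() runs the usermode helper with GFP_KERNEL. A hedged sketch of a typical caller (the trip handler is hypothetical, not part of this patch):

	#include <linux/reboot.h>

	/*
	 * Sketch: a critical-trip style handler.  After this change the
	 * call is safe even from interrupt context, because the usermode
	 * helper and any forced emergency_sync()/kernel_power_off() run
	 * from poweroff_work on the system workqueue.
	 */
	static void critical_overtemp(void)	/* hypothetical */
	{
		orderly_poweroff(true);	/* force if /sbin/poweroff fails */
	}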
*/
int tick_check_broadcast_device(struct clock_event_device *dev)
{
- if ((tick_broadcast_device.evtdev &&
+ if ((dev->features & CLOCK_EVT_FEAT_DUMMY) ||
+ (tick_broadcast_device.evtdev &&
tick_broadcast_device.evtdev->rating >= dev->rating) ||
(dev->features & CLOCK_EVT_FEAT_C3STOP))
return 0;
continue;
}
- hlist_del(&entry->node);
- call_rcu(&entry->rcu, ftrace_free_entry_rcu);
+ hlist_del_rcu(&entry->node);
+ call_rcu_sched(&entry->rcu, ftrace_free_entry_rcu);
}
}
__disable_ftrace_function_probe();
void
update_max_tr(struct trace_array *tr, struct task_struct *tsk, int cpu)
{
- struct ring_buffer *buf = tr->buffer;
+ struct ring_buffer *buf;
if (trace_stop_count)
return;
arch_spin_lock(&ftrace_max_lock);
+ buf = tr->buffer;
tr->buffer = max_tr.buffer;
max_tr.buffer = buf;
return -EINVAL;
}
-static void set_tracer_flags(unsigned int mask, int enabled)
+/* Some tracers require overwrite to stay enabled */
+int trace_keep_overwrite(struct tracer *tracer, u32 mask, int set)
+{
+ if (tracer->enabled && (mask & TRACE_ITER_OVERWRITE) && !set)
+ return -1;
+
+ return 0;
+}
+
+int set_tracer_flag(unsigned int mask, int enabled)
{
/* do nothing if flag is already set */
if (!!(trace_flags & mask) == !!enabled)
- return;
+ return 0;
+
+ /* Give the tracer a chance to approve the change */
+ if (current_trace->flag_changed)
+ if (current_trace->flag_changed(current_trace, mask, !!enabled))
+ return -EINVAL;
if (enabled)
trace_flags |= mask;
if (mask == TRACE_ITER_RECORD_CMD)
trace_event_enable_cmd_record(enabled);
- if (mask == TRACE_ITER_OVERWRITE)
+ if (mask == TRACE_ITER_OVERWRITE) {
ring_buffer_change_overwrite(global_trace.buffer, enabled);
+#ifdef CONFIG_TRACER_MAX_TRACE
+ ring_buffer_change_overwrite(max_tr.buffer, enabled);
+#endif
+ }
if (mask == TRACE_ITER_PRINTK)
trace_printk_start_stop_comm(enabled);
+
+ return 0;
}
static int trace_set_options(char *option)
{
char *cmp;
int neg = 0;
- int ret = 0;
+ int ret = -ENODEV;
int i;
cmp = strstrip(option);
cmp += 2;
}
+ mutex_lock(&trace_types_lock);
+
for (i = 0; trace_options[i]; i++) {
if (strcmp(cmp, trace_options[i]) == 0) {
- set_tracer_flags(1 << i, !neg);
+ ret = set_tracer_flag(1 << i, !neg);
break;
}
}
/* If no option could be set, test the specific tracer options */
- if (!trace_options[i]) {
- mutex_lock(&trace_types_lock);
+ if (!trace_options[i])
ret = set_tracer_option(current_trace, cmp, neg);
- mutex_unlock(&trace_types_lock);
- }
+
+ mutex_unlock(&trace_types_lock);
return ret;
}
size_t cnt, loff_t *ppos)
{
char buf[64];
+ int ret;
if (cnt >= sizeof(buf))
return -EINVAL;
buf[cnt] = 0;
- trace_set_options(buf);
+ ret = trace_set_options(buf);
+ if (ret < 0)
+ return ret;
*ppos += cnt;
goto out;
trace_branch_disable();
+
+ current_trace->enabled = false;
+
if (current_trace->reset)
current_trace->reset(tr);
}
current_trace = t;
+ current_trace->enabled = true;
trace_branch_enable(tr);
out:
mutex_unlock(&trace_types_lock);
if (val != 0 && val != 1)
return -EINVAL;
- set_tracer_flags(1 << index, val);
+
+ mutex_lock(&trace_types_lock);
+ ret = set_tracer_flag(1 << index, val);
+ mutex_unlock(&trace_types_lock);
+
+ if (ret < 0)
+ return ret;
*ppos += cnt;
enum print_line_t (*print_line)(struct trace_iterator *iter);
/* If you handled the flag setting, return 0 */
int (*set_flag)(u32 old_flags, u32 bit, int set);
+ /* Return 0 if OK with change, else return non-zero */
+ int (*flag_changed)(struct tracer *tracer,
+ u32 mask, int set);
struct tracer *next;
struct tracer_flags *flags;
bool print_max;
bool use_max_tr;
bool allocated_snapshot;
+ bool enabled;
};
void trace_printk_init_buffers(void);
void trace_printk_start_comm(void);
+int trace_keep_overwrite(struct tracer *tracer, u32 mask, int set);
+int set_tracer_flag(unsigned int mask, int enabled);
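
The two declarations above pair up: set_tracer_flag() consults current_trace->flag_changed before touching trace_flags, and a tracer that must keep the ring buffer in overwrite mode points that hook at trace_keep_overwrite(), which vetoes clearing TRACE_ITER_OVERWRITE while tracer->enabled is set. A minimal sketch of the wiring, mirroring the irqsoff and wakeup hunks later in this series (the tracer itself is hypothetical):

	/*
	 * Sketch: a latency-style tracer that refuses to run with
	 * overwrite disabled.  Returning non-zero from ->flag_changed
	 * makes set_tracer_flag() fail with -EINVAL.
	 */
	static struct tracer example_tracer __read_mostly = {	/* hypothetical */
		.name		= "example",
		.flag_changed	= trace_keep_overwrite,
	};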
#undef FTRACE_ENTRY
#define FTRACE_ENTRY(call, struct_name, id, tstruct, print, filter) \
static int trace_type __read_mostly;
-static int save_lat_flag;
+static int save_flags;
static void stop_irqsoff_tracer(struct trace_array *tr, int graph);
static int start_irqsoff_tracer(struct trace_array *tr, int graph);
static void __irqsoff_tracer_init(struct trace_array *tr)
{
- save_lat_flag = trace_flags & TRACE_ITER_LATENCY_FMT;
- trace_flags |= TRACE_ITER_LATENCY_FMT;
+ save_flags = trace_flags;
+
+ /* non-overwrite mode screws up the latency tracers */
+ set_tracer_flag(TRACE_ITER_OVERWRITE, 1);
+ set_tracer_flag(TRACE_ITER_LATENCY_FMT, 1);
tracing_max_latency = 0;
irqsoff_trace = tr;
static void irqsoff_tracer_reset(struct trace_array *tr)
{
+ int lat_flag = save_flags & TRACE_ITER_LATENCY_FMT;
+ int overwrite_flag = save_flags & TRACE_ITER_OVERWRITE;
+
stop_irqsoff_tracer(tr, is_graph());
- if (!save_lat_flag)
- trace_flags &= ~TRACE_ITER_LATENCY_FMT;
+ set_tracer_flag(TRACE_ITER_LATENCY_FMT, lat_flag);
+ set_tracer_flag(TRACE_ITER_OVERWRITE, overwrite_flag);
}
static void irqsoff_tracer_start(struct trace_array *tr)
.print_line = irqsoff_print_line,
.flags = &tracer_flags,
.set_flag = irqsoff_set_flag,
+ .flag_changed = trace_keep_overwrite,
#ifdef CONFIG_FTRACE_SELFTEST
.selftest = trace_selftest_startup_irqsoff,
#endif
.print_line = irqsoff_print_line,
.flags = &tracer_flags,
.set_flag = irqsoff_set_flag,
+ .flag_changed = trace_keep_overwrite,
#ifdef CONFIG_FTRACE_SELFTEST
.selftest = trace_selftest_startup_preemptoff,
#endif
.print_line = irqsoff_print_line,
.flags = &tracer_flags,
.set_flag = irqsoff_set_flag,
+ .flag_changed = trace_keep_overwrite,
#ifdef CONFIG_FTRACE_SELFTEST
.selftest = trace_selftest_startup_preemptirqsoff,
#endif
static int wakeup_graph_entry(struct ftrace_graph_ent *trace);
static void wakeup_graph_return(struct ftrace_graph_ret *trace);
-static int save_lat_flag;
+static int save_flags;
#define TRACE_DISPLAY_GRAPH 1
static int __wakeup_tracer_init(struct trace_array *tr)
{
- save_lat_flag = trace_flags & TRACE_ITER_LATENCY_FMT;
- trace_flags |= TRACE_ITER_LATENCY_FMT;
+ save_flags = trace_flags;
+
+ /* non-overwrite mode screws up the latency tracers */
+ set_tracer_flag(TRACE_ITER_OVERWRITE, 1);
+ set_tracer_flag(TRACE_ITER_LATENCY_FMT, 1);
tracing_max_latency = 0;
wakeup_trace = tr;
static void wakeup_tracer_reset(struct trace_array *tr)
{
+ int lat_flag = save_flags & TRACE_ITER_LATENCY_FMT;
+ int overwrite_flag = save_flags & TRACE_ITER_OVERWRITE;
+
stop_wakeup_tracer(tr);
/* make sure we put back any tasks we are tracing */
wakeup_reset(tr);
- if (!save_lat_flag)
- trace_flags &= ~TRACE_ITER_LATENCY_FMT;
+ set_tracer_flag(TRACE_ITER_LATENCY_FMT, lat_flag);
+ set_tracer_flag(TRACE_ITER_OVERWRITE, overwrite_flag);
}
static void wakeup_tracer_start(struct trace_array *tr)
.print_line = wakeup_print_line,
.flags = &tracer_flags,
.set_flag = wakeup_set_flag,
+ .flag_changed = trace_keep_overwrite,
#ifdef CONFIG_FTRACE_SELFTEST
.selftest = trace_selftest_startup_wakeup,
#endif
.print_line = wakeup_print_line,
.flags = &tracer_flags,
.set_flag = wakeup_set_flag,
+ .flag_changed = trace_keep_overwrite,
#ifdef CONFIG_FTRACE_SELFTEST
.selftest = trace_selftest_startup_wakeup,
#endif
.owner = GLOBAL_ROOT_UID,
.group = GLOBAL_ROOT_GID,
.proc_inum = PROC_USER_INIT_INO,
+ .may_mount_sysfs = true,
+ .may_mount_proc = true,
};
EXPORT_SYMBOL_GPL(init_user_ns);
kgid_t group = new->egid;
int ret;
+ /*
+ * Verify that we cannot violate the file access policy specified
+ * by the root directory, by verifying that the root directory is
+ * at the root of the mount namespace, which allows all files to
+ * be accessed.
+ */
+ if (current_chrooted())
+ return -EPERM;
+
/* The creator needs a mapping in the parent user namespace
* or else we won't be able to reasonably tell userspace who
* created a user_namespace.
set_cred_user_ns(new, ns);
+ update_mnt_policy(ns);
+
return 0;
}
#include <linux/debug_locks.h>
#include <linux/lockdep.h>
#include <linux/idr.h>
+#include <linux/jhash.h>
#include <linux/hashtable.h>
+#include <linux/rculist.h>
+#include <linux/nodemask.h>
+#include <linux/moduleparam.h>
#include "workqueue_internal.h"
* %WORKER_UNBOUND set and concurrency management disabled, and may
* be executing on any CPU. The pool behaves as an unbound one.
*
- * Note that DISASSOCIATED can be flipped only while holding
- * assoc_mutex to avoid changing binding state while
+ * Note that DISASSOCIATED should be flipped only while holding
+ * manager_mutex to avoid changing binding state while
* create_worker() is in progress.
*/
POOL_MANAGE_WORKERS = 1 << 0, /* need to manage workers */
- POOL_MANAGING_WORKERS = 1 << 1, /* managing workers */
POOL_DISASSOCIATED = 1 << 2, /* cpu can't serve workers */
POOL_FREEZING = 1 << 3, /* freeze in progress */
WORKER_PREP = 1 << 3, /* preparing to run works */
WORKER_CPU_INTENSIVE = 1 << 6, /* cpu intensive */
WORKER_UNBOUND = 1 << 7, /* worker is unbound */
+ WORKER_REBOUND = 1 << 8, /* worker was rebound */
- WORKER_NOT_RUNNING = WORKER_PREP | WORKER_UNBOUND |
- WORKER_CPU_INTENSIVE,
+ WORKER_NOT_RUNNING = WORKER_PREP | WORKER_CPU_INTENSIVE |
+ WORKER_UNBOUND | WORKER_REBOUND,
NR_STD_WORKER_POOLS = 2, /* # standard pools per cpu */
+ UNBOUND_POOL_HASH_ORDER = 6, /* hashed by pool->attrs */
BUSY_WORKER_HASH_ORDER = 6, /* 64 pointers */
MAX_IDLE_WORKERS_RATIO = 4, /* 1/4 of busy can be idle */
*/
RESCUER_NICE_LEVEL = -20,
HIGHPRI_NICE_LEVEL = -20,
+
+ WQ_NAME_LEN = 24,
};
/*
* cpu or grabbing pool->lock is enough for read access. If
* POOL_DISASSOCIATED is set, it's identical to L.
*
- * F: wq->flush_mutex protected.
+ * MG: pool->manager_mutex and pool->lock protected. Writes require both
+ * locks. Reads can happen under either lock.
+ *
+ * PL: wq_pool_mutex protected.
+ *
+ * PR: wq_pool_mutex protected for writes. Sched-RCU protected for reads.
+ *
+ * WQ: wq->mutex protected.
*
- * W: workqueue_lock protected.
+ * WR: wq->mutex protected for writes. Sched-RCU protected for reads.
+ *
+ * MD: wq_mayday_lock protected.
*/
/* struct worker is defined in workqueue_internal.h */
struct worker_pool {
spinlock_t lock; /* the pool lock */
- unsigned int cpu; /* I: the associated cpu */
+ int cpu; /* I: the associated cpu */
+ int node; /* I: the associated node ID */
int id; /* I: pool ID */
unsigned int flags; /* X: flags */
struct timer_list idle_timer; /* L: worker idle timeout */
struct timer_list mayday_timer; /* L: SOS timer for workers */
- /* workers are chained either in busy_hash or idle_list */
+ /* a worker is either on busy_hash or idle_list, or the manager */
DECLARE_HASHTABLE(busy_hash, BUSY_WORKER_HASH_ORDER);
/* L: hash of busy workers */
- struct mutex assoc_mutex; /* protect POOL_DISASSOCIATED */
- struct ida worker_ida; /* L: for worker IDs */
+ /* see manage_workers() for details on the two manager mutexes */
+ struct mutex manager_arb; /* manager arbitration */
+ struct mutex manager_mutex; /* manager exclusion */
+ struct idr worker_idr; /* MG: worker IDs and iteration */
+
+ struct workqueue_attrs *attrs; /* I: worker attributes */
+ struct hlist_node hash_node; /* PL: unbound_pool_hash node */
+ int refcnt; /* PL: refcnt for unbound pools */
/*
* The current concurrency level. As it's likely to be accessed
* cacheline.
*/
atomic_t nr_running ____cacheline_aligned_in_smp;
+
+ /*
+ * Destruction of pool is sched-RCU protected to allow dereferences
+ * from get_work_pool().
+ */
+ struct rcu_head rcu;
} ____cacheline_aligned_in_smp;
/*
struct workqueue_struct *wq; /* I: the owning workqueue */
int work_color; /* L: current color */
int flush_color; /* L: flushing color */
+ int refcnt; /* L: reference count */
int nr_in_flight[WORK_NR_COLORS];
/* L: nr of in_flight works */
int nr_active; /* L: nr of active works */
int max_active; /* L: max active works */
struct list_head delayed_works; /* L: delayed works */
-};
+ struct list_head pwqs_node; /* WR: node on wq->pwqs */
+ struct list_head mayday_node; /* MD: node on wq->maydays */
+
+ /*
+ * Release of unbound pwq is punted to system_wq. See put_pwq()
+ * and pwq_unbound_release_workfn() for details. pool_workqueue
+ * itself is also sched-RCU protected so that the first pwq can be
+ * determined without grabbing wq->mutex.
+ */
+ struct work_struct unbound_release_work;
+ struct rcu_head rcu;
+} __aligned(1 << WORK_STRUCT_FLAG_BITS);
/*
* Structure used to wait for workqueue flush.
*/
struct wq_flusher {
- struct list_head list; /* F: list of flushers */
- int flush_color; /* F: flush color waiting for */
+ struct list_head list; /* WQ: list of flushers */
+ int flush_color; /* WQ: flush color waiting for */
struct completion done; /* flush completion */
};
-/*
- * All cpumasks are assumed to be always set on UP and thus can't be
- * used to determine whether there's something to be done.
- */
-#ifdef CONFIG_SMP
-typedef cpumask_var_t mayday_mask_t;
-#define mayday_test_and_set_cpu(cpu, mask) \
- cpumask_test_and_set_cpu((cpu), (mask))
-#define mayday_clear_cpu(cpu, mask) cpumask_clear_cpu((cpu), (mask))
-#define for_each_mayday_cpu(cpu, mask) for_each_cpu((cpu), (mask))
-#define alloc_mayday_mask(maskp, gfp) zalloc_cpumask_var((maskp), (gfp))
-#define free_mayday_mask(mask) free_cpumask_var((mask))
-#else
-typedef unsigned long mayday_mask_t;
-#define mayday_test_and_set_cpu(cpu, mask) test_and_set_bit(0, &(mask))
-#define mayday_clear_cpu(cpu, mask) clear_bit(0, &(mask))
-#define for_each_mayday_cpu(cpu, mask) if ((cpu) = 0, (mask))
-#define alloc_mayday_mask(maskp, gfp) true
-#define free_mayday_mask(mask) do { } while (0)
-#endif
+struct wq_device;
/*
- * The externally visible workqueue abstraction is an array of
- * per-CPU workqueues:
+ * The externally visible workqueue. It relays the issued work items to
+ * the appropriate worker_pool through its pool_workqueues.
*/
struct workqueue_struct {
- unsigned int flags; /* W: WQ_* flags */
- union {
- struct pool_workqueue __percpu *pcpu;
- struct pool_workqueue *single;
- unsigned long v;
- } pool_wq; /* I: pwq's */
- struct list_head list; /* W: list of all workqueues */
-
- struct mutex flush_mutex; /* protects wq flushing */
- int work_color; /* F: current work color */
- int flush_color; /* F: current flush color */
+ struct list_head pwqs; /* WR: all pwqs of this wq */
+ struct list_head list; /* PL: list of all workqueues */
+
+ struct mutex mutex; /* protects this wq */
+ int work_color; /* WQ: current work color */
+ int flush_color; /* WQ: current flush color */
atomic_t nr_pwqs_to_flush; /* flush in progress */
- struct wq_flusher *first_flusher; /* F: first flusher */
- struct list_head flusher_queue; /* F: flush waiters */
- struct list_head flusher_overflow; /* F: flush overflow list */
+ struct wq_flusher *first_flusher; /* WQ: first flusher */
+ struct list_head flusher_queue; /* WQ: flush waiters */
+ struct list_head flusher_overflow; /* WQ: flush overflow list */
- mayday_mask_t mayday_mask; /* cpus requesting rescue */
+ struct list_head maydays; /* MD: pwqs requesting rescue */
struct worker *rescuer; /* I: rescue worker */
- int nr_drainers; /* W: drain in progress */
- int saved_max_active; /* W: saved pwq max_active */
+ int nr_drainers; /* WQ: drain in progress */
+ int saved_max_active; /* WQ: saved pwq max_active */
+
+ struct workqueue_attrs *unbound_attrs; /* WQ: only for unbound wqs */
+ struct pool_workqueue *dfl_pwq; /* WQ: only for unbound wqs */
+
+#ifdef CONFIG_SYSFS
+ struct wq_device *wq_dev; /* I: for sysfs interface */
+#endif
#ifdef CONFIG_LOCKDEP
struct lockdep_map lockdep_map;
#endif
- char name[]; /* I: workqueue name */
+ char name[WQ_NAME_LEN]; /* I: workqueue name */
+
+ /* hot fields used during command issue, aligned to cacheline */
+ unsigned int flags ____cacheline_aligned; /* WQ: WQ_* flags */
+ struct pool_workqueue __percpu *cpu_pwqs; /* I: per-cpu pwqs */
+ struct pool_workqueue __rcu *numa_pwq_tbl[]; /* WR: unbound pwqs indexed by node */
};
+static struct kmem_cache *pwq_cache;
+
+static int wq_numa_tbl_len; /* highest possible NUMA node id + 1 */
+static cpumask_var_t *wq_numa_possible_cpumask;
+ /* possible CPUs of each node */
+
+static bool wq_disable_numa;
+module_param_named(disable_numa, wq_disable_numa, bool, 0444);
+
+static bool wq_numa_enabled; /* unbound NUMA affinity enabled */
+
+/* buf for wq_update_unbound_numa_attrs(), protected by CPU hotplug exclusion */
+static struct workqueue_attrs *wq_update_unbound_numa_attrs_buf;
+
+static DEFINE_MUTEX(wq_pool_mutex); /* protects pools and workqueues list */
+static DEFINE_SPINLOCK(wq_mayday_lock); /* protects wq->maydays list */
+
+static LIST_HEAD(workqueues); /* PL: list of all workqueues */
+static bool workqueue_freezing; /* PL: have wqs started freezing? */
+
+/* the per-cpu worker pools */
+static DEFINE_PER_CPU_SHARED_ALIGNED(struct worker_pool [NR_STD_WORKER_POOLS],
+ cpu_worker_pools);
+
+static DEFINE_IDR(worker_pool_idr); /* PR: idr of all pools */
+
+/* PL: hash of all unbound pools keyed by pool->attrs */
+static DEFINE_HASHTABLE(unbound_pool_hash, UNBOUND_POOL_HASH_ORDER);
+
+/* I: attributes used when instantiating standard unbound pools on demand */
+static struct workqueue_attrs *unbound_std_wq_attrs[NR_STD_WORKER_POOLS];
+
struct workqueue_struct *system_wq __read_mostly;
EXPORT_SYMBOL_GPL(system_wq);
struct workqueue_struct *system_highpri_wq __read_mostly;
struct workqueue_struct *system_freezable_wq __read_mostly;
EXPORT_SYMBOL_GPL(system_freezable_wq);
+static int worker_thread(void *__worker);
+static void copy_workqueue_attrs(struct workqueue_attrs *to,
+ const struct workqueue_attrs *from);
+
#define CREATE_TRACE_POINTS
#include <trace/events/workqueue.h>
-#define for_each_std_worker_pool(pool, cpu) \
- for ((pool) = &std_worker_pools(cpu)[0]; \
- (pool) < &std_worker_pools(cpu)[NR_STD_WORKER_POOLS]; (pool)++)
+#define assert_rcu_or_pool_mutex() \
+ rcu_lockdep_assert(rcu_read_lock_sched_held() || \
+ lockdep_is_held(&wq_pool_mutex), \
+ "sched RCU or wq_pool_mutex should be held")
-#define for_each_busy_worker(worker, i, pool) \
- hash_for_each(pool->busy_hash, i, worker, hentry)
+#define assert_rcu_or_wq_mutex(wq) \
+ rcu_lockdep_assert(rcu_read_lock_sched_held() || \
+ lockdep_is_held(&wq->mutex), \
+ "sched RCU or wq->mutex should be held")
-static inline int __next_wq_cpu(int cpu, const struct cpumask *mask,
- unsigned int sw)
-{
- if (cpu < nr_cpu_ids) {
- if (sw & 1) {
- cpu = cpumask_next(cpu, mask);
- if (cpu < nr_cpu_ids)
- return cpu;
- }
- if (sw & 2)
- return WORK_CPU_UNBOUND;
- }
- return WORK_CPU_END;
-}
+#ifdef CONFIG_LOCKDEP
+#define assert_manager_or_pool_lock(pool) \
+ WARN_ONCE(debug_locks && \
+ !lockdep_is_held(&(pool)->manager_mutex) && \
+ !lockdep_is_held(&(pool)->lock), \
+ "pool->manager_mutex or ->lock should be held")
+#else
+#define assert_manager_or_pool_lock(pool) do { } while (0)
+#endif
-static inline int __next_pwq_cpu(int cpu, const struct cpumask *mask,
- struct workqueue_struct *wq)
-{
- return __next_wq_cpu(cpu, mask, !(wq->flags & WQ_UNBOUND) ? 1 : 2);
-}
+#define for_each_cpu_worker_pool(pool, cpu) \
+ for ((pool) = &per_cpu(cpu_worker_pools, cpu)[0]; \
+ (pool) < &per_cpu(cpu_worker_pools, cpu)[NR_STD_WORKER_POOLS]; \
+ (pool)++)
-/*
- * CPU iterators
+/**
+ * for_each_pool - iterate through all worker_pools in the system
+ * @pool: iteration cursor
+ * @pi: integer used for iteration
*
- * An extra cpu number is defined using an invalid cpu number
- * (WORK_CPU_UNBOUND) to host workqueues which are not bound to any
- * specific CPU. The following iterators are similar to for_each_*_cpu()
- * iterators but also considers the unbound CPU.
+ * This must be called either with wq_pool_mutex held or sched RCU read
+ * locked. If the pool needs to be used beyond the locking in effect, the
+ * caller is responsible for guaranteeing that the pool stays online.
*
- * for_each_wq_cpu() : possible CPUs + WORK_CPU_UNBOUND
- * for_each_online_wq_cpu() : online CPUs + WORK_CPU_UNBOUND
- * for_each_pwq_cpu() : possible CPUs for bound workqueues,
- * WORK_CPU_UNBOUND for unbound workqueues
+ * The if/else clause exists only for the lockdep assertion and can be
+ * ignored.
*/
-#define for_each_wq_cpu(cpu) \
- for ((cpu) = __next_wq_cpu(-1, cpu_possible_mask, 3); \
- (cpu) < WORK_CPU_END; \
- (cpu) = __next_wq_cpu((cpu), cpu_possible_mask, 3))
+#define for_each_pool(pool, pi) \
+ idr_for_each_entry(&worker_pool_idr, pool, pi) \
+ if (({ assert_rcu_or_pool_mutex(); false; })) { } \
+ else
-#define for_each_online_wq_cpu(cpu) \
- for ((cpu) = __next_wq_cpu(-1, cpu_online_mask, 3); \
- (cpu) < WORK_CPU_END; \
- (cpu) = __next_wq_cpu((cpu), cpu_online_mask, 3))
+/**
+ * for_each_pool_worker - iterate through all workers of a worker_pool
+ * @worker: iteration cursor
+ * @wi: integer used for iteration
+ * @pool: worker_pool to iterate workers of
+ *
+ * This must be called with either @pool->manager_mutex or ->lock held.
+ *
+ * The if/else clause exists only for the lockdep assertion and can be
+ * ignored.
+ */
+#define for_each_pool_worker(worker, wi, pool) \
+ idr_for_each_entry(&(pool)->worker_idr, (worker), (wi)) \
+ if (({ assert_manager_or_pool_lock((pool)); false; })) { } \
+ else
-#define for_each_pwq_cpu(cpu, wq) \
- for ((cpu) = __next_pwq_cpu(-1, cpu_possible_mask, (wq)); \
- (cpu) < WORK_CPU_END; \
- (cpu) = __next_pwq_cpu((cpu), cpu_possible_mask, (wq)))
+/**
+ * for_each_pwq - iterate through all pool_workqueues of the specified workqueue
+ * @pwq: iteration cursor
+ * @wq: the target workqueue
+ *
+ * This must be called either with wq->mutex held or sched RCU read locked.
+ * If the pwq needs to be used beyond the locking in effect, the caller is
+ * responsible for guaranteeing that the pwq stays online.
+ *
+ * The if/else clause exists only for the lockdep assertion and can be
+ * ignored.
+ */
+#define for_each_pwq(pwq, wq) \
+ list_for_each_entry_rcu((pwq), &(wq)->pwqs, pwqs_node) \
+ if (({ assert_rcu_or_wq_mutex(wq); false; })) { } \
+ else
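
Because wq->pwqs is now an RCU-managed list, read-side walks need no lock at all; the if/else clause only smuggles in the lockdep assertion. A sketch of the intended read pattern under sched-RCU (the helper is hypothetical):

	/*
	 * Sketch: inspecting a workqueue's pool_workqueues without
	 * wq->mutex.  Sched-RCU keeps every pwq on the list alive for
	 * the duration of the critical section.
	 */
	static bool wq_has_active_work(struct workqueue_struct *wq)	/* hypothetical */
	{
		struct pool_workqueue *pwq;
		bool active = false;

		rcu_read_lock_sched();
		for_each_pwq(pwq, wq)
			if (pwq->nr_active) {
				active = true;
				break;
			}
		rcu_read_unlock_sched();

		return active;
	}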
#ifdef CONFIG_DEBUG_OBJECTS_WORK
static inline void debug_work_deactivate(struct work_struct *work) { }
#endif
-/* Serializes the accesses to the list of workqueues. */
-static DEFINE_SPINLOCK(workqueue_lock);
-static LIST_HEAD(workqueues);
-static bool workqueue_freezing; /* W: have wqs started freezing? */
-
-/*
- * The CPU and unbound standard worker pools. The unbound ones have
- * POOL_DISASSOCIATED set, and their workers have WORKER_UNBOUND set.
- */
-static DEFINE_PER_CPU_SHARED_ALIGNED(struct worker_pool [NR_STD_WORKER_POOLS],
- cpu_std_worker_pools);
-static struct worker_pool unbound_std_worker_pools[NR_STD_WORKER_POOLS];
-
-/* idr of all pools */
-static DEFINE_MUTEX(worker_pool_idr_mutex);
-static DEFINE_IDR(worker_pool_idr);
-
-static int worker_thread(void *__worker);
-
-static struct worker_pool *std_worker_pools(int cpu)
-{
- if (cpu != WORK_CPU_UNBOUND)
- return per_cpu(cpu_std_worker_pools, cpu);
- else
- return unbound_std_worker_pools;
-}
-
-static int std_worker_pool_pri(struct worker_pool *pool)
-{
- return pool - std_worker_pools(pool->cpu);
-}
-
/* allocate ID and assign it to @pool */
static int worker_pool_assign_id(struct worker_pool *pool)
{
int ret;
- mutex_lock(&worker_pool_idr_mutex);
+ lockdep_assert_held(&wq_pool_mutex);
+
ret = idr_alloc(&worker_pool_idr, pool, 0, 0, GFP_KERNEL);
- if (ret >= 0)
+ if (ret >= 0) {
pool->id = ret;
- mutex_unlock(&worker_pool_idr_mutex);
-
- return ret < 0 ? ret : 0;
+ return 0;
+ }
+ return ret;
}
-/*
- * Lookup worker_pool by id. The idr currently is built during boot and
- * never modified. Don't worry about locking for now.
+/**
+ * unbound_pwq_by_node - return the unbound pool_workqueue for the given node
+ * @wq: the target workqueue
+ * @node: the node ID
+ *
+ * This must be called either with wq->mutex held or sched RCU read locked.
+ * If the pwq needs to be used beyond the locking in effect, the caller is
+ * responsible for guaranteeing that the pwq stays online.
*/
-static struct worker_pool *worker_pool_by_id(int pool_id)
-{
- return idr_find(&worker_pool_idr, pool_id);
-}
-
-static struct worker_pool *get_std_worker_pool(int cpu, bool highpri)
-{
- struct worker_pool *pools = std_worker_pools(cpu);
-
- return &pools[highpri];
-}
-
-static struct pool_workqueue *get_pwq(unsigned int cpu,
- struct workqueue_struct *wq)
+static struct pool_workqueue *unbound_pwq_by_node(struct workqueue_struct *wq,
+ int node)
{
- if (!(wq->flags & WQ_UNBOUND)) {
- if (likely(cpu < nr_cpu_ids))
- return per_cpu_ptr(wq->pool_wq.pcpu, cpu);
- } else if (likely(cpu == WORK_CPU_UNBOUND))
- return wq->pool_wq.single;
- return NULL;
+ assert_rcu_or_wq_mutex(wq);
+ return rcu_dereference_raw(wq->numa_pwq_tbl[node]);
}
static unsigned int work_color_to_flags(int color)
static inline void set_work_data(struct work_struct *work, unsigned long data,
unsigned long flags)
{
- BUG_ON(!work_pending(work));
+ WARN_ON_ONCE(!work_pending(work));
atomic_long_set(&work->data, data | flags | work_static(work));
}
* @work: the work item of interest
*
* Return the worker_pool @work was last associated with. %NULL if none.
+ *
+ * Pools are created and destroyed under wq_pool_mutex, and read access
+ * is allowed under the sched-RCU read lock. As such, this function
+ * should be called under wq_pool_mutex or with preemption disabled.
+ *
+ * All fields of the returned pool are accessible as long as the above
+ * mentioned locking is in effect. If the returned pool needs to be used
+ * beyond the critical section, the caller is responsible for ensuring the
+ * returned pool is and stays online.
*/
static struct worker_pool *get_work_pool(struct work_struct *work)
{
unsigned long data = atomic_long_read(&work->data);
- struct worker_pool *pool;
int pool_id;
+ assert_rcu_or_pool_mutex();
+
if (data & WORK_STRUCT_PWQ)
return ((struct pool_workqueue *)
(data & WORK_STRUCT_WQ_DATA_MASK))->pool;
if (pool_id == WORK_OFFQ_POOL_NONE)
return NULL;
- pool = worker_pool_by_id(pool_id);
- WARN_ON_ONCE(!pool);
- return pool;
+ return idr_find(&worker_pool_idr, pool_id);
}
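
Dropping the WARN_ON_ONCE matters because pools can now be destroyed: idr_find() may legitimately return NULL once an unbound pool is gone. Per the locking rules spelled out above, callers bracket the lookup in a sched-RCU section and test the result; a hedged sketch (the helper name is hypothetical):

	/*
	 * Sketch of the documented contract: the returned pool is only
	 * guaranteed to stay around until rcu_read_unlock_sched(), so
	 * copy out what you need or pin the pool before leaving.
	 */
	static int work_last_pool_id(struct work_struct *work)	/* hypothetical */
	{
		struct worker_pool *pool;
		int id = -1;

		rcu_read_lock_sched();
		pool = get_work_pool(work);
		if (pool)
			id = pool->id;
		rcu_read_unlock_sched();

		return id;
	}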
/**
/* Do we have too many workers and should some go away? */
static bool too_many_workers(struct worker_pool *pool)
{
- bool managing = pool->flags & POOL_MANAGING_WORKERS;
+ bool managing = mutex_is_locked(&pool->manager_arb);
int nr_idle = pool->nr_idle + managing; /* manager is considered idle */
int nr_busy = pool->nr_workers - nr_idle;
* CONTEXT:
* spin_lock_irq(rq->lock)
*/
-void wq_worker_waking_up(struct task_struct *task, unsigned int cpu)
+void wq_worker_waking_up(struct task_struct *task, int cpu)
{
struct worker *worker = kthread_data(task);
* RETURNS:
* Worker task on @cpu to wake up, %NULL if none.
*/
-struct task_struct *wq_worker_sleeping(struct task_struct *task,
- unsigned int cpu)
+struct task_struct *wq_worker_sleeping(struct task_struct *task, int cpu)
{
struct worker *worker = kthread_data(task), *to_wakeup = NULL;
struct worker_pool *pool;
pool = worker->pool;
/* this can only happen on the local cpu */
- BUG_ON(cpu != raw_smp_processor_id());
+ if (WARN_ON_ONCE(cpu != raw_smp_processor_id()))
+ return NULL;
/*
* The counterpart of the following dec_and_test, implied mb,
* recycled work item as currently executing and make it wait until the
* current execution finishes, introducing an unwanted dependency.
*
- * This function checks the work item address, work function and workqueue
- * to avoid false positives. Note that this isn't complete as one may
- * construct a work function which can introduce dependency onto itself
- * through a recycled work item. Well, if somebody wants to shoot oneself
- * in the foot that badly, there's only so much we can do, and if such
- * deadlock actually occurs, it should be easy to locate the culprit work
- * function.
+ * This function checks the work item address and work function to avoid
+ * false positives. Note that this isn't complete as one may construct a
+ * work function which can introduce a dependency on itself through a
+ * recycled work item. Well, if somebody wants to shoot themselves in the
+ * foot that badly, there's only so much we can do, and if such a deadlock
+ * actually occurs, it should be easy to locate the culprit work function.
*
* CONTEXT:
* spin_lock_irq(pool->lock).
*nextp = n;
}
+/**
+ * get_pwq - get an extra reference on the specified pool_workqueue
+ * @pwq: pool_workqueue to get
+ *
+ * Obtain an extra reference on @pwq. The caller should guarantee that
+ * @pwq has positive refcnt and be holding the matching pool->lock.
+ */
+static void get_pwq(struct pool_workqueue *pwq)
+{
+ lockdep_assert_held(&pwq->pool->lock);
+ WARN_ON_ONCE(pwq->refcnt <= 0);
+ pwq->refcnt++;
+}
+
+/**
+ * put_pwq - put a pool_workqueue reference
+ * @pwq: pool_workqueue to put
+ *
+ * Drop a reference of @pwq. If its refcnt reaches zero, schedule its
+ * destruction. The caller should be holding the matching pool->lock.
+ */
+static void put_pwq(struct pool_workqueue *pwq)
+{
+ lockdep_assert_held(&pwq->pool->lock);
+ if (likely(--pwq->refcnt))
+ return;
+ if (WARN_ON_ONCE(!(pwq->wq->flags & WQ_UNBOUND)))
+ return;
+ /*
+ * @pwq can't be released under pool->lock, bounce to
+ * pwq_unbound_release_workfn(). This never recurses on the same
+ * pool->lock as this path is taken only for unbound workqueues and
+ * the release work item is scheduled on a per-cpu workqueue. To
+ * avoid a lockdep warning, unbound pool->locks are given a lockdep
+ * subclass of 1 in get_unbound_pool().
+ */
+ schedule_work(&pwq->unbound_release_work);
+}
+
+/**
+ * put_pwq_unlocked - put_pwq() with surrounding pool lock/unlock
+ * @pwq: pool_workqueue to put (can be %NULL)
+ *
+ * put_pwq() with locking. This function also allows %NULL @pwq.
+ */
+static void put_pwq_unlocked(struct pool_workqueue *pwq)
+{
+ if (pwq) {
+ /*
+ * As both pwqs and pools are sched-RCU protected, the
+ * following lock operations are safe.
+ */
+ spin_lock_irq(&pwq->pool->lock);
+ put_pwq(pwq);
+ spin_unlock_irq(&pwq->pool->lock);
+ }
+}
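
These helpers pair with the queueing path further down: insert_work() takes one reference per queued item and pwq_dec_nr_in_flight() drops it when the item retires, so an unbound pwq can only reach refcnt zero once it is empty. A compressed sketch of one reference cycle under the stated pool->lock rule (the wrapper is hypothetical and purely illustrative):

	/*
	 * Sketch: both helpers require the matching pool->lock, which
	 * is exactly why the release path above has to bounce through
	 * put_pwq_unlocked() or the unbound_release_work item.
	 */
	static void pwq_ref_cycle(struct pool_workqueue *pwq)	/* hypothetical */
	{
		spin_lock_irq(&pwq->pool->lock);
		get_pwq(pwq);	/* as insert_work() does when queueing */
		put_pwq(pwq);	/* as pwq_dec_nr_in_flight() does on retire */
		spin_unlock_irq(&pwq->pool->lock);
	}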
+
static void pwq_activate_delayed_work(struct work_struct *work)
{
struct pool_workqueue *pwq = get_work_pwq(work);
*/
static void pwq_dec_nr_in_flight(struct pool_workqueue *pwq, int color)
{
- /* ignore uncolored works */
+ /* uncolored work items don't participate in flushing or nr_active */
if (color == WORK_NO_COLOR)
- return;
+ goto out_put;
pwq->nr_in_flight[color]--;
/* is flush in progress and are we at the flushing tip? */
if (likely(pwq->flush_color != color))
- return;
+ goto out_put;
/* are there still in-flight works? */
if (pwq->nr_in_flight[color])
- return;
+ goto out_put;
/* this pwq is done, clear flush_color */
pwq->flush_color = -1;
*/
if (atomic_dec_and_test(&pwq->wq->nr_pwqs_to_flush))
complete(&pwq->wq->first_flusher->done);
+out_put:
+ put_pwq(pwq);
}
/**
/* we own @work, set data and link */
set_work_pwq(work, pwq, extra_flags);
list_add_tail(&work->entry, head);
+ get_pwq(pwq);
/*
- * Ensure either worker_sched_deactivated() sees the above
- * list_add_tail() or we see zero nr_running to avoid workers
- * lying around lazily while there are works to be processed.
+ * Ensure either wq_worker_sleeping() sees the above
+ * list_add_tail() or we see zero nr_running to avoid workers lying
+ * around lazily while there are works to be processed.
*/
smp_mb();
return worker && worker->current_pwq->wq == wq;
}
-static void __queue_work(unsigned int cpu, struct workqueue_struct *wq,
+static void __queue_work(int cpu, struct workqueue_struct *wq,
struct work_struct *work)
{
struct pool_workqueue *pwq;
+ struct worker_pool *last_pool;
struct list_head *worklist;
unsigned int work_flags;
unsigned int req_cpu = cpu;
debug_work_activate(work);
/* if dying, only works from the same workqueue are allowed */
- if (unlikely(wq->flags & WQ_DRAINING) &&
+ if (unlikely(wq->flags & __WQ_DRAINING) &&
WARN_ON_ONCE(!is_chained_work(wq)))
return;
+retry:
+ if (req_cpu == WORK_CPU_UNBOUND)
+ cpu = raw_smp_processor_id();
- /* determine the pwq to use */
- if (!(wq->flags & WQ_UNBOUND)) {
- struct worker_pool *last_pool;
-
- if (cpu == WORK_CPU_UNBOUND)
- cpu = raw_smp_processor_id();
-
- /*
- * It's multi cpu. If @work was previously on a different
- * cpu, it might still be running there, in which case the
- * work needs to be queued on that cpu to guarantee
- * non-reentrancy.
- */
- pwq = get_pwq(cpu, wq);
- last_pool = get_work_pool(work);
+ /* pwq which will be used unless @work is executing elsewhere */
+ if (!(wq->flags & WQ_UNBOUND))
+ pwq = per_cpu_ptr(wq->cpu_pwqs, cpu);
+ else
+ pwq = unbound_pwq_by_node(wq, cpu_to_node(cpu));
- if (last_pool && last_pool != pwq->pool) {
- struct worker *worker;
+ /*
+ * If @work was previously on a different pool, it might still be
+ * running there, in which case the work needs to be queued on that
+ * pool to guarantee non-reentrancy.
+ */
+ last_pool = get_work_pool(work);
+ if (last_pool && last_pool != pwq->pool) {
+ struct worker *worker;
- spin_lock(&last_pool->lock);
+ spin_lock(&last_pool->lock);
- worker = find_worker_executing_work(last_pool, work);
+ worker = find_worker_executing_work(last_pool, work);
- if (worker && worker->current_pwq->wq == wq) {
- pwq = get_pwq(last_pool->cpu, wq);
- } else {
- /* meh... not running there, queue here */
- spin_unlock(&last_pool->lock);
- spin_lock(&pwq->pool->lock);
- }
+ if (worker && worker->current_pwq->wq == wq) {
+ pwq = worker->current_pwq;
} else {
+ /* meh... not running there, queue here */
+ spin_unlock(&last_pool->lock);
spin_lock(&pwq->pool->lock);
}
} else {
- pwq = get_pwq(WORK_CPU_UNBOUND, wq);
spin_lock(&pwq->pool->lock);
}
+ /*
+ * pwq is determined and locked. For unbound pools, we could have
+ * raced with pwq release and it could already be dead. If its
+ * refcnt is zero, repeat pwq selection. Note that a pwq never dies
+ * without another pwq replacing it in the numa_pwq_tbl, or while
+ * work items are executing on it, so the retry is guaranteed to
+ * make forward progress.
+ */
+ if (unlikely(!pwq->refcnt)) {
+ if (wq->flags & WQ_UNBOUND) {
+ spin_unlock(&pwq->pool->lock);
+ cpu_relax();
+ goto retry;
+ }
+ /* oops */
+ WARN_ONCE(true, "workqueue: per-cpu pwq for %s on cpu%d has 0 refcnt",
+ wq->name, cpu);
+ }
+
/* pwq determined, queue */
trace_workqueue_queue_work(req_cpu, pwq, work);
}
EXPORT_SYMBOL_GPL(queue_work_on);
-/**
- * queue_work - queue work on a workqueue
- * @wq: workqueue to use
- * @work: work to queue
- *
- * Returns %false if @work was already on a queue, %true otherwise.
- *
- * We queue the work to the CPU on which it was submitted, but if the CPU dies
- * it can be processed by another CPU.
- */
-bool queue_work(struct workqueue_struct *wq, struct work_struct *work)
-{
- return queue_work_on(WORK_CPU_UNBOUND, wq, work);
-}
-EXPORT_SYMBOL_GPL(queue_work);
-
void delayed_work_timer_fn(unsigned long __data)
{
struct delayed_work *dwork = (struct delayed_work *)__data;
}
EXPORT_SYMBOL_GPL(queue_delayed_work_on);
-/**
- * queue_delayed_work - queue work on a workqueue after delay
- * @wq: workqueue to use
- * @dwork: delayable work to queue
- * @delay: number of jiffies to wait before queueing
- *
- * Equivalent to queue_delayed_work_on() but tries to use the local CPU.
- */
-bool queue_delayed_work(struct workqueue_struct *wq,
- struct delayed_work *dwork, unsigned long delay)
-{
- return queue_delayed_work_on(WORK_CPU_UNBOUND, wq, dwork, delay);
-}
-EXPORT_SYMBOL_GPL(queue_delayed_work);
-
/**
* mod_delayed_work_on - modify delay of or queue a delayed work on specific CPU
* @cpu: CPU number to execute work on
}
EXPORT_SYMBOL_GPL(mod_delayed_work_on);
-/**
- * mod_delayed_work - modify delay of or queue a delayed work
- * @wq: workqueue to use
- * @dwork: work to queue
- * @delay: number of jiffies to wait before queueing
- *
- * mod_delayed_work_on() on local CPU.
- */
-bool mod_delayed_work(struct workqueue_struct *wq, struct delayed_work *dwork,
- unsigned long delay)
-{
- return mod_delayed_work_on(WORK_CPU_UNBOUND, wq, dwork, delay);
-}
-EXPORT_SYMBOL_GPL(mod_delayed_work);
-
/**
* worker_enter_idle - enter idle state
* @worker: worker which is entering idle state
{
struct worker_pool *pool = worker->pool;
- BUG_ON(worker->flags & WORKER_IDLE);
- BUG_ON(!list_empty(&worker->entry) &&
- (worker->hentry.next || worker->hentry.pprev));
+ if (WARN_ON_ONCE(worker->flags & WORKER_IDLE) ||
+ WARN_ON_ONCE(!list_empty(&worker->entry) &&
+ (worker->hentry.next || worker->hentry.pprev)))
+ return;
/* can't use worker_set_flags(), also called from start_worker() */
worker->flags |= WORKER_IDLE;
{
struct worker_pool *pool = worker->pool;
- BUG_ON(!(worker->flags & WORKER_IDLE));
+ if (WARN_ON_ONCE(!(worker->flags & WORKER_IDLE)))
+ return;
worker_clr_flags(worker, WORKER_IDLE);
pool->nr_idle--;
list_del_init(&worker->entry);
}
/**
- * worker_maybe_bind_and_lock - bind worker to its cpu if possible and lock pool
- * @worker: self
+ * worker_maybe_bind_and_lock - try to bind %current to worker_pool and lock it
+ * @pool: target worker_pool
+ *
+ * Bind %current to the cpu of @pool if it is associated and lock @pool.
*
* Works which are scheduled while the cpu is online must at least be
* scheduled to a worker which is bound to the cpu so that if they are
* flushed from cpu callbacks while cpu is going down, they are
* guaranteed to execute on the cpu.
*
- * This function is to be used by rogue workers and rescuers to bind
+ * This function is to be used by unbound workers and rescuers to bind
* themselves to the target cpu and may race with cpu going down or
* coming online. kthread_bind() can't be used because it may put the
* worker to already dead cpu and set_cpus_allowed_ptr() can't be used
* %true if the associated pool is online (@worker is successfully
* bound), %false if offline.
*/
-static bool worker_maybe_bind_and_lock(struct worker *worker)
+static bool worker_maybe_bind_and_lock(struct worker_pool *pool)
__acquires(&pool->lock)
{
- struct worker_pool *pool = worker->pool;
- struct task_struct *task = worker->task;
-
while (true) {
/*
* The following call may fail, succeed or succeed
* against POOL_DISASSOCIATED.
*/
if (!(pool->flags & POOL_DISASSOCIATED))
- set_cpus_allowed_ptr(task, get_cpu_mask(pool->cpu));
+ set_cpus_allowed_ptr(current, pool->attrs->cpumask);
spin_lock_irq(&pool->lock);
if (pool->flags & POOL_DISASSOCIATED)
return false;
- if (task_cpu(task) == pool->cpu &&
- cpumask_equal(&current->cpus_allowed,
- get_cpu_mask(pool->cpu)))
+ if (task_cpu(current) == pool->cpu &&
+ cpumask_equal(&current->cpus_allowed, pool->attrs->cpumask))
return true;
spin_unlock_irq(&pool->lock);
}
}
-/*
- * Rebind an idle @worker to its CPU. worker_thread() will test
- * list_empty(@worker->entry) before leaving idle and call this function.
- */
-static void idle_worker_rebind(struct worker *worker)
-{
- /* CPU may go down again inbetween, clear UNBOUND only on success */
- if (worker_maybe_bind_and_lock(worker))
- worker_clr_flags(worker, WORKER_UNBOUND);
-
- /* rebind complete, become available again */
- list_add(&worker->entry, &worker->pool->idle_list);
- spin_unlock_irq(&worker->pool->lock);
-}
-
-/*
- * Function for @worker->rebind.work used to rebind unbound busy workers to
- * the associated cpu which is coming back online. This is scheduled by
- * cpu up but can race with other cpu hotplug operations and may be
- * executed twice without intervening cpu down.
- */
-static void busy_worker_rebind_fn(struct work_struct *work)
-{
- struct worker *worker = container_of(work, struct worker, rebind_work);
-
- if (worker_maybe_bind_and_lock(worker))
- worker_clr_flags(worker, WORKER_UNBOUND);
-
- spin_unlock_irq(&worker->pool->lock);
-}
-
-/**
- * rebind_workers - rebind all workers of a pool to the associated CPU
- * @pool: pool of interest
- *
- * @pool->cpu is coming online. Rebind all workers to the CPU. Rebinding
- * is different for idle and busy ones.
- *
- * Idle ones will be removed from the idle_list and woken up. They will
- * add themselves back after completing rebind. This ensures that the
- * idle_list doesn't contain any unbound workers when re-bound busy workers
- * try to perform local wake-ups for concurrency management.
- *
- * Busy workers can rebind after they finish their current work items.
- * Queueing the rebind work item at the head of the scheduled list is
- * enough. Note that nr_running will be properly bumped as busy workers
- * rebind.
- *
- * On return, all non-manager workers are scheduled for rebind - see
- * manage_workers() for the manager special case. Any idle worker
- * including the manager will not appear on @idle_list until rebind is
- * complete, making local wake-ups safe.
- */
-static void rebind_workers(struct worker_pool *pool)
-{
- struct worker *worker, *n;
- int i;
-
- lockdep_assert_held(&pool->assoc_mutex);
- lockdep_assert_held(&pool->lock);
-
- /* dequeue and kick idle ones */
- list_for_each_entry_safe(worker, n, &pool->idle_list, entry) {
- /*
- * idle workers should be off @pool->idle_list until rebind
- * is complete to avoid receiving premature local wake-ups.
- */
- list_del_init(&worker->entry);
-
- /*
- * worker_thread() will see the above dequeuing and call
- * idle_worker_rebind().
- */
- wake_up_process(worker->task);
- }
-
- /* rebind busy workers */
- for_each_busy_worker(worker, i, pool) {
- struct work_struct *rebind_work = &worker->rebind_work;
- struct workqueue_struct *wq;
-
- if (test_and_set_bit(WORK_STRUCT_PENDING_BIT,
- work_data_bits(rebind_work)))
- continue;
-
- debug_work_activate(rebind_work);
-
- /*
- * wq doesn't really matter but let's keep @worker->pool
- * and @pwq->pool consistent for sanity.
- */
- if (std_worker_pool_pri(worker->pool))
- wq = system_highpri_wq;
- else
- wq = system_wq;
-
- insert_work(get_pwq(pool->cpu, wq), rebind_work,
- worker->scheduled.next,
- work_color_to_flags(WORK_NO_COLOR));
- }
-}
-
static struct worker *alloc_worker(void)
{
struct worker *worker;
if (worker) {
INIT_LIST_HEAD(&worker->entry);
INIT_LIST_HEAD(&worker->scheduled);
- INIT_WORK(&worker->rebind_work, busy_worker_rebind_fn);
/* on creation a worker is in !idle && prep state */
worker->flags = WORKER_PREP;
}
*/
static struct worker *create_worker(struct worker_pool *pool)
{
- const char *pri = std_worker_pool_pri(pool) ? "H" : "";
struct worker *worker = NULL;
int id = -1;
+ char id_buf[16];
+ lockdep_assert_held(&pool->manager_mutex);
+
+ /*
+ * ID is needed to determine kthread name. Allocate ID first
+ * without installing the pointer.
+ */
+ idr_preload(GFP_KERNEL);
spin_lock_irq(&pool->lock);
- while (ida_get_new(&pool->worker_ida, &id)) {
- spin_unlock_irq(&pool->lock);
- if (!ida_pre_get(&pool->worker_ida, GFP_KERNEL))
- goto fail;
- spin_lock_irq(&pool->lock);
- }
+
+ id = idr_alloc(&pool->worker_idr, NULL, 0, 0, GFP_NOWAIT);
+
spin_unlock_irq(&pool->lock);
+ idr_preload_end();
+ if (id < 0)
+ goto fail;
worker = alloc_worker();
if (!worker)
worker->pool = pool;
worker->id = id;
- if (pool->cpu != WORK_CPU_UNBOUND)
- worker->task = kthread_create_on_node(worker_thread,
- worker, cpu_to_node(pool->cpu),
- "kworker/%u:%d%s", pool->cpu, id, pri);
+ if (pool->cpu >= 0)
+ snprintf(id_buf, sizeof(id_buf), "%d:%d%s", pool->cpu, id,
+ pool->attrs->nice < 0 ? "H" : "");
else
- worker->task = kthread_create(worker_thread, worker,
- "kworker/u:%d%s", id, pri);
+ snprintf(id_buf, sizeof(id_buf), "u%d:%d", pool->id, id);
+
+ worker->task = kthread_create_on_node(worker_thread, worker, pool->node,
+ "kworker/%s", id_buf);
if (IS_ERR(worker->task))
goto fail;
- if (std_worker_pool_pri(pool))
- set_user_nice(worker->task, HIGHPRI_NICE_LEVEL);
+ /*
+ * set_cpus_allowed_ptr() will fail if the cpumask doesn't have any
+ * online CPUs. It'll be re-applied when any of the CPUs come up.
+ */
+ set_user_nice(worker->task, pool->attrs->nice);
+ set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask);
+
+ /* prevent userland from meddling with cpumask of workqueue workers */
+ worker->task->flags |= PF_NO_SETAFFINITY;
/*
- * Determine CPU binding of the new worker depending on
- * %POOL_DISASSOCIATED. The caller is responsible for ensuring the
- * flag remains stable across this function. See the comments
- * above the flag definition for details.
- *
- * As an unbound worker may later become a regular one if CPU comes
- * online, make sure every worker has %PF_THREAD_BOUND set.
+ * The caller is responsible for ensuring %POOL_DISASSOCIATED
+ * remains stable across this function. See the comments above the
+ * flag definition for details.
*/
- if (!(pool->flags & POOL_DISASSOCIATED)) {
- kthread_bind(worker->task, pool->cpu);
- } else {
- worker->task->flags |= PF_THREAD_BOUND;
+ if (pool->flags & POOL_DISASSOCIATED)
worker->flags |= WORKER_UNBOUND;
- }
+
+ /* successful, commit the pointer to idr */
+ spin_lock_irq(&pool->lock);
+ idr_replace(&pool->worker_idr, worker, worker->id);
+ spin_unlock_irq(&pool->lock);
return worker;
+
fail:
if (id >= 0) {
spin_lock_irq(&pool->lock);
- ida_remove(&pool->worker_ida, id);
+ idr_remove(&pool->worker_idr, id);
spin_unlock_irq(&pool->lock);
}
kfree(worker);
}
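
The ID dance above is the stock idr_preload() idiom: preload while still preemptible, allocate with GFP_NOWAIT under pool->lock, and idr_replace() the real pointer only once the worker can no longer fail, so readers never observe a half-built worker. Reduced to its skeleton for any spinlock-protected idr (a sketch; the names are hypothetical):

	#include <linux/idr.h>

	/* Sketch: two-phase allocate-then-publish, as in create_worker(). */
	static int publish_object(struct idr *idr, spinlock_t *lock, void *obj)
	{
		int id;

		idr_preload(GFP_KERNEL);	/* fill the per-cpu preload cache */
		spin_lock_irq(lock);
		id = idr_alloc(idr, NULL, 0, 0, GFP_NOWAIT);
		spin_unlock_irq(lock);
		idr_preload_end();
		if (id < 0)
			return id;

		/* ... construct obj; on failure, idr_remove(idr, id) ... */

		spin_lock_irq(lock);
		idr_replace(idr, obj, id);	/* commit the pointer */
		spin_unlock_irq(lock);

		return id;
	}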
/**
- * destroy_worker - destroy a workqueue worker
- * @worker: worker to be destroyed
- *
- * Destroy @worker and adjust @pool stats accordingly.
+ * create_and_start_worker - create and start a worker for a pool
+ * @pool: the target pool
*
- * CONTEXT:
- * spin_lock_irq(pool->lock) which is released and regrabbed.
+ * Grab the managership of @pool and create and start a new worker for it.
*/
-static void destroy_worker(struct worker *worker)
+static int create_and_start_worker(struct worker_pool *pool)
{
- struct worker_pool *pool = worker->pool;
- int id = worker->id;
+ struct worker *worker;
- /* sanity check frenzy */
- BUG_ON(worker->current_work);
- BUG_ON(!list_empty(&worker->scheduled));
+ mutex_lock(&pool->manager_mutex);
- if (worker->flags & WORKER_STARTED)
- pool->nr_workers--;
+ worker = create_worker(pool);
+ if (worker) {
+ spin_lock_irq(&pool->lock);
+ start_worker(worker);
+ spin_unlock_irq(&pool->lock);
+ }
+
+ mutex_unlock(&pool->manager_mutex);
+
+ return worker ? 0 : -ENOMEM;
+}
+
+/**
+ * destroy_worker - destroy a workqueue worker
+ * @worker: worker to be destroyed
+ *
+ * Destroy @worker and adjust @pool stats accordingly.
+ *
+ * CONTEXT:
+ * spin_lock_irq(pool->lock) which is released and regrabbed.
+ */
+static void destroy_worker(struct worker *worker)
+{
+ struct worker_pool *pool = worker->pool;
+
+ lockdep_assert_held(&pool->manager_mutex);
+ lockdep_assert_held(&pool->lock);
+
+ /* sanity check frenzy */
+ if (WARN_ON(worker->current_work) ||
+ WARN_ON(!list_empty(&worker->scheduled)))
+ return;
+
+ if (worker->flags & WORKER_STARTED)
+ pool->nr_workers--;
if (worker->flags & WORKER_IDLE)
pool->nr_idle--;
list_del_init(&worker->entry);
worker->flags |= WORKER_DIE;
+ idr_remove(&pool->worker_idr, worker->id);
+
spin_unlock_irq(&pool->lock);
kthread_stop(worker->task);
kfree(worker);
spin_lock_irq(&pool->lock);
- ida_remove(&pool->worker_ida, id);
}
static void idle_worker_timeout(unsigned long __pool)
spin_unlock_irq(&pool->lock);
}
-static bool send_mayday(struct work_struct *work)
+static void send_mayday(struct work_struct *work)
{
struct pool_workqueue *pwq = get_work_pwq(work);
struct workqueue_struct *wq = pwq->wq;
- unsigned int cpu;
- if (!(wq->flags & WQ_RESCUER))
- return false;
+ lockdep_assert_held(&wq_mayday_lock);
+
+ if (!wq->rescuer)
+ return;
/* mayday mayday mayday */
- cpu = pwq->pool->cpu;
- /* WORK_CPU_UNBOUND can't be set in cpumask, use cpu 0 instead */
- if (cpu == WORK_CPU_UNBOUND)
- cpu = 0;
- if (!mayday_test_and_set_cpu(cpu, wq->mayday_mask))
+ if (list_empty(&pwq->mayday_node)) {
+ list_add_tail(&pwq->mayday_node, &wq->maydays);
wake_up_process(wq->rescuer->task);
- return true;
+ }
}
static void pool_mayday_timeout(unsigned long __pool)
struct worker_pool *pool = (void *)__pool;
struct work_struct *work;
- spin_lock_irq(&pool->lock);
+ spin_lock_irq(&wq_mayday_lock); /* for wq->maydays */
+ spin_lock(&pool->lock);
if (need_to_create_worker(pool)) {
/*
send_mayday(work);
}
- spin_unlock_irq(&pool->lock);
+ spin_unlock(&pool->lock);
+ spin_unlock_irq(&wq_mayday_lock);
mod_timer(&pool->mayday_timer, jiffies + MAYDAY_INTERVAL);
}
* sent to all rescuers with works scheduled on @pool to resolve
* possible allocation deadlock.
*
- * On return, need_to_create_worker() is guaranteed to be false and
- * may_start_working() true.
+ * On return, need_to_create_worker() is guaranteed to be %false and
+ * may_start_working() %true.
*
* LOCKING:
* spin_lock_irq(pool->lock) which may be released and regrabbed
* manager.
*
* RETURNS:
- * false if no action was taken and pool->lock stayed locked, true
+ * %false if no action was taken and pool->lock stayed locked, %true
* otherwise.
*/
static bool maybe_create_worker(struct worker_pool *pool)
del_timer_sync(&pool->mayday_timer);
spin_lock_irq(&pool->lock);
start_worker(worker);
- BUG_ON(need_to_create_worker(pool));
+ if (WARN_ON_ONCE(need_to_create_worker(pool)))
+ goto restart;
return true;
}
* multiple times. Called only from manager.
*
* RETURNS:
- * false if no action was taken and pool->lock stayed locked, true
+ * %false if no action was taken and pool->lock stayed locked, %true
* otherwise.
*/
static bool maybe_destroy_workers(struct worker_pool *pool)
struct worker_pool *pool = worker->pool;
bool ret = false;
- if (pool->flags & POOL_MANAGING_WORKERS)
+ /*
+ * Managership is governed by two mutexes - manager_arb and
+ * manager_mutex. manager_arb handles arbitration of manager role.
+ * Anyone who successfully grabs manager_arb wins the arbitration
+ * and becomes the manager. mutex_trylock() on pool->manager_arb
+ * failure while holding pool->lock reliably indicates that someone
+ * else is managing the pool and the worker which failed trylock
+ * can proceed to executing work items. This means that anyone
+ * grabbing manager_arb is responsible for actually performing
+ * manager duties. If manager_arb is grabbed and released without
+ * actual management, the pool may stall indefinitely.
+ *
+ * manager_mutex is used for exclusion of actual management
+ * operations. The holder of manager_mutex can be sure that none
+ * of management operations, including creation and destruction of
+ * workers, won't take place until the mutex is released. Because
+ * manager_mutex doesn't interfere with manager role arbitration,
+ * it is guaranteed that the pool's management, though it may be
+ * delayed, won't be disturbed by someone else grabbing
+ * manager_mutex.
+ */
+ if (!mutex_trylock(&pool->manager_arb))
return ret;
- pool->flags |= POOL_MANAGING_WORKERS;
-
/*
- * To simplify both worker management and CPU hotplug, hold off
- * management while hotplug is in progress. CPU hotplug path can't
- * grab %POOL_MANAGING_WORKERS to achieve this because that can
- * lead to idle worker depletion (all become busy thinking someone
- * else is managing) which in turn can result in deadlock under
- * extreme circumstances. Use @pool->assoc_mutex to synchronize
- * manager against CPU hotplug.
- *
- * assoc_mutex would always be free unless CPU hotplug is in
- * progress. trylock first without dropping @pool->lock.
+ * With manager arbitration won, manager_mutex would be free in
+ * most cases. trylock first without dropping @pool->lock.
*/
- if (unlikely(!mutex_trylock(&pool->assoc_mutex))) {
+ if (unlikely(!mutex_trylock(&pool->manager_mutex))) {
spin_unlock_irq(&pool->lock);
- mutex_lock(&pool->assoc_mutex);
- /*
- * CPU hotplug could have happened while we were waiting
- * for assoc_mutex. Hotplug itself can't handle us
- * because manager isn't either on idle or busy list, and
- * @pool's state and ours could have deviated.
- *
- * As hotplug is now excluded via assoc_mutex, we can
- * simply try to bind. It will succeed or fail depending
- * on @pool's current state. Try it and adjust
- * %WORKER_UNBOUND accordingly.
- */
- if (worker_maybe_bind_and_lock(worker))
- worker->flags &= ~WORKER_UNBOUND;
- else
- worker->flags |= WORKER_UNBOUND;
-
+ mutex_lock(&pool->manager_mutex);
ret = true;
}
ret |= maybe_destroy_workers(pool);
ret |= maybe_create_worker(pool);
- pool->flags &= ~POOL_MANAGING_WORKERS;
- mutex_unlock(&pool->assoc_mutex);
+ mutex_unlock(&pool->manager_mutex);
+ mutex_unlock(&pool->manager_arb);
return ret;
}
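The manager arbitration above reduces to a small, reusable locking idiom:
one mutex decides *who* manages, the other excludes concurrent management
*operations*. A minimal standalone sketch (my_pool and my_manage are
illustrative names, not part of the kernel API):

	#include <linux/mutex.h>

	struct my_pool {
		struct mutex arb;	/* arbitration of the manager role */
		struct mutex op_mutex;	/* excludes management operations */
	};

	static bool my_manage(struct my_pool *p)
	{
		if (!mutex_trylock(&p->arb))
			return false;	/* someone else already manages */
		mutex_lock(&p->op_mutex);
		/* ... create/destroy workers safely here ... */
		mutex_unlock(&p->op_mutex);
		mutex_unlock(&p->arb);
		return true;
	}

A holder of op_mutex alone can rely on management being quiescent without
ever competing for the manager role itself.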
* worker_thread - the worker thread function
* @__worker: self
*
- * The worker thread function. There are NR_CPU_WORKER_POOLS dynamic pools
- * of these per each cpu. These workers process all works regardless of
- * their specific target workqueue. The only exception is works which
- * belong to workqueues with a rescuer which will be explained in
- * rescuer_thread().
+ * The worker thread function. All workers belong to a worker_pool -
+ * either a per-cpu one or a dynamic unbound one. These workers process all
+ * work items regardless of their specific target workqueue. The only
+ * exception is work items which belong to workqueues with a rescuer which
+ * will be explained in rescuer_thread().
*/
static int worker_thread(void *__worker)
{
woke_up:
spin_lock_irq(&pool->lock);
- /* we are off idle list if destruction or rebind is requested */
- if (unlikely(list_empty(&worker->entry))) {
+ /* am I supposed to die? */
+ if (unlikely(worker->flags & WORKER_DIE)) {
spin_unlock_irq(&pool->lock);
-
- /* if DIE is set, destruction is requested */
- if (worker->flags & WORKER_DIE) {
- worker->task->flags &= ~PF_WQ_WORKER;
- return 0;
- }
-
- /* otherwise, rebind */
- idle_worker_rebind(worker);
- goto woke_up;
+ WARN_ON_ONCE(!list_empty(&worker->entry));
+ worker->task->flags &= ~PF_WQ_WORKER;
+ return 0;
}
worker_leave_idle(worker);
* preparing to process a work or actually processing it.
* Make sure nobody diddled with it while I was sleeping.
*/
- BUG_ON(!list_empty(&worker->scheduled));
+ WARN_ON_ONCE(!list_empty(&worker->scheduled));
/*
- * When control reaches this point, we're guaranteed to have
- * at least one idle worker or that someone else has already
- * assumed the manager role.
+ * Finish PREP stage. We're guaranteed to have at least one idle
+ * worker or that someone else has already assumed the manager
+ * role. This is where @worker starts participating in concurrency
+ * management if applicable and concurrency management is restored
+ * after being rebound. See rebind_workers() for details.
*/
- worker_clr_flags(worker, WORKER_PREP);
+ worker_clr_flags(worker, WORKER_PREP | WORKER_REBOUND);
do {
struct work_struct *work =
* @__rescuer: self
*
* Workqueue rescuer thread function. There's one rescuer for each
- * workqueue which has WQ_RESCUER set.
+ * workqueue which has WQ_MEM_RECLAIM set.
*
* Regular work processing on a pool may block trying to create a new
* worker which uses GFP_KERNEL allocation which has slight chance of
struct worker *rescuer = __rescuer;
struct workqueue_struct *wq = rescuer->rescue_wq;
struct list_head *scheduled = &rescuer->scheduled;
- bool is_unbound = wq->flags & WQ_UNBOUND;
- unsigned int cpu;
set_user_nice(current, RESCUER_NICE_LEVEL);
return 0;
}
- /*
- * See whether any cpu is asking for help. Unbounded
- * workqueues use cpu 0 in mayday_mask for CPU_UNBOUND.
- */
- for_each_mayday_cpu(cpu, wq->mayday_mask) {
- unsigned int tcpu = is_unbound ? WORK_CPU_UNBOUND : cpu;
- struct pool_workqueue *pwq = get_pwq(tcpu, wq);
+ /* see whether any pwq is asking for help */
+ spin_lock_irq(&wq_mayday_lock);
+
+ while (!list_empty(&wq->maydays)) {
+ struct pool_workqueue *pwq = list_first_entry(&wq->maydays,
+ struct pool_workqueue, mayday_node);
struct worker_pool *pool = pwq->pool;
struct work_struct *work, *n;
__set_current_state(TASK_RUNNING);
- mayday_clear_cpu(cpu, wq->mayday_mask);
+ list_del_init(&pwq->mayday_node);
+
+ spin_unlock_irq(&wq_mayday_lock);
/* migrate to the target cpu if possible */
+ worker_maybe_bind_and_lock(pool);
rescuer->pool = pool;
- worker_maybe_bind_and_lock(rescuer);
/*
* Slurp in all works issued via this workqueue and
* process'em.
*/
- BUG_ON(!list_empty(&rescuer->scheduled));
+ WARN_ON_ONCE(!list_empty(&rescuer->scheduled));
list_for_each_entry_safe(work, n, &pool->worklist, entry)
if (get_work_pwq(work) == pwq)
move_linked_works(work, scheduled, &n);
if (keep_working(pool))
wake_up_worker(pool);
- spin_unlock_irq(&pool->lock);
+ rescuer->pool = NULL;
+ spin_unlock(&pool->lock);
+ spin_lock(&wq_mayday_lock);
}
+ spin_unlock_irq(&wq_mayday_lock);
+
/* rescuers should never participate in concurrency management */
WARN_ON_ONCE(!(rescuer->flags & WORKER_NOT_RUNNING));
schedule();
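The mayday handling above follows a common pattern: detach one entry at a
time under wq_mayday_lock, then drop the lock while servicing it so the
service work can sleep. A hedged standalone sketch (struct item and the
other names are illustrative only):

	#include <linux/list.h>
	#include <linux/spinlock.h>

	struct item {
		struct list_head node;
	};

	static void service_all(spinlock_t *lock, struct list_head *head)
	{
		spin_lock_irq(lock);
		while (!list_empty(head)) {
			struct item *it = list_first_entry(head,
							   struct item, node);

			list_del_init(&it->node);
			spin_unlock_irq(lock);
			/* ... service @it without holding @lock ... */
			spin_lock_irq(lock);
		}
		spin_unlock_irq(lock);
	}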
* advanced to @work_color.
*
* CONTEXT:
- * mutex_lock(wq->flush_mutex).
+ * mutex_lock(wq->mutex).
*
* RETURNS:
* %true if @flush_color >= 0 and there's something to flush. %false
int flush_color, int work_color)
{
bool wait = false;
- unsigned int cpu;
+ struct pool_workqueue *pwq;
if (flush_color >= 0) {
- BUG_ON(atomic_read(&wq->nr_pwqs_to_flush));
+ WARN_ON_ONCE(atomic_read(&wq->nr_pwqs_to_flush));
atomic_set(&wq->nr_pwqs_to_flush, 1);
}
- for_each_pwq_cpu(cpu, wq) {
- struct pool_workqueue *pwq = get_pwq(cpu, wq);
+ for_each_pwq(pwq, wq) {
struct worker_pool *pool = pwq->pool;
spin_lock_irq(&pool->lock);
if (flush_color >= 0) {
- BUG_ON(pwq->flush_color != -1);
+ WARN_ON_ONCE(pwq->flush_color != -1);
if (pwq->nr_in_flight[flush_color]) {
pwq->flush_color = flush_color;
}
if (work_color >= 0) {
- BUG_ON(work_color != work_next_color(pwq->work_color));
+ WARN_ON_ONCE(work_color != work_next_color(pwq->work_color));
pwq->work_color = work_color;
}
* flush_workqueue - ensure that any scheduled work has run to completion.
* @wq: workqueue to flush
*
- * Forces execution of the workqueue and blocks until its completion.
- * This is typically used in driver shutdown handlers.
- *
- * We sleep until all works which were queued on entry have been handled,
- * but we are not livelocked by new incoming ones.
+ * This function sleeps until all work items which were queued on entry
+ * have finished execution, but it is not livelocked by new incoming ones.
*/
void flush_workqueue(struct workqueue_struct *wq)
{
lock_map_acquire(&wq->lockdep_map);
lock_map_release(&wq->lockdep_map);
- mutex_lock(&wq->flush_mutex);
+ mutex_lock(&wq->mutex);
/*
* Start-to-wait phase
* becomes our flush_color and work_color is advanced
* by one.
*/
- BUG_ON(!list_empty(&wq->flusher_overflow));
+ WARN_ON_ONCE(!list_empty(&wq->flusher_overflow));
this_flusher.flush_color = wq->work_color;
wq->work_color = next_color;
if (!wq->first_flusher) {
/* no flush in progress, become the first flusher */
- BUG_ON(wq->flush_color != this_flusher.flush_color);
+ WARN_ON_ONCE(wq->flush_color != this_flusher.flush_color);
wq->first_flusher = &this_flusher;
}
} else {
/* wait in queue */
- BUG_ON(wq->flush_color == this_flusher.flush_color);
+ WARN_ON_ONCE(wq->flush_color == this_flusher.flush_color);
list_add_tail(&this_flusher.list, &wq->flusher_queue);
flush_workqueue_prep_pwqs(wq, -1, wq->work_color);
}
list_add_tail(&this_flusher.list, &wq->flusher_overflow);
}
- mutex_unlock(&wq->flush_mutex);
+ mutex_unlock(&wq->mutex);
wait_for_completion(&this_flusher.done);
if (wq->first_flusher != &this_flusher)
return;
- mutex_lock(&wq->flush_mutex);
+ mutex_lock(&wq->mutex);
/* we might have raced, check again with mutex held */
if (wq->first_flusher != &this_flusher)
wq->first_flusher = NULL;
- BUG_ON(!list_empty(&this_flusher.list));
- BUG_ON(wq->flush_color != this_flusher.flush_color);
+ WARN_ON_ONCE(!list_empty(&this_flusher.list));
+ WARN_ON_ONCE(wq->flush_color != this_flusher.flush_color);
while (true) {
struct wq_flusher *next, *tmp;
complete(&next->done);
}
- BUG_ON(!list_empty(&wq->flusher_overflow) &&
- wq->flush_color != work_next_color(wq->work_color));
+ WARN_ON_ONCE(!list_empty(&wq->flusher_overflow) &&
+ wq->flush_color != work_next_color(wq->work_color));
/* this flush_color is finished, advance by one */
wq->flush_color = work_next_color(wq->flush_color);
}
if (list_empty(&wq->flusher_queue)) {
- BUG_ON(wq->flush_color != wq->work_color);
+ WARN_ON_ONCE(wq->flush_color != wq->work_color);
break;
}
* Need to flush more colors. Make the next flusher
* the new first flusher and arm pwqs.
*/
- BUG_ON(wq->flush_color == wq->work_color);
- BUG_ON(wq->flush_color != next->flush_color);
+ WARN_ON_ONCE(wq->flush_color == wq->work_color);
+ WARN_ON_ONCE(wq->flush_color != next->flush_color);
list_del_init(&next->list);
wq->first_flusher = next;
}
out_unlock:
- mutex_unlock(&wq->flush_mutex);
+ mutex_unlock(&wq->mutex);
}
EXPORT_SYMBOL_GPL(flush_workqueue);
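For callers, the semantics above boil down to a simple barrier. A hedged
usage sketch (my_wq and my_work are placeholders):

	static void example_flush(struct workqueue_struct *my_wq,
				  struct work_struct *my_work)
	{
		queue_work(my_wq, my_work);
		flush_workqueue(my_wq);	/* @my_work has finished on return */
	}

Work items queued after flush_workqueue() begins are not waited for.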
void drain_workqueue(struct workqueue_struct *wq)
{
unsigned int flush_cnt = 0;
- unsigned int cpu;
+ struct pool_workqueue *pwq;
/*
* __queue_work() needs to test whether there are drainers, is much
* hotter than drain_workqueue() and already looks at @wq->flags.
- * Use WQ_DRAINING so that queue doesn't have to check nr_drainers.
+ * Use __WQ_DRAINING so that queue doesn't have to check nr_drainers.
*/
- spin_lock(&workqueue_lock);
+ mutex_lock(&wq->mutex);
if (!wq->nr_drainers++)
- wq->flags |= WQ_DRAINING;
- spin_unlock(&workqueue_lock);
+ wq->flags |= __WQ_DRAINING;
+ mutex_unlock(&wq->mutex);
reflush:
flush_workqueue(wq);
- for_each_pwq_cpu(cpu, wq) {
- struct pool_workqueue *pwq = get_pwq(cpu, wq);
+ mutex_lock(&wq->mutex);
+
+ for_each_pwq(pwq, wq) {
bool drained;
spin_lock_irq(&pwq->pool->lock);
if (++flush_cnt == 10 ||
(flush_cnt % 100 == 0 && flush_cnt <= 1000))
- pr_warn("workqueue %s: flush on destruction isn't complete after %u tries\n",
+ pr_warn("workqueue %s: drain_workqueue() isn't complete after %u tries\n",
wq->name, flush_cnt);
+
+ mutex_unlock(&wq->mutex);
goto reflush;
}
- spin_lock(&workqueue_lock);
if (!--wq->nr_drainers)
- wq->flags &= ~WQ_DRAINING;
- spin_unlock(&workqueue_lock);
+ wq->flags &= ~__WQ_DRAINING;
+ mutex_unlock(&wq->mutex);
}
EXPORT_SYMBOL_GPL(drain_workqueue);
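Unlike a plain flush, draining also defeats self-requeueing work items. A
hedged teardown sketch (example names only; note that destroy_workqueue()
already drains internally, so the explicit call is purely illustrative):

	static void example_teardown(struct workqueue_struct *my_wq)
	{
		drain_workqueue(my_wq);		/* waits out requeue chains */
		destroy_workqueue(my_wq);
	}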
struct pool_workqueue *pwq;
might_sleep();
+
+ local_irq_disable();
pool = get_work_pool(work);
- if (!pool)
+ if (!pool) {
+ local_irq_enable();
return false;
+ }
- spin_lock_irq(&pool->lock);
+ spin_lock(&pool->lock);
/* see the comment in try_to_grab_pending() with the same code */
pwq = get_work_pwq(work);
if (pwq) {
* flusher is not running on the same workqueue by verifying write
* access.
*/
- if (pwq->wq->saved_max_active == 1 || pwq->wq->flags & WQ_RESCUER)
+ if (pwq->wq->saved_max_active == 1 || pwq->wq->rescuer)
lock_map_acquire(&pwq->wq->lockdep_map);
else
lock_map_acquire_read(&pwq->wq->lockdep_map);
}
EXPORT_SYMBOL(cancel_delayed_work_sync);
-/**
- * schedule_work_on - put work task on a specific cpu
- * @cpu: cpu to put the work task on
- * @work: job to be done
- *
- * This puts a job on a specific cpu
- */
-bool schedule_work_on(int cpu, struct work_struct *work)
-{
- return queue_work_on(cpu, system_wq, work);
-}
-EXPORT_SYMBOL(schedule_work_on);
-
-/**
- * schedule_work - put work task in global workqueue
- * @work: job to be done
- *
- * Returns %false if @work was already on the kernel-global workqueue and
- * %true otherwise.
- *
- * This puts a job in the kernel-global workqueue if it was not already
- * queued and leaves it in the same position on the kernel-global
- * workqueue otherwise.
- */
-bool schedule_work(struct work_struct *work)
-{
- return queue_work(system_wq, work);
-}
-EXPORT_SYMBOL(schedule_work);
-
-/**
- * schedule_delayed_work_on - queue work in global workqueue on CPU after delay
- * @cpu: cpu to use
- * @dwork: job to be done
- * @delay: number of jiffies to wait
- *
- * After waiting for a given time this puts a job in the kernel-global
- * workqueue on the specified CPU.
- */
-bool schedule_delayed_work_on(int cpu, struct delayed_work *dwork,
- unsigned long delay)
-{
- return queue_delayed_work_on(cpu, system_wq, dwork, delay);
-}
-EXPORT_SYMBOL(schedule_delayed_work_on);
-
-/**
- * schedule_delayed_work - put work task in global workqueue after delay
- * @dwork: job to be done
- * @delay: number of jiffies to wait or 0 for immediate execution
- *
- * After waiting for a given time this puts a job in the kernel-global
- * workqueue.
- */
-bool schedule_delayed_work(struct delayed_work *dwork, unsigned long delay)
-{
- return queue_delayed_work(system_wq, dwork, delay);
-}
-EXPORT_SYMBOL(schedule_delayed_work);
-
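These out-of-line wrappers are deleted rather than lost: presumably they
survive as trivial static inlines in include/linux/workqueue.h, along the
lines of:

	static inline bool schedule_work(struct work_struct *work)
	{
		return queue_work(system_wq, work);
	}

which preserves the exported behavior while avoiding a function call.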
/**
* schedule_on_each_cpu - execute a function synchronously on each online CPU
* @func: the function to call
}
EXPORT_SYMBOL_GPL(execute_in_process_context);
-int keventd_up(void)
+#ifdef CONFIG_SYSFS
+/*
+ * Workqueues with the WQ_SYSFS flag set are visible to userland via
+ * /sys/bus/workqueue/devices/WQ_NAME. All visible workqueues have the
+ * following attributes.
+ *
+ * per_cpu RO bool : whether the workqueue is per-cpu or unbound
+ * max_active RW int : maximum number of in-flight work items
+ *
+ * Unbound workqueues have the following extra attributes.
+ *
+ * id RO int : the associated pool ID
+ * nice RW int : nice value of the workers
+ * cpumask RW mask : bitmask of allowed CPUs for the workers
+ */
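Creating a workqueue that shows up under /sys/bus/workqueue/devices only
requires the flag. A hedged sketch (example names only):

	#include <linux/workqueue.h>

	static struct workqueue_struct *example_wq;

	static int __init example_init(void)
	{
		example_wq = alloc_workqueue("example_wq",
					     WQ_UNBOUND | WQ_SYSFS, 0);
		return example_wq ? 0 : -ENOMEM;
	}

The unbound attributes (nice, cpumask) then become runtime-tunable from
userland.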
+struct wq_device {
+ struct workqueue_struct *wq;
+ struct device dev;
+};
+
+static struct workqueue_struct *dev_to_wq(struct device *dev)
{
- return system_wq != NULL;
+ struct wq_device *wq_dev = container_of(dev, struct wq_device, dev);
+
+ return wq_dev->wq;
}
-static int alloc_pwqs(struct workqueue_struct *wq)
+static ssize_t wq_per_cpu_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
{
- /*
- * pwqs are forced aligned according to WORK_STRUCT_FLAG_BITS.
- * Make sure that the alignment isn't lower than that of
- * unsigned long long.
- */
- const size_t size = sizeof(struct pool_workqueue);
- const size_t align = max_t(size_t, 1 << WORK_STRUCT_FLAG_BITS,
- __alignof__(unsigned long long));
+ struct workqueue_struct *wq = dev_to_wq(dev);
- if (!(wq->flags & WQ_UNBOUND))
- wq->pool_wq.pcpu = __alloc_percpu(size, align);
- else {
- void *ptr;
+ return scnprintf(buf, PAGE_SIZE, "%d\n", (bool)!(wq->flags & WQ_UNBOUND));
+}
- /*
- * Allocate enough room to align pwq and put an extra
- * pointer at the end pointing back to the originally
- * allocated pointer which will be used for free.
- */
- ptr = kzalloc(size + align + sizeof(void *), GFP_KERNEL);
- if (ptr) {
- wq->pool_wq.single = PTR_ALIGN(ptr, align);
- *(void **)(wq->pool_wq.single + 1) = ptr;
- }
- }
+static ssize_t wq_max_active_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct workqueue_struct *wq = dev_to_wq(dev);
- /* just in case, make sure it's actually aligned */
- BUG_ON(!IS_ALIGNED(wq->pool_wq.v, align));
- return wq->pool_wq.v ? 0 : -ENOMEM;
+ return scnprintf(buf, PAGE_SIZE, "%d\n", wq->saved_max_active);
}
-static void free_pwqs(struct workqueue_struct *wq)
+static ssize_t wq_max_active_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
{
- if (!(wq->flags & WQ_UNBOUND))
- free_percpu(wq->pool_wq.pcpu);
- else if (wq->pool_wq.single) {
- /* the pointer to free is stored right after the pwq */
- kfree(*(void **)(wq->pool_wq.single + 1));
+ struct workqueue_struct *wq = dev_to_wq(dev);
+ int val;
+
+ if (sscanf(buf, "%d", &val) != 1 || val <= 0)
+ return -EINVAL;
+
+ workqueue_set_max_active(wq, val);
+ return count;
+}
+
+static struct device_attribute wq_sysfs_attrs[] = {
+ __ATTR(per_cpu, 0444, wq_per_cpu_show, NULL),
+ __ATTR(max_active, 0644, wq_max_active_show, wq_max_active_store),
+ __ATTR_NULL,
+};
+
+static ssize_t wq_pool_ids_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct workqueue_struct *wq = dev_to_wq(dev);
+ const char *delim = "";
+ int node, written = 0;
+
+ rcu_read_lock_sched();
+ for_each_node(node) {
+ written += scnprintf(buf + written, PAGE_SIZE - written,
+ "%s%d:%d", delim, node,
+ unbound_pwq_by_node(wq, node)->pool->id);
+ delim = " ";
}
+ written += scnprintf(buf + written, PAGE_SIZE - written, "\n");
+ rcu_read_unlock_sched();
+
+ return written;
}
-static int wq_clamp_max_active(int max_active, unsigned int flags,
- const char *name)
+static ssize_t wq_nice_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
{
- int lim = flags & WQ_UNBOUND ? WQ_UNBOUND_MAX_ACTIVE : WQ_MAX_ACTIVE;
+ struct workqueue_struct *wq = dev_to_wq(dev);
+ int written;
- if (max_active < 1 || max_active > lim)
- pr_warn("workqueue: max_active %d requested for %s is out of range, clamping between %d and %d\n",
- max_active, name, 1, lim);
+ mutex_lock(&wq->mutex);
+ written = scnprintf(buf, PAGE_SIZE, "%d\n", wq->unbound_attrs->nice);
+ mutex_unlock(&wq->mutex);
- return clamp_val(max_active, 1, lim);
+ return written;
}
-struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
- unsigned int flags,
- int max_active,
- struct lock_class_key *key,
- const char *lock_name, ...)
+/* prepare workqueue_attrs for sysfs store operations */
+static struct workqueue_attrs *wq_sysfs_prep_attrs(struct workqueue_struct *wq)
{
- va_list args, args1;
- struct workqueue_struct *wq;
- unsigned int cpu;
- size_t namelen;
+ struct workqueue_attrs *attrs;
- /* determine namelen, allocate wq and format name */
- va_start(args, lock_name);
- va_copy(args1, args);
- namelen = vsnprintf(NULL, 0, fmt, args) + 1;
+ attrs = alloc_workqueue_attrs(GFP_KERNEL);
+ if (!attrs)
+ return NULL;
- wq = kzalloc(sizeof(*wq) + namelen, GFP_KERNEL);
- if (!wq)
- goto err;
+ mutex_lock(&wq->mutex);
+ copy_workqueue_attrs(attrs, wq->unbound_attrs);
+ mutex_unlock(&wq->mutex);
+ return attrs;
+}
- vsnprintf(wq->name, namelen, fmt, args1);
- va_end(args);
- va_end(args1);
+static ssize_t wq_nice_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct workqueue_struct *wq = dev_to_wq(dev);
+ struct workqueue_attrs *attrs;
+ int ret;
- /*
- * Workqueues which may be used during memory reclaim should
- * have a rescuer to guarantee forward progress.
- */
- if (flags & WQ_MEM_RECLAIM)
- flags |= WQ_RESCUER;
+ attrs = wq_sysfs_prep_attrs(wq);
+ if (!attrs)
+ return -ENOMEM;
- max_active = max_active ?: WQ_DFL_ACTIVE;
- max_active = wq_clamp_max_active(max_active, flags, wq->name);
+ if (sscanf(buf, "%d", &attrs->nice) == 1 &&
+ attrs->nice >= -20 && attrs->nice <= 19)
+ ret = apply_workqueue_attrs(wq, attrs);
+ else
+ ret = -EINVAL;
- /* init wq */
- wq->flags = flags;
- wq->saved_max_active = max_active;
- mutex_init(&wq->flush_mutex);
- atomic_set(&wq->nr_pwqs_to_flush, 0);
- INIT_LIST_HEAD(&wq->flusher_queue);
- INIT_LIST_HEAD(&wq->flusher_overflow);
+ free_workqueue_attrs(attrs);
+ return ret ?: count;
+}
- lockdep_init_map(&wq->lockdep_map, lock_name, key, 0);
- INIT_LIST_HEAD(&wq->list);
+static ssize_t wq_cpumask_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct workqueue_struct *wq = dev_to_wq(dev);
+ int written;
- if (alloc_pwqs(wq) < 0)
- goto err;
+ mutex_lock(&wq->mutex);
+ written = cpumask_scnprintf(buf, PAGE_SIZE, wq->unbound_attrs->cpumask);
+ mutex_unlock(&wq->mutex);
- for_each_pwq_cpu(cpu, wq) {
- struct pool_workqueue *pwq = get_pwq(cpu, wq);
+ written += scnprintf(buf + written, PAGE_SIZE - written, "\n");
+ return written;
+}
- BUG_ON((unsigned long)pwq & WORK_STRUCT_FLAG_MASK);
- pwq->pool = get_std_worker_pool(cpu, flags & WQ_HIGHPRI);
- pwq->wq = wq;
- pwq->flush_color = -1;
- pwq->max_active = max_active;
- INIT_LIST_HEAD(&pwq->delayed_works);
- }
+static ssize_t wq_cpumask_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct workqueue_struct *wq = dev_to_wq(dev);
+ struct workqueue_attrs *attrs;
+ int ret;
- if (flags & WQ_RESCUER) {
- struct worker *rescuer;
+ attrs = wq_sysfs_prep_attrs(wq);
+ if (!attrs)
+ return -ENOMEM;
- if (!alloc_mayday_mask(&wq->mayday_mask, GFP_KERNEL))
- goto err;
+ ret = cpumask_parse(buf, attrs->cpumask);
+ if (!ret)
+ ret = apply_workqueue_attrs(wq, attrs);
- wq->rescuer = rescuer = alloc_worker();
- if (!rescuer)
- goto err;
+ free_workqueue_attrs(attrs);
+ return ret ?: count;
+}
- rescuer->rescue_wq = wq;
- rescuer->task = kthread_create(rescuer_thread, rescuer, "%s",
- wq->name);
- if (IS_ERR(rescuer->task))
- goto err;
+static ssize_t wq_numa_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct workqueue_struct *wq = dev_to_wq(dev);
+ int written;
- rescuer->task->flags |= PF_THREAD_BOUND;
- wake_up_process(rescuer->task);
+ mutex_lock(&wq->mutex);
+ written = scnprintf(buf, PAGE_SIZE, "%d\n",
+ !wq->unbound_attrs->no_numa);
+ mutex_unlock(&wq->mutex);
+
+ return written;
+}
+
+static ssize_t wq_numa_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct workqueue_struct *wq = dev_to_wq(dev);
+ struct workqueue_attrs *attrs;
+ int v, ret;
+
+ attrs = wq_sysfs_prep_attrs(wq);
+ if (!attrs)
+ return -ENOMEM;
+
+ ret = -EINVAL;
+ if (sscanf(buf, "%d", &v) == 1) {
+ attrs->no_numa = !v;
+ ret = apply_workqueue_attrs(wq, attrs);
}
- /*
- * workqueue_lock protects global freeze state and workqueues
- * list. Grab it, set max_active accordingly and add the new
- * workqueue to workqueues list.
- */
- spin_lock(&workqueue_lock);
+ free_workqueue_attrs(attrs);
+ return ret ?: count;
+}
- if (workqueue_freezing && wq->flags & WQ_FREEZABLE)
- for_each_pwq_cpu(cpu, wq)
- get_pwq(cpu, wq)->max_active = 0;
+static struct device_attribute wq_sysfs_unbound_attrs[] = {
+ __ATTR(pool_ids, 0444, wq_pool_ids_show, NULL),
+ __ATTR(nice, 0644, wq_nice_show, wq_nice_store),
+ __ATTR(cpumask, 0644, wq_cpumask_show, wq_cpumask_store),
+ __ATTR(numa, 0644, wq_numa_show, wq_numa_store),
+ __ATTR_NULL,
+};
- list_add(&wq->list, &workqueues);
+static struct bus_type wq_subsys = {
+ .name = "workqueue",
+ .dev_attrs = wq_sysfs_attrs,
+};
- spin_unlock(&workqueue_lock);
+static int __init wq_sysfs_init(void)
+{
+ return subsys_virtual_register(&wq_subsys, NULL);
+}
+core_initcall(wq_sysfs_init);
- return wq;
-err:
- if (wq) {
- free_pwqs(wq);
- free_mayday_mask(wq->mayday_mask);
- kfree(wq->rescuer);
- kfree(wq);
- }
- return NULL;
+static void wq_device_release(struct device *dev)
+{
+ struct wq_device *wq_dev = container_of(dev, struct wq_device, dev);
+
+ kfree(wq_dev);
}
-EXPORT_SYMBOL_GPL(__alloc_workqueue_key);
/**
- * destroy_workqueue - safely terminate a workqueue
- * @wq: target workqueue
+ * workqueue_sysfs_register - make a workqueue visible in sysfs
+ * @wq: the workqueue to register
*
- * Safely destroy a workqueue. All work currently pending will be done first.
+ * Expose @wq in sysfs under /sys/bus/workqueue/devices.
+ * alloc_workqueue*() automatically calls this function if WQ_SYSFS is set
+ * which is the preferred method.
+ *
+ * A workqueue user should use this function directly iff it wants to apply
+ * workqueue_attrs before making the workqueue visible in sysfs; otherwise,
+ * apply_workqueue_attrs() may race against userland updating the
+ * attributes.
+ *
+ * Returns 0 on success, -errno on failure.
*/
-void destroy_workqueue(struct workqueue_struct *wq)
+int workqueue_sysfs_register(struct workqueue_struct *wq)
{
- unsigned int cpu;
-
- /* drain it before proceeding with destruction */
- drain_workqueue(wq);
+ struct wq_device *wq_dev;
+ int ret;
/*
- * wq list is used to freeze wq, remove from list after
- * flushing is complete in case freeze races us.
+	 * Adjusting max_active or creating new pwqs by applying
+	 * attributes breaks the ordering guarantee.  Disallow exposing ordered
+ * workqueues.
*/
- spin_lock(&workqueue_lock);
- list_del(&wq->list);
- spin_unlock(&workqueue_lock);
+ if (WARN_ON(wq->flags & __WQ_ORDERED))
+ return -EINVAL;
- /* sanity check */
- for_each_pwq_cpu(cpu, wq) {
- struct pool_workqueue *pwq = get_pwq(cpu, wq);
- int i;
+ wq->wq_dev = wq_dev = kzalloc(sizeof(*wq_dev), GFP_KERNEL);
+ if (!wq_dev)
+ return -ENOMEM;
+
+ wq_dev->wq = wq;
+ wq_dev->dev.bus = &wq_subsys;
+ wq_dev->dev.init_name = wq->name;
+ wq_dev->dev.release = wq_device_release;
+
+ /*
+ * unbound_attrs are created separately. Suppress uevent until
+ * everything is ready.
+ */
+ dev_set_uevent_suppress(&wq_dev->dev, true);
- for (i = 0; i < WORK_NR_COLORS; i++)
- BUG_ON(pwq->nr_in_flight[i]);
- BUG_ON(pwq->nr_active);
- BUG_ON(!list_empty(&pwq->delayed_works));
+ ret = device_register(&wq_dev->dev);
+ if (ret) {
+ kfree(wq_dev);
+ wq->wq_dev = NULL;
+ return ret;
}
- if (wq->flags & WQ_RESCUER) {
- kthread_stop(wq->rescuer->task);
- free_mayday_mask(wq->mayday_mask);
- kfree(wq->rescuer);
+ if (wq->flags & WQ_UNBOUND) {
+ struct device_attribute *attr;
+
+ for (attr = wq_sysfs_unbound_attrs; attr->attr.name; attr++) {
+ ret = device_create_file(&wq_dev->dev, attr);
+ if (ret) {
+ device_unregister(&wq_dev->dev);
+ wq->wq_dev = NULL;
+ return ret;
+ }
+ }
}
- free_pwqs(wq);
- kfree(wq);
+ kobject_uevent(&wq_dev->dev.kobj, KOBJ_ADD);
+ return 0;
}
-EXPORT_SYMBOL_GPL(destroy_workqueue);
/**
- * pwq_set_max_active - adjust max_active of a pwq
- * @pwq: target pool_workqueue
- * @max_active: new max_active value.
- *
- * Set @pwq->max_active to @max_active and activate delayed works if
- * increased.
+ * workqueue_sysfs_unregister - undo workqueue_sysfs_register()
+ * @wq: the workqueue to unregister
*
- * CONTEXT:
- * spin_lock_irq(pool->lock).
+ * If @wq is registered to sysfs by workqueue_sysfs_register(), unregister.
*/
-static void pwq_set_max_active(struct pool_workqueue *pwq, int max_active)
+static void workqueue_sysfs_unregister(struct workqueue_struct *wq)
{
- pwq->max_active = max_active;
+ struct wq_device *wq_dev = wq->wq_dev;
+
+ if (!wq->wq_dev)
+ return;
- while (!list_empty(&pwq->delayed_works) &&
- pwq->nr_active < pwq->max_active)
- pwq_activate_first_delayed(pwq);
+ wq->wq_dev = NULL;
+ device_unregister(&wq_dev->dev);
}
+#else /* CONFIG_SYSFS */
+static void workqueue_sysfs_unregister(struct workqueue_struct *wq) { }
+#endif /* CONFIG_SYSFS */
/**
- * workqueue_set_max_active - adjust max_active of a workqueue
- * @wq: target workqueue
- * @max_active: new max_active value.
+ * free_workqueue_attrs - free a workqueue_attrs
+ * @attrs: workqueue_attrs to free
*
- * Set max_active of @wq to @max_active.
+ * Undo alloc_workqueue_attrs().
+ */
+void free_workqueue_attrs(struct workqueue_attrs *attrs)
+{
+ if (attrs) {
+ free_cpumask_var(attrs->cpumask);
+ kfree(attrs);
+ }
+}
+
+/**
+ * alloc_workqueue_attrs - allocate a workqueue_attrs
+ * @gfp_mask: allocation mask to use
*
- * CONTEXT:
- * Don't call from IRQ context.
+ * Allocate a new workqueue_attrs, initialize with default settings and
+ * return it. Returns NULL on failure.
*/
-void workqueue_set_max_active(struct workqueue_struct *wq, int max_active)
+struct workqueue_attrs *alloc_workqueue_attrs(gfp_t gfp_mask)
{
- unsigned int cpu;
+ struct workqueue_attrs *attrs;
- max_active = wq_clamp_max_active(max_active, wq->flags, wq->name);
+ attrs = kzalloc(sizeof(*attrs), gfp_mask);
+ if (!attrs)
+ goto fail;
+ if (!alloc_cpumask_var(&attrs->cpumask, gfp_mask))
+ goto fail;
- spin_lock(&workqueue_lock);
+ cpumask_copy(attrs->cpumask, cpu_possible_mask);
+ return attrs;
+fail:
+ free_workqueue_attrs(attrs);
+ return NULL;
+}
- wq->saved_max_active = max_active;
+static void copy_workqueue_attrs(struct workqueue_attrs *to,
+ const struct workqueue_attrs *from)
+{
+ to->nice = from->nice;
+ cpumask_copy(to->cpumask, from->cpumask);
+}
- for_each_pwq_cpu(cpu, wq) {
- struct pool_workqueue *pwq = get_pwq(cpu, wq);
- struct worker_pool *pool = pwq->pool;
+/* hash value of the content of @attr */
+static u32 wqattrs_hash(const struct workqueue_attrs *attrs)
+{
+ u32 hash = 0;
- spin_lock_irq(&pool->lock);
+ hash = jhash_1word(attrs->nice, hash);
+ hash = jhash(cpumask_bits(attrs->cpumask),
+ BITS_TO_LONGS(nr_cpumask_bits) * sizeof(long), hash);
+ return hash;
+}
+
+/* content equality test */
+static bool wqattrs_equal(const struct workqueue_attrs *a,
+ const struct workqueue_attrs *b)
+{
+ if (a->nice != b->nice)
+ return false;
+ if (!cpumask_equal(a->cpumask, b->cpumask))
+ return false;
+ return true;
+}
+
+/**
+ * init_worker_pool - initialize a newly zalloc'd worker_pool
+ * @pool: worker_pool to initialize
+ *
+ * Initialize a newly zalloc'd @pool.  It also allocates @pool->attrs.
+ * Returns 0 on success, -errno on failure. Even on failure, all fields
+ * inside @pool proper are initialized and put_unbound_pool() can be called
+ * on @pool safely to release it.
+ */
+static int init_worker_pool(struct worker_pool *pool)
+{
+ spin_lock_init(&pool->lock);
+ pool->id = -1;
+ pool->cpu = -1;
+ pool->node = NUMA_NO_NODE;
+ pool->flags |= POOL_DISASSOCIATED;
+ INIT_LIST_HEAD(&pool->worklist);
+ INIT_LIST_HEAD(&pool->idle_list);
+ hash_init(pool->busy_hash);
+
+ init_timer_deferrable(&pool->idle_timer);
+ pool->idle_timer.function = idle_worker_timeout;
+ pool->idle_timer.data = (unsigned long)pool;
+
+ setup_timer(&pool->mayday_timer, pool_mayday_timeout,
+ (unsigned long)pool);
+
+ mutex_init(&pool->manager_arb);
+ mutex_init(&pool->manager_mutex);
+ idr_init(&pool->worker_idr);
+
+ INIT_HLIST_NODE(&pool->hash_node);
+ pool->refcnt = 1;
+
+ /* shouldn't fail above this point */
+ pool->attrs = alloc_workqueue_attrs(GFP_KERNEL);
+ if (!pool->attrs)
+ return -ENOMEM;
+ return 0;
+}
+
+static void rcu_free_pool(struct rcu_head *rcu)
+{
+ struct worker_pool *pool = container_of(rcu, struct worker_pool, rcu);
+
+ idr_destroy(&pool->worker_idr);
+ free_workqueue_attrs(pool->attrs);
+ kfree(pool);
+}
+
+/**
+ * put_unbound_pool - put a worker_pool
+ * @pool: worker_pool to put
+ *
+ * Put @pool.  If its refcnt reaches zero, it gets destroyed in a
+ * sched-RCU-safe manner.  get_unbound_pool() calls this function on its failure path
+ * and this function should be able to release pools which went through,
+ * successfully or not, init_worker_pool().
+ *
+ * Should be called with wq_pool_mutex held.
+ */
+static void put_unbound_pool(struct worker_pool *pool)
+{
+ struct worker *worker;
+
+ lockdep_assert_held(&wq_pool_mutex);
+
+ if (--pool->refcnt)
+ return;
+
+ /* sanity checks */
+ if (WARN_ON(!(pool->flags & POOL_DISASSOCIATED)) ||
+ WARN_ON(!list_empty(&pool->worklist)))
+ return;
+
+ /* release id and unhash */
+ if (pool->id >= 0)
+ idr_remove(&worker_pool_idr, pool->id);
+ hash_del(&pool->hash_node);
+
+ /*
+ * Become the manager and destroy all workers. Grabbing
+ * manager_arb prevents @pool's workers from blocking on
+ * manager_mutex.
+ */
+ mutex_lock(&pool->manager_arb);
+ mutex_lock(&pool->manager_mutex);
+ spin_lock_irq(&pool->lock);
+
+ while ((worker = first_worker(pool)))
+ destroy_worker(worker);
+ WARN_ON(pool->nr_workers || pool->nr_idle);
+
+ spin_unlock_irq(&pool->lock);
+ mutex_unlock(&pool->manager_mutex);
+ mutex_unlock(&pool->manager_arb);
+
+ /* shut down the timers */
+ del_timer_sync(&pool->idle_timer);
+ del_timer_sync(&pool->mayday_timer);
+
+ /* sched-RCU protected to allow dereferences from get_work_pool() */
+ call_rcu_sched(&pool->rcu, rcu_free_pool);
+}
+
+/**
+ * get_unbound_pool - get a worker_pool with the specified attributes
+ * @attrs: the attributes of the worker_pool to get
+ *
+ * Obtain a worker_pool which has the same attributes as @attrs, bump the
+ * reference count and return it. If there already is a matching
+ * worker_pool, it will be used; otherwise, this function attempts to
+ * create a new one. On failure, returns NULL.
+ *
+ * Should be called with wq_pool_mutex held.
+ */
+static struct worker_pool *get_unbound_pool(const struct workqueue_attrs *attrs)
+{
+ u32 hash = wqattrs_hash(attrs);
+ struct worker_pool *pool;
+ int node;
+
+ lockdep_assert_held(&wq_pool_mutex);
+
+ /* do we already have a matching pool? */
+ hash_for_each_possible(unbound_pool_hash, pool, hash_node, hash) {
+ if (wqattrs_equal(pool->attrs, attrs)) {
+ pool->refcnt++;
+ goto out_unlock;
+ }
+ }
+
+ /* nope, create a new one */
+ pool = kzalloc(sizeof(*pool), GFP_KERNEL);
+ if (!pool || init_worker_pool(pool) < 0)
+ goto fail;
+
+ if (workqueue_freezing)
+ pool->flags |= POOL_FREEZING;
+
+ lockdep_set_subclass(&pool->lock, 1); /* see put_pwq() */
+ copy_workqueue_attrs(pool->attrs, attrs);
+
+ /* if cpumask is contained inside a NUMA node, we belong to that node */
+ if (wq_numa_enabled) {
+ for_each_node(node) {
+ if (cpumask_subset(pool->attrs->cpumask,
+ wq_numa_possible_cpumask[node])) {
+ pool->node = node;
+ break;
+ }
+ }
+ }
+
+ if (worker_pool_assign_id(pool) < 0)
+ goto fail;
+
+ /* create and start the initial worker */
+ if (create_and_start_worker(pool) < 0)
+ goto fail;
+
+ /* install */
+ hash_add(unbound_pool_hash, &pool->hash_node, hash);
+out_unlock:
+ return pool;
+fail:
+ if (pool)
+ put_unbound_pool(pool);
+ return NULL;
+}
+
+static void rcu_free_pwq(struct rcu_head *rcu)
+{
+ kmem_cache_free(pwq_cache,
+ container_of(rcu, struct pool_workqueue, rcu));
+}
+
+/*
+ * Scheduled on system_wq by put_pwq() when an unbound pwq hits zero refcnt
+ * and needs to be destroyed.
+ */
+static void pwq_unbound_release_workfn(struct work_struct *work)
+{
+ struct pool_workqueue *pwq = container_of(work, struct pool_workqueue,
+ unbound_release_work);
+ struct workqueue_struct *wq = pwq->wq;
+ struct worker_pool *pool = pwq->pool;
+ bool is_last;
+
+ if (WARN_ON_ONCE(!(wq->flags & WQ_UNBOUND)))
+ return;
+
+ /*
+ * Unlink @pwq. Synchronization against wq->mutex isn't strictly
+ * necessary on release but do it anyway. It's easier to verify
+ * and consistent with the linking path.
+ */
+ mutex_lock(&wq->mutex);
+ list_del_rcu(&pwq->pwqs_node);
+ is_last = list_empty(&wq->pwqs);
+ mutex_unlock(&wq->mutex);
+
+ mutex_lock(&wq_pool_mutex);
+ put_unbound_pool(pool);
+ mutex_unlock(&wq_pool_mutex);
+
+ call_rcu_sched(&pwq->rcu, rcu_free_pwq);
+
+ /*
+ * If we're the last pwq going away, @wq is already dead and no one
+ * is gonna access it anymore. Free it.
+ */
+ if (is_last) {
+ free_workqueue_attrs(wq->unbound_attrs);
+ kfree(wq);
+ }
+}
+
+/**
+ * pwq_adjust_max_active - update a pwq's max_active to the current setting
+ * @pwq: target pool_workqueue
+ *
+ * If @pwq isn't freezing, set @pwq->max_active to the associated
+ * workqueue's saved_max_active and activate delayed work items
+ * accordingly. If @pwq is freezing, clear @pwq->max_active to zero.
+ */
+static void pwq_adjust_max_active(struct pool_workqueue *pwq)
+{
+ struct workqueue_struct *wq = pwq->wq;
+ bool freezable = wq->flags & WQ_FREEZABLE;
+
+ /* for @wq->saved_max_active */
+ lockdep_assert_held(&wq->mutex);
+
+ /* fast exit for non-freezable wqs */
+ if (!freezable && pwq->max_active == wq->saved_max_active)
+ return;
+
+ spin_lock_irq(&pwq->pool->lock);
+
+ if (!freezable || !(pwq->pool->flags & POOL_FREEZING)) {
+ pwq->max_active = wq->saved_max_active;
+
+ while (!list_empty(&pwq->delayed_works) &&
+ pwq->nr_active < pwq->max_active)
+ pwq_activate_first_delayed(pwq);
+
+ /*
+		 * Need to kick a worker after the wq is thawed or when an
+		 * unbound wq's max_active is bumped.  It's a slow path.  Do it always.
+ */
+ wake_up_worker(pwq->pool);
+ } else {
+ pwq->max_active = 0;
+ }
+
+ spin_unlock_irq(&pwq->pool->lock);
+}
+
+/* initialize newly alloced @pwq which is associated with @wq and @pool */
+static void init_pwq(struct pool_workqueue *pwq, struct workqueue_struct *wq,
+ struct worker_pool *pool)
+{
+ BUG_ON((unsigned long)pwq & WORK_STRUCT_FLAG_MASK);
+
+ memset(pwq, 0, sizeof(*pwq));
+
+ pwq->pool = pool;
+ pwq->wq = wq;
+ pwq->flush_color = -1;
+ pwq->refcnt = 1;
+ INIT_LIST_HEAD(&pwq->delayed_works);
+ INIT_LIST_HEAD(&pwq->pwqs_node);
+ INIT_LIST_HEAD(&pwq->mayday_node);
+ INIT_WORK(&pwq->unbound_release_work, pwq_unbound_release_workfn);
+}
+
+/* sync @pwq with the current state of its associated wq and link it */
+static void link_pwq(struct pool_workqueue *pwq)
+{
+ struct workqueue_struct *wq = pwq->wq;
+
+ lockdep_assert_held(&wq->mutex);
+
+ /* may be called multiple times, ignore if already linked */
+ if (!list_empty(&pwq->pwqs_node))
+ return;
+
+ /*
+ * Set the matching work_color. This is synchronized with
+ * wq->mutex to avoid confusing flush_workqueue().
+ */
+ pwq->work_color = wq->work_color;
+
+ /* sync max_active to the current setting */
+ pwq_adjust_max_active(pwq);
+
+ /* link in @pwq */
+ list_add_rcu(&pwq->pwqs_node, &wq->pwqs);
+}
+
+/* obtain a pool matching @attr and create a pwq associating the pool and @wq */
+static struct pool_workqueue *alloc_unbound_pwq(struct workqueue_struct *wq,
+ const struct workqueue_attrs *attrs)
+{
+ struct worker_pool *pool;
+ struct pool_workqueue *pwq;
+
+ lockdep_assert_held(&wq_pool_mutex);
+
+ pool = get_unbound_pool(attrs);
+ if (!pool)
+ return NULL;
+
+ pwq = kmem_cache_alloc_node(pwq_cache, GFP_KERNEL, pool->node);
+ if (!pwq) {
+ put_unbound_pool(pool);
+ return NULL;
+ }
+
+ init_pwq(pwq, wq, pool);
+ return pwq;
+}
+
+/* undo alloc_unbound_pwq(), used only in the error path */
+static void free_unbound_pwq(struct pool_workqueue *pwq)
+{
+ lockdep_assert_held(&wq_pool_mutex);
+
+ if (pwq) {
+ put_unbound_pool(pwq->pool);
+ kfree(pwq);
+ }
+}
+
+/**
+ * wq_calc_node_mask - calculate a wq_attrs' cpumask for the specified node
+ * @attrs: the wq_attrs of interest
+ * @node: the target NUMA node
+ * @cpu_going_down: if >= 0, the CPU to consider as offline
+ * @cpumask: outarg, the resulting cpumask
+ *
+ * Calculate the cpumask a workqueue with @attrs should use on @node. If
+ * @cpu_going_down is >= 0, that cpu is considered offline during
+ * calculation. The result is stored in @cpumask. This function returns
+ * %true if the resulting @cpumask is different from @attrs->cpumask,
+ * %false if equal.
+ *
+ * If NUMA affinity is not enabled, @attrs->cpumask is always used. If
+ * enabled and @node has online CPUs requested by @attrs, the returned
+ * cpumask is the intersection of the possible CPUs of @node and
+ * @attrs->cpumask.
+ *
+ * The caller is responsible for ensuring that the cpumask of @node stays
+ * stable.
+ */
+static bool wq_calc_node_cpumask(const struct workqueue_attrs *attrs, int node,
+ int cpu_going_down, cpumask_t *cpumask)
+{
+ if (!wq_numa_enabled || attrs->no_numa)
+ goto use_dfl;
+
+ /* does @node have any online CPUs @attrs wants? */
+ cpumask_and(cpumask, cpumask_of_node(node), attrs->cpumask);
+ if (cpu_going_down >= 0)
+ cpumask_clear_cpu(cpu_going_down, cpumask);
+
+ if (cpumask_empty(cpumask))
+ goto use_dfl;
+
+ /* yeap, return possible CPUs in @node that @attrs wants */
+ cpumask_and(cpumask, attrs->cpumask, wq_numa_possible_cpumask[node]);
+ return !cpumask_equal(cpumask, attrs->cpumask);
+
+use_dfl:
+ cpumask_copy(cpumask, attrs->cpumask);
+ return false;
+}
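As a concrete example of the calculation above: with NUMA affinity enabled,
if node 1's possible CPUs are 4-7 and @attrs->cpumask is 0-5 with CPU 4 or 5
online, @cpumask becomes 4-5 and %true is returned; if @attrs->cpumask were
0-3 instead, node 1 contributes no usable CPU and the default path copies
@attrs->cpumask verbatim, returning %false.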
+
+/* install @pwq into @wq's numa_pwq_tbl[] for @node and return the old pwq */
+static struct pool_workqueue *numa_pwq_tbl_install(struct workqueue_struct *wq,
+ int node,
+ struct pool_workqueue *pwq)
+{
+ struct pool_workqueue *old_pwq;
+
+ lockdep_assert_held(&wq->mutex);
+
+ /* link_pwq() can handle duplicate calls */
+ link_pwq(pwq);
+
+ old_pwq = rcu_access_pointer(wq->numa_pwq_tbl[node]);
+ rcu_assign_pointer(wq->numa_pwq_tbl[node], pwq);
+ return old_pwq;
+}
+
+/**
+ * apply_workqueue_attrs - apply new workqueue_attrs to an unbound workqueue
+ * @wq: the target workqueue
+ * @attrs: the workqueue_attrs to apply, allocated with alloc_workqueue_attrs()
+ *
+ * Apply @attrs to an unbound workqueue @wq. Unless disabled, on NUMA
+ * machines, this function maps a separate pwq to each NUMA node with
+ * possible CPUs in @attrs->cpumask so that work items are affine to the
+ * NUMA node they were issued on.  Older pwqs are released as in-flight work
+ * items finish. Note that a work item which repeatedly requeues itself
+ * back-to-back will stay on its current pwq.
+ *
+ * Performs GFP_KERNEL allocations. Returns 0 on success and -errno on
+ * failure.
+ */
+int apply_workqueue_attrs(struct workqueue_struct *wq,
+ const struct workqueue_attrs *attrs)
+{
+ struct workqueue_attrs *new_attrs, *tmp_attrs;
+ struct pool_workqueue **pwq_tbl, *dfl_pwq;
+ int node, ret;
+
+ /* only unbound workqueues can change attributes */
+ if (WARN_ON(!(wq->flags & WQ_UNBOUND)))
+ return -EINVAL;
+
+ /* creating multiple pwqs breaks ordering guarantee */
+ if (WARN_ON((wq->flags & __WQ_ORDERED) && !list_empty(&wq->pwqs)))
+ return -EINVAL;
+
+ pwq_tbl = kzalloc(wq_numa_tbl_len * sizeof(pwq_tbl[0]), GFP_KERNEL);
+ new_attrs = alloc_workqueue_attrs(GFP_KERNEL);
+ tmp_attrs = alloc_workqueue_attrs(GFP_KERNEL);
+ if (!pwq_tbl || !new_attrs || !tmp_attrs)
+ goto enomem;
+
+ /* make a copy of @attrs and sanitize it */
+ copy_workqueue_attrs(new_attrs, attrs);
+ cpumask_and(new_attrs->cpumask, new_attrs->cpumask, cpu_possible_mask);
+
+ /*
+ * We may create multiple pwqs with differing cpumasks. Make a
+ * copy of @new_attrs which will be modified and used to obtain
+ * pools.
+ */
+ copy_workqueue_attrs(tmp_attrs, new_attrs);
+
+ /*
+ * CPUs should stay stable across pwq creations and installations.
+ * Pin CPUs, determine the target cpumask for each node and create
+ * pwqs accordingly.
+ */
+ get_online_cpus();
+
+ mutex_lock(&wq_pool_mutex);
+
+ /*
+ * If something goes wrong during CPU up/down, we'll fall back to
+	 * the default pwq covering the whole @attrs->cpumask.  Always create
+ * it even if we don't use it immediately.
+ */
+ dfl_pwq = alloc_unbound_pwq(wq, new_attrs);
+ if (!dfl_pwq)
+ goto enomem_pwq;
+
+ for_each_node(node) {
+ if (wq_calc_node_cpumask(attrs, node, -1, tmp_attrs->cpumask)) {
+ pwq_tbl[node] = alloc_unbound_pwq(wq, tmp_attrs);
+ if (!pwq_tbl[node])
+ goto enomem_pwq;
+ } else {
+ dfl_pwq->refcnt++;
+ pwq_tbl[node] = dfl_pwq;
+ }
+ }
+
+ mutex_unlock(&wq_pool_mutex);
+
+ /* all pwqs have been created successfully, let's install'em */
+ mutex_lock(&wq->mutex);
+
+ copy_workqueue_attrs(wq->unbound_attrs, new_attrs);
+
+ /* save the previous pwq and install the new one */
+ for_each_node(node)
+ pwq_tbl[node] = numa_pwq_tbl_install(wq, node, pwq_tbl[node]);
+
+ /* @dfl_pwq might not have been used, ensure it's linked */
+ link_pwq(dfl_pwq);
+ swap(wq->dfl_pwq, dfl_pwq);
+
+ mutex_unlock(&wq->mutex);
+
+ /* put the old pwqs */
+ for_each_node(node)
+ put_pwq_unlocked(pwq_tbl[node]);
+ put_pwq_unlocked(dfl_pwq);
+
+ put_online_cpus();
+ ret = 0;
+ /* fall through */
+out_free:
+ free_workqueue_attrs(tmp_attrs);
+ free_workqueue_attrs(new_attrs);
+ kfree(pwq_tbl);
+ return ret;
+
+enomem_pwq:
+ free_unbound_pwq(dfl_pwq);
+ for_each_node(node)
+ if (pwq_tbl && pwq_tbl[node] != dfl_pwq)
+ free_unbound_pwq(pwq_tbl[node]);
+ mutex_unlock(&wq_pool_mutex);
+ put_online_cpus();
+enomem:
+ ret = -ENOMEM;
+ goto out_free;
+}
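A hedged usage sketch of the alloc/modify/apply/free cycle (example_set_nice
is a hypothetical helper, not a kernel API):

	static int example_set_nice(struct workqueue_struct *wq, int nice)
	{
		struct workqueue_attrs *attrs;
		int ret;

		attrs = alloc_workqueue_attrs(GFP_KERNEL);
		if (!attrs)
			return -ENOMEM;

		attrs->nice = nice;
		cpumask_copy(attrs->cpumask, cpu_possible_mask);
		ret = apply_workqueue_attrs(wq, attrs);

		free_workqueue_attrs(attrs);
		return ret;
	}

Because @attrs is copied internally, the caller frees it regardless of the
outcome — exactly the pattern the sysfs store handlers above follow.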
+
+/**
+ * wq_update_unbound_numa - update NUMA affinity of a wq for CPU hot[un]plug
+ * @wq: the target workqueue
+ * @cpu: the CPU coming up or going down
+ * @online: whether @cpu is coming up or going down
+ *
+ * This function is to be called from %CPU_DOWN_PREPARE, %CPU_ONLINE and
+ * %CPU_DOWN_FAILED. @cpu is being hot[un]plugged, update NUMA affinity of
+ * @wq accordingly.
+ *
+ * If NUMA affinity can't be adjusted due to memory allocation failure, it
+ * falls back to @wq->dfl_pwq which may not be optimal but is always
+ * correct.
+ *
+ * Note that when the last allowed CPU of a NUMA node goes offline for a
+ * workqueue with a cpumask spanning multiple nodes, the workers which were
+ * already executing the work items for the workqueue will lose their CPU
+ * affinity and may execute on any CPU. This is similar to how per-cpu
+ * workqueues behave on CPU_DOWN. If a workqueue user wants strict
+ * affinity, it's the user's responsibility to flush the work item from
+ * CPU_DOWN_PREPARE.
+ */
+static void wq_update_unbound_numa(struct workqueue_struct *wq, int cpu,
+ bool online)
+{
+ int node = cpu_to_node(cpu);
+ int cpu_off = online ? -1 : cpu;
+ struct pool_workqueue *old_pwq = NULL, *pwq;
+ struct workqueue_attrs *target_attrs;
+ cpumask_t *cpumask;
+
+ lockdep_assert_held(&wq_pool_mutex);
+
+ if (!wq_numa_enabled || !(wq->flags & WQ_UNBOUND))
+ return;
+
+ /*
+ * We don't wanna alloc/free wq_attrs for each wq for each CPU.
+ * Let's use a preallocated one. The following buf is protected by
+ * CPU hotplug exclusion.
+ */
+ target_attrs = wq_update_unbound_numa_attrs_buf;
+ cpumask = target_attrs->cpumask;
+
+ mutex_lock(&wq->mutex);
+ if (wq->unbound_attrs->no_numa)
+ goto out_unlock;
+
+ copy_workqueue_attrs(target_attrs, wq->unbound_attrs);
+ pwq = unbound_pwq_by_node(wq, node);
+
+ /*
+ * Let's determine what needs to be done. If the target cpumask is
+ * different from wq's, we need to compare it to @pwq's and create
+ * a new one if they don't match. If the target cpumask equals
+ * wq's, the default pwq should be used. If @pwq is already the
+ * default one, nothing to do; otherwise, install the default one.
+ */
+ if (wq_calc_node_cpumask(wq->unbound_attrs, node, cpu_off, cpumask)) {
+ if (cpumask_equal(cpumask, pwq->pool->attrs->cpumask))
+ goto out_unlock;
+ } else {
+ if (pwq == wq->dfl_pwq)
+ goto out_unlock;
+ else
+ goto use_dfl_pwq;
+ }
+
+ mutex_unlock(&wq->mutex);
+
+ /* create a new pwq */
+ pwq = alloc_unbound_pwq(wq, target_attrs);
+ if (!pwq) {
+ pr_warning("workqueue: allocation failed while updating NUMA affinity of \"%s\"\n",
+ wq->name);
+ goto out_unlock;
+ }
+
+ /*
+ * Install the new pwq. As this function is called only from CPU
+	 * hotplug callbacks and applying new attrs is wrapped with
+	 * get/put_online_cpus(), @wq->unbound_attrs couldn't have changed
+	 * in between.
+ */
+ mutex_lock(&wq->mutex);
+ old_pwq = numa_pwq_tbl_install(wq, node, pwq);
+ goto out_unlock;
+
+use_dfl_pwq:
+ spin_lock_irq(&wq->dfl_pwq->pool->lock);
+ get_pwq(wq->dfl_pwq);
+ spin_unlock_irq(&wq->dfl_pwq->pool->lock);
+ old_pwq = numa_pwq_tbl_install(wq, node, wq->dfl_pwq);
+out_unlock:
+ mutex_unlock(&wq->mutex);
+ put_pwq_unlocked(old_pwq);
+}
+
+static int alloc_and_link_pwqs(struct workqueue_struct *wq)
+{
+ bool highpri = wq->flags & WQ_HIGHPRI;
+ int cpu;
+
+ if (!(wq->flags & WQ_UNBOUND)) {
+ wq->cpu_pwqs = alloc_percpu(struct pool_workqueue);
+ if (!wq->cpu_pwqs)
+ return -ENOMEM;
+
+ for_each_possible_cpu(cpu) {
+ struct pool_workqueue *pwq =
+ per_cpu_ptr(wq->cpu_pwqs, cpu);
+ struct worker_pool *cpu_pools =
+ per_cpu(cpu_worker_pools, cpu);
+
+ init_pwq(pwq, wq, &cpu_pools[highpri]);
+
+ mutex_lock(&wq->mutex);
+ link_pwq(pwq);
+ mutex_unlock(&wq->mutex);
+ }
+ return 0;
+ } else {
+ return apply_workqueue_attrs(wq, unbound_std_wq_attrs[highpri]);
+ }
+}
+
+static int wq_clamp_max_active(int max_active, unsigned int flags,
+ const char *name)
+{
+ int lim = flags & WQ_UNBOUND ? WQ_UNBOUND_MAX_ACTIVE : WQ_MAX_ACTIVE;
+
+ if (max_active < 1 || max_active > lim)
+ pr_warn("workqueue: max_active %d requested for %s is out of range, clamping between %d and %d\n",
+ max_active, name, 1, lim);
+
+ return clamp_val(max_active, 1, lim);
+}
+
+struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
+ unsigned int flags,
+ int max_active,
+ struct lock_class_key *key,
+ const char *lock_name, ...)
+{
+ size_t tbl_size = 0;
+ va_list args;
+ struct workqueue_struct *wq;
+ struct pool_workqueue *pwq;
+
+ /* allocate wq and format name */
+ if (flags & WQ_UNBOUND)
+ tbl_size = wq_numa_tbl_len * sizeof(wq->numa_pwq_tbl[0]);
+
+ wq = kzalloc(sizeof(*wq) + tbl_size, GFP_KERNEL);
+ if (!wq)
+ return NULL;
+
+ if (flags & WQ_UNBOUND) {
+ wq->unbound_attrs = alloc_workqueue_attrs(GFP_KERNEL);
+ if (!wq->unbound_attrs)
+ goto err_free_wq;
+ }
+
+ va_start(args, lock_name);
+ vsnprintf(wq->name, sizeof(wq->name), fmt, args);
+ va_end(args);
+
+ max_active = max_active ?: WQ_DFL_ACTIVE;
+ max_active = wq_clamp_max_active(max_active, flags, wq->name);
+
+ /* init wq */
+ wq->flags = flags;
+ wq->saved_max_active = max_active;
+ mutex_init(&wq->mutex);
+ atomic_set(&wq->nr_pwqs_to_flush, 0);
+ INIT_LIST_HEAD(&wq->pwqs);
+ INIT_LIST_HEAD(&wq->flusher_queue);
+ INIT_LIST_HEAD(&wq->flusher_overflow);
+ INIT_LIST_HEAD(&wq->maydays);
- if (!(wq->flags & WQ_FREEZABLE) ||
- !(pool->flags & POOL_FREEZING))
- pwq_set_max_active(pwq, max_active);
+ lockdep_init_map(&wq->lockdep_map, lock_name, key, 0);
+ INIT_LIST_HEAD(&wq->list);
- spin_unlock_irq(&pool->lock);
+ if (alloc_and_link_pwqs(wq) < 0)
+ goto err_free_wq;
+
+ /*
+ * Workqueues which may be used during memory reclaim should
+ * have a rescuer to guarantee forward progress.
+ */
+ if (flags & WQ_MEM_RECLAIM) {
+ struct worker *rescuer;
+
+ rescuer = alloc_worker();
+ if (!rescuer)
+ goto err_destroy;
+
+ rescuer->rescue_wq = wq;
+ rescuer->task = kthread_create(rescuer_thread, rescuer, "%s",
+ wq->name);
+ if (IS_ERR(rescuer->task)) {
+ kfree(rescuer);
+ goto err_destroy;
+ }
+
+ wq->rescuer = rescuer;
+ rescuer->task->flags |= PF_NO_SETAFFINITY;
+ wake_up_process(rescuer->task);
+ }
+
+ if ((wq->flags & WQ_SYSFS) && workqueue_sysfs_register(wq))
+ goto err_destroy;
+
+ /*
+ * wq_pool_mutex protects global freeze state and workqueues list.
+ * Grab it, adjust max_active and add the new @wq to workqueues
+ * list.
+ */
+ mutex_lock(&wq_pool_mutex);
+
+ mutex_lock(&wq->mutex);
+ for_each_pwq(pwq, wq)
+ pwq_adjust_max_active(pwq);
+ mutex_unlock(&wq->mutex);
+
+ list_add(&wq->list, &workqueues);
+
+ mutex_unlock(&wq_pool_mutex);
+
+ return wq;
+
+err_free_wq:
+ free_workqueue_attrs(wq->unbound_attrs);
+ kfree(wq);
+ return NULL;
+err_destroy:
+ destroy_workqueue(wq);
+ return NULL;
+}
+EXPORT_SYMBOL_GPL(__alloc_workqueue_key);
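End to end, the allocation path above is driven through the alloc_workqueue()
macro. A hedged sketch of a reclaim-safe unbound workqueue (example names
only):

	#include <linux/workqueue.h>

	static struct workqueue_struct *example_wq;
	static struct work_struct example_work;

	static void example_fn(struct work_struct *work)
	{
		/* backed by a rescuer when memory is tight */
	}

	static int example_setup(void)
	{
		example_wq = alloc_workqueue("example",
					     WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
		if (!example_wq)
			return -ENOMEM;
		INIT_WORK(&example_work, example_fn);
		queue_work(example_wq, &example_work);
		return 0;
	}

A max_active of 0 selects WQ_DFL_ACTIVE, as the clamping above shows.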
+
+/**
+ * destroy_workqueue - safely terminate a workqueue
+ * @wq: target workqueue
+ *
+ * Safely destroy a workqueue. All work currently pending will be done first.
+ */
+void destroy_workqueue(struct workqueue_struct *wq)
+{
+ struct pool_workqueue *pwq;
+ int node;
+
+ /* drain it before proceeding with destruction */
+ drain_workqueue(wq);
+
+ /* sanity checks */
+ mutex_lock(&wq->mutex);
+ for_each_pwq(pwq, wq) {
+ int i;
+
+ for (i = 0; i < WORK_NR_COLORS; i++) {
+ if (WARN_ON(pwq->nr_in_flight[i])) {
+ mutex_unlock(&wq->mutex);
+ return;
+ }
+ }
+
+ if (WARN_ON(pwq->refcnt > 1) ||
+ WARN_ON(pwq->nr_active) ||
+ WARN_ON(!list_empty(&pwq->delayed_works))) {
+ mutex_unlock(&wq->mutex);
+ return;
+ }
+ }
+ mutex_unlock(&wq->mutex);
+
+ /*
+ * wq list is used to freeze wq, remove from list after
+ * flushing is complete in case freeze races us.
+ */
+ mutex_lock(&wq_pool_mutex);
+ list_del_init(&wq->list);
+ mutex_unlock(&wq_pool_mutex);
+
+ workqueue_sysfs_unregister(wq);
+
+ if (wq->rescuer) {
+ kthread_stop(wq->rescuer->task);
+ kfree(wq->rescuer);
+ wq->rescuer = NULL;
+ }
+
+ if (!(wq->flags & WQ_UNBOUND)) {
+ /*
+ * The base ref is never dropped on per-cpu pwqs. Directly
+ * free the pwqs and wq.
+ */
+ free_percpu(wq->cpu_pwqs);
+ kfree(wq);
+ } else {
+ /*
+ * We're the sole accessor of @wq at this point. Directly
+ * access numa_pwq_tbl[] and dfl_pwq to put the base refs.
+ * @wq will be freed when the last pwq is released.
+ */
+ for_each_node(node) {
+ pwq = rcu_access_pointer(wq->numa_pwq_tbl[node]);
+ RCU_INIT_POINTER(wq->numa_pwq_tbl[node], NULL);
+ put_pwq_unlocked(pwq);
+ }
+
+ /*
+ * Put dfl_pwq. @wq may be freed any time after dfl_pwq is
+ * put. Don't access it afterwards.
+ */
+ pwq = wq->dfl_pwq;
+ wq->dfl_pwq = NULL;
+ put_pwq_unlocked(pwq);
}
+}
+EXPORT_SYMBOL_GPL(destroy_workqueue);
+
+/**
+ * workqueue_set_max_active - adjust max_active of a workqueue
+ * @wq: target workqueue
+ * @max_active: new max_active value.
+ *
+ * Set max_active of @wq to @max_active.
+ *
+ * CONTEXT:
+ * Don't call from IRQ context.
+ */
+void workqueue_set_max_active(struct workqueue_struct *wq, int max_active)
+{
+ struct pool_workqueue *pwq;
+
+ /* disallow meddling with max_active for ordered workqueues */
+ if (WARN_ON(wq->flags & __WQ_ORDERED))
+ return;
+
+ max_active = wq_clamp_max_active(max_active, wq->flags, wq->name);
+
+ mutex_lock(&wq->mutex);
+
+ wq->saved_max_active = max_active;
+
+ for_each_pwq(pwq, wq)
+ pwq_adjust_max_active(pwq);
- spin_unlock(&workqueue_lock);
+ mutex_unlock(&wq->mutex);
}
EXPORT_SYMBOL_GPL(workqueue_set_max_active);
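A hedged one-liner for callers (example_wq as in the earlier sketches); note
that this is rejected with a warning for __WQ_ORDERED workqueues:

	workqueue_set_max_active(example_wq, 4);  /* at most 4 in flight per pwq */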
+/**
+ * current_is_workqueue_rescuer - is %current workqueue rescuer?
+ *
+ * Determine whether %current is a workqueue rescuer. Can be used from
+ * work functions to determine whether it's being run off the rescuer task.
+ */
+bool current_is_workqueue_rescuer(void)
+{
+ struct worker *worker = current_wq_worker();
+
+ return worker && worker->rescue_wq;
+}
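A hedged sketch of the intended use from a work function (example names
only):

	static void example_work_fn(struct work_struct *work)
	{
		/* be frugal when running off the rescuer under memory pressure */
		gfp_t gfp = current_is_workqueue_rescuer() ? GFP_NOWAIT
							   : GFP_KERNEL;
		/* ... allocate with @gfp ... */
	}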
+
/**
* workqueue_congested - test whether a workqueue is congested
* @cpu: CPU in question
* RETURNS:
* %true if congested, %false otherwise.
*/
-bool workqueue_congested(unsigned int cpu, struct workqueue_struct *wq)
+bool workqueue_congested(int cpu, struct workqueue_struct *wq)
{
- struct pool_workqueue *pwq = get_pwq(cpu, wq);
+ struct pool_workqueue *pwq;
+ bool ret;
+
+ rcu_read_lock_sched();
+
+ if (!(wq->flags & WQ_UNBOUND))
+ pwq = per_cpu_ptr(wq->cpu_pwqs, cpu);
+ else
+ pwq = unbound_pwq_by_node(wq, cpu_to_node(cpu));
- return !list_empty(&pwq->delayed_works);
+ ret = !list_empty(&pwq->delayed_works);
+ rcu_read_unlock_sched();
+
+ return ret;
}
EXPORT_SYMBOL_GPL(workqueue_congested);
*/
unsigned int work_busy(struct work_struct *work)
{
- struct worker_pool *pool = get_work_pool(work);
+ struct worker_pool *pool;
unsigned long flags;
unsigned int ret = 0;
if (work_pending(work))
ret |= WORK_BUSY_PENDING;
+ local_irq_save(flags);
+ pool = get_work_pool(work);
if (pool) {
- spin_lock_irqsave(&pool->lock, flags);
+ spin_lock(&pool->lock);
if (find_worker_executing_work(pool, work))
ret |= WORK_BUSY_RUNNING;
- spin_unlock_irqrestore(&pool->lock, flags);
+ spin_unlock(&pool->lock);
}
+ local_irq_restore(flags);
return ret;
}
int cpu = smp_processor_id();
struct worker_pool *pool;
struct worker *worker;
- int i;
+ int wi;
- for_each_std_worker_pool(pool, cpu) {
- BUG_ON(cpu != smp_processor_id());
+ for_each_cpu_worker_pool(pool, cpu) {
+ WARN_ON_ONCE(cpu != smp_processor_id());
- mutex_lock(&pool->assoc_mutex);
+ mutex_lock(&pool->manager_mutex);
spin_lock_irq(&pool->lock);
/*
- * We've claimed all manager positions. Make all workers
+ * We've blocked all manager operations. Make all workers
* unbound and set DISASSOCIATED. Before this, all workers
* except for the ones which are still executing works from
* before the last CPU down must be on the cpu. After
* this, they may become diasporas.
*/
- list_for_each_entry(worker, &pool->idle_list, entry)
- worker->flags |= WORKER_UNBOUND;
-
- for_each_busy_worker(worker, i, pool)
+ for_each_pool_worker(worker, wi, pool)
worker->flags |= WORKER_UNBOUND;
pool->flags |= POOL_DISASSOCIATED;
spin_unlock_irq(&pool->lock);
- mutex_unlock(&pool->assoc_mutex);
+ mutex_unlock(&pool->manager_mutex);
+
+ /*
+ * Call schedule() so that we cross rq->lock and thus can
+ * guarantee sched callbacks see the %WORKER_UNBOUND flag.
+ * This is necessary as scheduler callbacks may be invoked
+ * from other cpus.
+ */
+ schedule();
+
+ /*
+ * Sched callbacks are disabled now. Zap nr_running.
+ * After this, nr_running stays zero and need_more_worker()
+ * and keep_working() are always true as long as the
+ * worklist is not empty. This pool now behaves as an
+ * unbound (in terms of concurrency management) pool which
+		 * is served by workers tied to the pool.
+ */
+ atomic_set(&pool->nr_running, 0);
+
+ /*
+ * With concurrency management just turned off, a busy
+ * worker blocking could lead to lengthy stalls. Kick off
+ * unbound chain execution of currently pending work items.
+ */
+ spin_lock_irq(&pool->lock);
+ wake_up_worker(pool);
+ spin_unlock_irq(&pool->lock);
}
+}
- /*
- * Call schedule() so that we cross rq->lock and thus can guarantee
- * sched callbacks see the %WORKER_UNBOUND flag. This is necessary
- * as scheduler callbacks may be invoked from other cpus.
- */
- schedule();
+/**
+ * rebind_workers - rebind all workers of a pool to the associated CPU
+ * @pool: pool of interest
+ *
+ * @pool->cpu is coming online. Rebind all workers to the CPU.
+ */
+static void rebind_workers(struct worker_pool *pool)
+{
+ struct worker *worker;
+ int wi;
+
+ lockdep_assert_held(&pool->manager_mutex);
/*
- * Sched callbacks are disabled now. Zap nr_running. After this,
- * nr_running stays zero and need_more_worker() and keep_working()
- * are always true as long as the worklist is not empty. Pools on
- * @cpu now behave as unbound (in terms of concurrency management)
- * pools which are served by workers tied to the CPU.
- *
- * On return from this function, the current worker would trigger
- * unbound chain execution of pending work items if other workers
- * didn't already.
+ * Restore CPU affinity of all workers. As all idle workers should
+ * be on the run-queue of the associated CPU before any local
+	 * wake-ups for concurrency management happen, restore CPU affinity
+ * of all workers first and then clear UNBOUND. As we're called
+ * from CPU_ONLINE, the following shouldn't fail.
*/
- for_each_std_worker_pool(pool, cpu)
- atomic_set(&pool->nr_running, 0);
+ for_each_pool_worker(worker, wi, pool)
+ WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
+ pool->attrs->cpumask) < 0);
+
+ spin_lock_irq(&pool->lock);
+
+ for_each_pool_worker(worker, wi, pool) {
+ unsigned int worker_flags = worker->flags;
+
+ /*
+ * A bound idle worker should actually be on the runqueue
+ * of the associated CPU for local wake-ups targeting it to
+ * work. Kick all idle workers so that they migrate to the
+ * associated CPU. Doing this in the same loop as
+ * replacing UNBOUND with REBOUND is safe as no worker will
+ * be bound before @pool->lock is released.
+ */
+ if (worker_flags & WORKER_IDLE)
+ wake_up_process(worker->task);
+
+ /*
+ * We want to clear UNBOUND but can't directly call
+ * worker_clr_flags() or adjust nr_running. Atomically
+ * replace UNBOUND with another NOT_RUNNING flag REBOUND.
+ * @worker will clear REBOUND using worker_clr_flags() when
+ * it initiates the next execution cycle thus restoring
+ * concurrency management. Note that when or whether
+ * @worker clears REBOUND doesn't affect correctness.
+ *
+ * ACCESS_ONCE() is necessary because @worker->flags may be
+ * tested without holding any lock in
+ * wq_worker_waking_up(). Without it, NOT_RUNNING test may
+ * fail incorrectly leading to premature concurrency
+ * management operations.
+ */
+ WARN_ON_ONCE(!(worker_flags & WORKER_UNBOUND));
+ worker_flags |= WORKER_REBOUND;
+ worker_flags &= ~WORKER_UNBOUND;
+ ACCESS_ONCE(worker->flags) = worker_flags;
+ }
+
+ spin_unlock_irq(&pool->lock);
+}
+
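The ACCESS_ONCE() store above pairs with a lockless read on the scheduler wake-up path; a simplified sketch of that reader (modeled on wq_worker_waking_up(); WORKER_NOT_RUNNING covers both UNBOUND and REBOUND):

	void wq_worker_waking_up(struct task_struct *task, int cpu)
	{
		struct worker *worker = kthread_data(task);

		/* lockless flag test -- hence ACCESS_ONCE() on the store side */
		if (!(worker->flags & WORKER_NOT_RUNNING))
			atomic_inc(&worker->pool->nr_running);
	}
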
+/**
+ * restore_unbound_workers_cpumask - restore cpumask of unbound workers
+ * @pool: unbound pool of interest
+ * @cpu: the CPU which is coming up
+ *
+ * An unbound pool may end up with a cpumask which doesn't have any online
+ * CPUs. When a worker of such a pool gets scheduled, the scheduler resets
+ * its cpus_allowed. If @cpu is in @pool's cpumask which didn't have any
+ * online CPU before, cpus_allowed of all its workers should be restored.
+ */
+static void restore_unbound_workers_cpumask(struct worker_pool *pool, int cpu)
+{
+ static cpumask_t cpumask;
+ struct worker *worker;
+ int wi;
+
+ lockdep_assert_held(&pool->manager_mutex);
+
+ /* is @cpu allowed for @pool? */
+ if (!cpumask_test_cpu(cpu, pool->attrs->cpumask))
+ return;
+
+ /* is @cpu the only online CPU? */
+ cpumask_and(&cpumask, pool->attrs->cpumask, cpu_online_mask);
+ if (cpumask_weight(&cpumask) != 1)
+ return;
+
+ /* as we're called from CPU_ONLINE, the following shouldn't fail */
+ for_each_pool_worker(worker, wi, pool)
+ WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
+ pool->attrs->cpumask) < 0);
}
/*
unsigned long action,
void *hcpu)
{
- unsigned int cpu = (unsigned long)hcpu;
+ int cpu = (unsigned long)hcpu;
struct worker_pool *pool;
+ struct workqueue_struct *wq;
+ int pi;
switch (action & ~CPU_TASKS_FROZEN) {
case CPU_UP_PREPARE:
- for_each_std_worker_pool(pool, cpu) {
- struct worker *worker;
-
+ for_each_cpu_worker_pool(pool, cpu) {
if (pool->nr_workers)
continue;
-
- worker = create_worker(pool);
- if (!worker)
+ if (create_and_start_worker(pool) < 0)
return NOTIFY_BAD;
-
- spin_lock_irq(&pool->lock);
- start_worker(worker);
- spin_unlock_irq(&pool->lock);
}
break;
case CPU_DOWN_FAILED:
case CPU_ONLINE:
- for_each_std_worker_pool(pool, cpu) {
- mutex_lock(&pool->assoc_mutex);
- spin_lock_irq(&pool->lock);
+ mutex_lock(&wq_pool_mutex);
- pool->flags &= ~POOL_DISASSOCIATED;
- rebind_workers(pool);
+ for_each_pool(pool, pi) {
+ mutex_lock(&pool->manager_mutex);
+
+ if (pool->cpu == cpu) {
+ spin_lock_irq(&pool->lock);
+ pool->flags &= ~POOL_DISASSOCIATED;
+ spin_unlock_irq(&pool->lock);
+
+ rebind_workers(pool);
+ } else if (pool->cpu < 0) {
+ restore_unbound_workers_cpumask(pool, cpu);
+ }
- spin_unlock_irq(&pool->lock);
- mutex_unlock(&pool->assoc_mutex);
+ mutex_unlock(&pool->manager_mutex);
}
+
+ /* update NUMA affinity of unbound workqueues */
+ list_for_each_entry(wq, &workqueues, list)
+ wq_update_unbound_numa(wq, cpu, true);
+
+ mutex_unlock(&wq_pool_mutex);
break;
}
return NOTIFY_OK;
unsigned long action,
void *hcpu)
{
- unsigned int cpu = (unsigned long)hcpu;
+ int cpu = (unsigned long)hcpu;
struct work_struct unbind_work;
+ struct workqueue_struct *wq;
switch (action & ~CPU_TASKS_FROZEN) {
case CPU_DOWN_PREPARE:
- /* unbinding should happen on the local CPU */
+ /* unbinding per-cpu workers should happen on the local CPU */
INIT_WORK_ONSTACK(&unbind_work, wq_unbind_fn);
queue_work_on(cpu, system_highpri_wq, &unbind_work);
+
+ /* update NUMA affinity of unbound workqueues */
+ mutex_lock(&wq_pool_mutex);
+ list_for_each_entry(wq, &workqueues, list)
+ wq_update_unbound_numa(wq, cpu, false);
+ mutex_unlock(&wq_pool_mutex);
+
+ /* wait for per-cpu unbinding to finish */
flush_work(&unbind_work);
break;
}
* It is up to the caller to ensure that the cpu doesn't go offline.
* The caller must not hold any locks which would prevent @fn from completing.
*/
-long work_on_cpu(unsigned int cpu, long (*fn)(void *), void *arg)
+long work_on_cpu(int cpu, long (*fn)(void *), void *arg)
{
struct work_for_cpu wfc = { .fn = fn, .arg = arg };
* freeze_workqueues_begin - begin freezing workqueues
*
* Start freezing workqueues. After this function returns, all freezable
- * workqueues will queue new works to their frozen_works list instead of
+ * workqueues will queue new works to their delayed_works list instead of
* pool->worklist.
*
* CONTEXT:
- * Grabs and releases workqueue_lock and pool->lock's.
+ * Grabs and releases wq_pool_mutex, wq->mutex and pool->lock's.
*/
void freeze_workqueues_begin(void)
{
- unsigned int cpu;
+ struct worker_pool *pool;
+ struct workqueue_struct *wq;
+ struct pool_workqueue *pwq;
+ int pi;
- spin_lock(&workqueue_lock);
+ mutex_lock(&wq_pool_mutex);
- BUG_ON(workqueue_freezing);
+ WARN_ON_ONCE(workqueue_freezing);
workqueue_freezing = true;
- for_each_wq_cpu(cpu) {
- struct worker_pool *pool;
- struct workqueue_struct *wq;
-
- for_each_std_worker_pool(pool, cpu) {
- spin_lock_irq(&pool->lock);
-
- WARN_ON_ONCE(pool->flags & POOL_FREEZING);
- pool->flags |= POOL_FREEZING;
-
- list_for_each_entry(wq, &workqueues, list) {
- struct pool_workqueue *pwq = get_pwq(cpu, wq);
-
- if (pwq && pwq->pool == pool &&
- (wq->flags & WQ_FREEZABLE))
- pwq->max_active = 0;
- }
+ /* set FREEZING */
+ for_each_pool(pool, pi) {
+ spin_lock_irq(&pool->lock);
+ WARN_ON_ONCE(pool->flags & POOL_FREEZING);
+ pool->flags |= POOL_FREEZING;
+ spin_unlock_irq(&pool->lock);
+ }
- spin_unlock_irq(&pool->lock);
- }
+ list_for_each_entry(wq, &workqueues, list) {
+ mutex_lock(&wq->mutex);
+ for_each_pwq(pwq, wq)
+ pwq_adjust_max_active(pwq);
+ mutex_unlock(&wq->mutex);
}
- spin_unlock(&workqueue_lock);
+ mutex_unlock(&wq_pool_mutex);
}
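
pwq_adjust_max_active() is what actually freezes or thaws each pool_workqueue; a simplified sketch of its core decision (the real helper also re-activates delayed works on thaw):

	if (!(pwq->wq->flags & WQ_FREEZABLE) ||
	    !(pwq->pool->flags & POOL_FREEZING))
		pwq->max_active = pwq->wq->saved_max_active;	/* normal / thawed */
	else
		pwq->max_active = 0;	/* new work parks on pwq->delayed_works */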
/**
* between freeze_workqueues_begin() and thaw_workqueues().
*
* CONTEXT:
- * Grabs and releases workqueue_lock.
+ * Grabs and releases wq_pool_mutex.
*
* RETURNS:
* %true if some freezable workqueues are still busy. %false if freezing
*/
bool freeze_workqueues_busy(void)
{
- unsigned int cpu;
bool busy = false;
+ struct workqueue_struct *wq;
+ struct pool_workqueue *pwq;
- spin_lock(&workqueue_lock);
+ mutex_lock(&wq_pool_mutex);
- BUG_ON(!workqueue_freezing);
+ WARN_ON_ONCE(!workqueue_freezing);
- for_each_wq_cpu(cpu) {
- struct workqueue_struct *wq;
+ list_for_each_entry(wq, &workqueues, list) {
+ if (!(wq->flags & WQ_FREEZABLE))
+ continue;
/*
* nr_active is monotonically decreasing. It's safe
* to peek without lock.
*/
- list_for_each_entry(wq, &workqueues, list) {
- struct pool_workqueue *pwq = get_pwq(cpu, wq);
-
- if (!pwq || !(wq->flags & WQ_FREEZABLE))
- continue;
-
- BUG_ON(pwq->nr_active < 0);
+ rcu_read_lock_sched();
+ for_each_pwq(pwq, wq) {
+ WARN_ON_ONCE(pwq->nr_active < 0);
if (pwq->nr_active) {
busy = true;
+ rcu_read_unlock_sched();
goto out_unlock;
}
}
+ rcu_read_unlock_sched();
}
out_unlock:
- spin_unlock(&workqueue_lock);
+ mutex_unlock(&wq_pool_mutex);
return busy;
}
* frozen works are transferred to their respective pool worklists.
*
* CONTEXT:
- * Grabs and releases workqueue_lock and pool->lock's.
+ * Grabs and releases wq_pool_mutex, wq->mutex and pool->lock's.
*/
void thaw_workqueues(void)
{
- unsigned int cpu;
+ struct workqueue_struct *wq;
+ struct pool_workqueue *pwq;
+ struct worker_pool *pool;
+ int pi;
- spin_lock(&workqueue_lock);
+ mutex_lock(&wq_pool_mutex);
if (!workqueue_freezing)
goto out_unlock;
- for_each_wq_cpu(cpu) {
- struct worker_pool *pool;
- struct workqueue_struct *wq;
+ /* clear FREEZING */
+ for_each_pool(pool, pi) {
+ spin_lock_irq(&pool->lock);
+ WARN_ON_ONCE(!(pool->flags & POOL_FREEZING));
+ pool->flags &= ~POOL_FREEZING;
+ spin_unlock_irq(&pool->lock);
+ }
- for_each_std_worker_pool(pool, cpu) {
- spin_lock_irq(&pool->lock);
+ /* restore max_active and repopulate worklist */
+ list_for_each_entry(wq, &workqueues, list) {
+ mutex_lock(&wq->mutex);
+ for_each_pwq(pwq, wq)
+ pwq_adjust_max_active(pwq);
+ mutex_unlock(&wq->mutex);
+ }
- WARN_ON_ONCE(!(pool->flags & POOL_FREEZING));
- pool->flags &= ~POOL_FREEZING;
+ workqueue_freezing = false;
+out_unlock:
+ mutex_unlock(&wq_pool_mutex);
+}
+#endif /* CONFIG_FREEZER */
- list_for_each_entry(wq, &workqueues, list) {
- struct pool_workqueue *pwq = get_pwq(cpu, wq);
+static void __init wq_numa_init(void)
+{
+ cpumask_var_t *tbl;
+ int node, cpu;
- if (!pwq || pwq->pool != pool ||
- !(wq->flags & WQ_FREEZABLE))
- continue;
+ /* determine NUMA pwq table len - highest node id + 1 */
+ for_each_node(node)
+ wq_numa_tbl_len = max(wq_numa_tbl_len, node + 1);
- /* restore max_active and repopulate worklist */
- pwq_set_max_active(pwq, wq->saved_max_active);
- }
+ if (num_possible_nodes() <= 1)
+ return;
- wake_up_worker(pool);
+ if (wq_disable_numa) {
+ pr_info("workqueue: NUMA affinity support disabled\n");
+ return;
+ }
- spin_unlock_irq(&pool->lock);
+ wq_update_unbound_numa_attrs_buf = alloc_workqueue_attrs(GFP_KERNEL);
+ BUG_ON(!wq_update_unbound_numa_attrs_buf);
+
+ /*
+ * We want masks of possible CPUs of each node, which aren't readily
+ * available. Build them from cpu_to_node(), which should have been
+ * fully initialized by now.
+ */
+ tbl = kzalloc(wq_numa_tbl_len * sizeof(tbl[0]), GFP_KERNEL);
+ BUG_ON(!tbl);
+
+ for_each_node(node)
+ BUG_ON(!alloc_cpumask_var_node(&tbl[node], GFP_KERNEL, node));
+
+ for_each_possible_cpu(cpu) {
+ node = cpu_to_node(cpu);
+ if (WARN_ON(node == NUMA_NO_NODE)) {
+ pr_warn("workqueue: NUMA node mapping not available for cpu%d, disabling NUMA support\n", cpu);
+ /* happens iff arch is bonkers, let's just proceed */
+ return;
}
+ cpumask_set_cpu(cpu, tbl[node]);
}
- workqueue_freezing = false;
-out_unlock:
- spin_unlock(&workqueue_lock);
+ wq_numa_possible_cpumask = tbl;
+ wq_numa_enabled = true;
}
-#endif /* CONFIG_FREEZER */
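
A hypothetical consumer of the per-node table built above (wq_node_cpus() is illustrative only, not part of the patch):

	/* hypothetical helper: which CPUs may ever serve @node's unbound pwqs */
	static const struct cpumask *wq_node_cpus(int node)
	{
		return wq_numa_enabled ? wq_numa_possible_cpumask[node]
				       : cpu_possible_mask;
	}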
static int __init init_workqueues(void)
{
- unsigned int cpu;
+ int std_nice[NR_STD_WORKER_POOLS] = { 0, HIGHPRI_NICE_LEVEL };
+ int i, cpu;
/* make sure we have enough bits for OFFQ pool ID */
BUILD_BUG_ON((1LU << (BITS_PER_LONG - WORK_OFFQ_POOL_SHIFT)) <
WORK_CPU_END * NR_STD_WORKER_POOLS);
+ WARN_ON(__alignof__(struct pool_workqueue) < __alignof__(long long));
+
+ pwq_cache = KMEM_CACHE(pool_workqueue, SLAB_PANIC);
+
cpu_notifier(workqueue_cpu_up_callback, CPU_PRI_WORKQUEUE_UP);
hotcpu_notifier(workqueue_cpu_down_callback, CPU_PRI_WORKQUEUE_DOWN);
+ wq_numa_init();
+
/* initialize CPU pools */
- for_each_wq_cpu(cpu) {
+ for_each_possible_cpu(cpu) {
struct worker_pool *pool;
- for_each_std_worker_pool(pool, cpu) {
- spin_lock_init(&pool->lock);
+ i = 0;
+ for_each_cpu_worker_pool(pool, cpu) {
+ BUG_ON(init_worker_pool(pool));
pool->cpu = cpu;
- pool->flags |= POOL_DISASSOCIATED;
- INIT_LIST_HEAD(&pool->worklist);
- INIT_LIST_HEAD(&pool->idle_list);
- hash_init(pool->busy_hash);
-
- init_timer_deferrable(&pool->idle_timer);
- pool->idle_timer.function = idle_worker_timeout;
- pool->idle_timer.data = (unsigned long)pool;
-
- setup_timer(&pool->mayday_timer, pool_mayday_timeout,
- (unsigned long)pool);
-
- mutex_init(&pool->assoc_mutex);
- ida_init(&pool->worker_ida);
+ cpumask_copy(pool->attrs->cpumask, cpumask_of(cpu));
+ pool->attrs->nice = std_nice[i++];
+ pool->node = cpu_to_node(cpu);
/* alloc pool ID */
+ mutex_lock(&wq_pool_mutex);
BUG_ON(worker_pool_assign_id(pool));
+ mutex_unlock(&wq_pool_mutex);
}
}
/* create the initial worker */
- for_each_online_wq_cpu(cpu) {
+ for_each_online_cpu(cpu) {
struct worker_pool *pool;
- for_each_std_worker_pool(pool, cpu) {
- struct worker *worker;
+ for_each_cpu_worker_pool(pool, cpu) {
+ pool->flags &= ~POOL_DISASSOCIATED;
+ BUG_ON(create_and_start_worker(pool) < 0);
+ }
+ }
- if (cpu != WORK_CPU_UNBOUND)
- pool->flags &= ~POOL_DISASSOCIATED;
+ /* create default unbound wq attrs */
+ for (i = 0; i < NR_STD_WORKER_POOLS; i++) {
+ struct workqueue_attrs *attrs;
- worker = create_worker(pool);
- BUG_ON(!worker);
- spin_lock_irq(&pool->lock);
- start_worker(worker);
- spin_unlock_irq(&pool->lock);
- }
+ BUG_ON(!(attrs = alloc_workqueue_attrs(GFP_KERNEL)));
+ attrs->nice = std_nice[i];
+ unbound_std_wq_attrs[i] = attrs;
}
system_wq = alloc_workqueue("events", 0, 0);
struct list_head scheduled; /* L: scheduled works */
struct task_struct *task; /* I: worker task */
struct worker_pool *pool; /* I: the associated pool */
+ /* L: for rescuers */
/* 64 bytes boundary on 64bit, 32 on 32bit */
unsigned long last_active; /* L: last active timestamp */
unsigned int flags; /* X: flags */
int id; /* I: worker id */
- /* for rebinding worker to CPU */
- struct work_struct rebind_work; /* L: for busy worker */
-
/* used only by rescuers to point to the target workqueue */
struct workqueue_struct *rescue_wq; /* I: the workqueue to rescue */
};
* Scheduler hooks for concurrency managed workqueue. Only to be used from
* sched.c and workqueue.c.
*/
-void wq_worker_waking_up(struct task_struct *task, unsigned int cpu);
-struct task_struct *wq_worker_sleeping(struct task_struct *task,
- unsigned int cpu);
+void wq_worker_waking_up(struct task_struct *task, int cpu);
+struct task_struct *wq_worker_sleeping(struct task_struct *task, int cpu);
#endif /* _KERNEL_WORKQUEUE_INTERNAL_H */
*/
#include <linux/kernel.h>
+#include <linux/printk.h>
#include <linux/spinlock.h>
#include <linux/tty.h>
#include <linux/wait.h>
wake_up_klogd();
}
}
-
-
entry = bucket_find_exact(bucket, ref);
if (!entry) {
+ /* must drop lock before calling dma_mapping_error */
+ put_hash_bucket(bucket, &flags);
+
if (dma_mapping_error(ref->dev, ref->dev_addr)) {
err_printk(ref->dev, NULL,
- "DMA-API: device driver tries "
- "to free an invalid DMA memory address\n");
- return;
+ "DMA-API: device driver tries to free an "
+ "invalid DMA memory address\n");
+ } else {
+ err_printk(ref->dev, NULL,
+ "DMA-API: device driver tries to free DMA "
+ "memory it has not allocated [device "
+ "address=0x%016llx] [size=%llu bytes]\n",
+ ref->dev_addr, ref->size);
}
- err_printk(ref->dev, NULL, "DMA-API: device driver tries "
- "to free DMA memory it has not allocated "
- "[device address=0x%016llx] [size=%llu bytes]\n",
- ref->dev_addr, ref->size);
- goto out;
+ return;
}
if (ref->size != entry->size) {
hash_bucket_del(entry);
dma_entry_free(entry);
-out:
put_hash_bucket(bucket, &flags);
}
ref.dev = dev;
ref.dev_addr = dma_addr;
bucket = get_hash_bucket(&ref, &flags);
- entry = bucket_find_exact(bucket, &ref);
- if (!entry)
- goto out;
+ list_for_each_entry(entry, &bucket->list, list) {
+ if (!exact_match(&ref, entry))
+ continue;
+
+ /*
+ * The same physical address can be mapped multiple
+ * times. Without a hardware IOMMU this results in the
+ * same device addresses being put into the dma-debug
+ * hash multiple times too. This can result in false
+ * positives being reported. Therefore we implement a
+ * best-fit algorithm here which updates the first entry
+ * from the hash which fits the reference value and is
+ * not currently listed as being checked.
+ */
+ if (entry->map_err_type == MAP_ERR_NOT_CHECKED) {
+ entry->map_err_type = MAP_ERR_CHECKED;
+ break;
+ }
+ }
- entry->map_err_type = MAP_ERR_CHECKED;
-out:
put_hash_bucket(bucket, &flags);
}
EXPORT_SYMBOL(debug_dma_mapping_error);
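
For reference, the driver-side idiom that feeds this path: a mapping must be followed by exactly one dma_mapping_error() check, which is what flips the entry from MAP_ERR_NOT_CHECKED to MAP_ERR_CHECKED (sketch, hypothetical driver code):

	static int example_map_and_check(struct device *dev, void *buf, size_t len)
	{
		dma_addr_t addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

		if (dma_mapping_error(dev, addr))	/* -> debug_dma_mapping_error() */
			return -ENOMEM;			/* nothing to unmap */
		/* ... issue I/O ... */
		dma_unmap_single(dev, addr, len, DMA_TO_DEVICE);
		return 0;
	}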
static struct class *bdi_class;
/*
- * bdi_lock protects updates to bdi_list and bdi_pending_list, as well as
- * reader side protection for bdi_pending_list. bdi_list has RCU reader side
+ * bdi_lock protects updates to bdi_list. bdi_list has RCU reader side
* locking.
*/
DEFINE_SPINLOCK(bdi_lock);
LIST_HEAD(bdi_list);
-LIST_HEAD(bdi_pending_list);
+
+/* bdi_wq serves all asynchronous writeback tasks */
+struct workqueue_struct *bdi_wq;
void bdi_lock_two(struct bdi_writeback *wb1, struct bdi_writeback *wb2)
{
{
int err;
+ bdi_wq = alloc_workqueue("writeback", WQ_MEM_RECLAIM | WQ_FREEZABLE |
+ WQ_UNBOUND | WQ_SYSFS, 0);
+ if (!bdi_wq)
+ return -ENOMEM;
+
err = bdi_init(&default_backing_dev_info);
if (!err)
bdi_register(&default_backing_dev_info, NULL, "default");
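
The flag combination for bdi_wq is deliberate; summarizing the workqueue API semantics each flag buys writeback:

	/*
	 * WQ_MEM_RECLAIM - a rescuer is guaranteed, so writeback can make
	 *                  forward progress even under memory pressure
	 * WQ_FREEZABLE   - work items are held across suspend/resume
	 * WQ_UNBOUND     - flusher work isn't pinned to the submitting CPU
	 * WQ_SYSFS       - tunables are exposed under /sys/bus/workqueue/
	 */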
return wb_has_dirty_io(&bdi->wb);
}
-static void wakeup_timer_fn(unsigned long data)
-{
- struct backing_dev_info *bdi = (struct backing_dev_info *)data;
-
- spin_lock_bh(&bdi->wb_lock);
- if (bdi->wb.task) {
- trace_writeback_wake_thread(bdi);
- wake_up_process(bdi->wb.task);
- } else if (bdi->dev) {
- /*
- * When bdi tasks are inactive for long time, they are killed.
- * In this case we have to wake-up the forker thread which
- * should create and run the bdi thread.
- */
- trace_writeback_wake_forker_thread(bdi);
- wake_up_process(default_backing_dev_info.wb.task);
- }
- spin_unlock_bh(&bdi->wb_lock);
-}
-
/*
* This function is used when the first inode for this bdi is marked dirty. It
* wakes-up the corresponding bdi thread which should then take care of the
unsigned long timeout;
timeout = msecs_to_jiffies(dirty_writeback_interval * 10);
- mod_timer(&bdi->wb.wakeup_timer, jiffies + timeout);
-}
-
-/*
- * Calculate the longest interval (jiffies) bdi threads are allowed to be
- * inactive.
- */
-static unsigned long bdi_longest_inactive(void)
-{
- unsigned long interval;
-
- interval = msecs_to_jiffies(dirty_writeback_interval * 10);
- return max(5UL * 60 * HZ, interval);
-}
-
-/*
- * Clear pending bit and wakeup anybody waiting for flusher thread creation or
- * shutdown
- */
-static void bdi_clear_pending(struct backing_dev_info *bdi)
-{
- clear_bit(BDI_pending, &bdi->state);
- smp_mb__after_clear_bit();
- wake_up_bit(&bdi->state, BDI_pending);
-}
-
-static int bdi_forker_thread(void *ptr)
-{
- struct bdi_writeback *me = ptr;
-
- current->flags |= PF_SWAPWRITE;
- set_freezable();
-
- /*
- * Our parent may run at a different priority, just set us to normal
- */
- set_user_nice(current, 0);
-
- for (;;) {
- struct task_struct *task = NULL;
- struct backing_dev_info *bdi;
- enum {
- NO_ACTION, /* Nothing to do */
- FORK_THREAD, /* Fork bdi thread */
- KILL_THREAD, /* Kill inactive bdi thread */
- } action = NO_ACTION;
-
- /*
- * Temporary measure, we want to make sure we don't see
- * dirty data on the default backing_dev_info
- */
- if (wb_has_dirty_io(me) || !list_empty(&me->bdi->work_list)) {
- del_timer(&me->wakeup_timer);
- wb_do_writeback(me, 0);
- }
-
- spin_lock_bh(&bdi_lock);
- /*
- * In the following loop we are going to check whether we have
- * some work to do without any synchronization with tasks
- * waking us up to do work for them. Set the task state here
- * so that we don't miss wakeups after verifying conditions.
- */
- set_current_state(TASK_INTERRUPTIBLE);
-
- list_for_each_entry(bdi, &bdi_list, bdi_list) {
- bool have_dirty_io;
-
- if (!bdi_cap_writeback_dirty(bdi) ||
- bdi_cap_flush_forker(bdi))
- continue;
-
- WARN(!test_bit(BDI_registered, &bdi->state),
- "bdi %p/%s is not registered!\n", bdi, bdi->name);
-
- have_dirty_io = !list_empty(&bdi->work_list) ||
- wb_has_dirty_io(&bdi->wb);
-
- /*
- * If the bdi has work to do, but the thread does not
- * exist - create it.
- */
- if (!bdi->wb.task && have_dirty_io) {
- /*
- * Set the pending bit - if someone will try to
- * unregister this bdi - it'll wait on this bit.
- */
- set_bit(BDI_pending, &bdi->state);
- action = FORK_THREAD;
- break;
- }
-
- spin_lock(&bdi->wb_lock);
-
- /*
- * If there is no work to do and the bdi thread was
- * inactive long enough - kill it. The wb_lock is taken
- * to make sure no-one adds more work to this bdi and
- * wakes the bdi thread up.
- */
- if (bdi->wb.task && !have_dirty_io &&
- time_after(jiffies, bdi->wb.last_active +
- bdi_longest_inactive())) {
- task = bdi->wb.task;
- bdi->wb.task = NULL;
- spin_unlock(&bdi->wb_lock);
- set_bit(BDI_pending, &bdi->state);
- action = KILL_THREAD;
- break;
- }
- spin_unlock(&bdi->wb_lock);
- }
- spin_unlock_bh(&bdi_lock);
-
- /* Keep working if default bdi still has things to do */
- if (!list_empty(&me->bdi->work_list))
- __set_current_state(TASK_RUNNING);
-
- switch (action) {
- case FORK_THREAD:
- __set_current_state(TASK_RUNNING);
- task = kthread_create(bdi_writeback_thread, &bdi->wb,
- "flush-%s", dev_name(bdi->dev));
- if (IS_ERR(task)) {
- /*
- * If thread creation fails, force writeout of
- * the bdi from the thread. Hopefully 1024 is
- * large enough for efficient IO.
- */
- writeback_inodes_wb(&bdi->wb, 1024,
- WB_REASON_FORKER_THREAD);
- } else {
- /*
- * The spinlock makes sure we do not lose
- * wake-ups when racing with 'bdi_queue_work()'.
- * And as soon as the bdi thread is visible, we
- * can start it.
- */
- spin_lock_bh(&bdi->wb_lock);
- bdi->wb.task = task;
- spin_unlock_bh(&bdi->wb_lock);
- wake_up_process(task);
- }
- bdi_clear_pending(bdi);
- break;
-
- case KILL_THREAD:
- __set_current_state(TASK_RUNNING);
- kthread_stop(task);
- bdi_clear_pending(bdi);
- break;
-
- case NO_ACTION:
- if (!wb_has_dirty_io(me) || !dirty_writeback_interval)
- /*
- * There are no dirty data. The only thing we
- * should now care about is checking for
- * inactive bdi threads and killing them. Thus,
- * let's sleep for longer time, save energy and
- * be friendly for battery-driven devices.
- */
- schedule_timeout(bdi_longest_inactive());
- else
- schedule_timeout(msecs_to_jiffies(dirty_writeback_interval * 10));
- try_to_freeze();
- break;
- }
- }
-
- return 0;
+ mod_delayed_work(bdi_wq, &bdi->wb.dwork, timeout);
}
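
mod_delayed_work() collapses the old del_timer()/mod_timer() dance into one call: if the dwork is already queued, only its timeout is replaced. A minimal usage sketch:

	/* hypothetical: (re)arm the next flush ~5s out, replacing any
	 * earlier-armed timeout rather than queueing a second work item */
	mod_delayed_work(bdi_wq, &bdi->wb.dwork, 5 * HZ);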
/*
spin_unlock_bh(&bdi_lock);
synchronize_rcu_expedited();
+
+ /* bdi_list is now unused, clear it to mark @bdi dying */
+ INIT_LIST_HEAD(&bdi->bdi_list);
}
int bdi_register(struct backing_dev_info *bdi, struct device *parent,
bdi->dev = dev;
- /*
- * Just start the forker thread for our default backing_dev_info,
- * and add other bdi's to the list. They will get a thread created
- * on-demand when they need it.
- */
- if (bdi_cap_flush_forker(bdi)) {
- struct bdi_writeback *wb = &bdi->wb;
-
- wb->task = kthread_run(bdi_forker_thread, wb, "bdi-%s",
- dev_name(dev));
- if (IS_ERR(wb->task))
- return PTR_ERR(wb->task);
- }
-
bdi_debug_register(bdi, dev_name(dev));
set_bit(BDI_registered, &bdi->state);
*/
static void bdi_wb_shutdown(struct backing_dev_info *bdi)
{
- struct task_struct *task;
-
if (!bdi_cap_writeback_dirty(bdi))
return;
bdi_remove_from_list(bdi);
/*
- * If setup is pending, wait for that to complete first
+ * Drain the work list and shut down the delayed_work. At this point,
+ * @bdi->bdi_list is empty, telling bdi_writeback_workfn() that @bdi
+ * is dying and its work_list needs to be drained no matter what.
*/
- wait_on_bit(&bdi->state, BDI_pending, bdi_sched_wait,
- TASK_UNINTERRUPTIBLE);
+ mod_delayed_work(bdi_wq, &bdi->wb.dwork, 0);
+ flush_delayed_work(&bdi->wb.dwork);
+ WARN_ON(!list_empty(&bdi->work_list));
/*
- * Finally, kill the kernel thread. We don't need to be RCU
- * safe anymore, since the bdi is gone from visibility.
+ * This shouldn't be necessary unless @bdi for some reason has
+ * unflushed dirty IO after work_list is drained. Do it anyway
+ * just in case.
*/
- spin_lock_bh(&bdi->wb_lock);
- task = bdi->wb.task;
- bdi->wb.task = NULL;
- spin_unlock_bh(&bdi->wb_lock);
-
- if (task)
- kthread_stop(task);
+ cancel_delayed_work_sync(&bdi->wb.dwork);
}
/*
bdi_set_min_ratio(bdi, 0);
trace_writeback_bdi_unregister(bdi);
bdi_prune_sb(bdi);
- del_timer_sync(&bdi->wb.wakeup_timer);
- if (!bdi_cap_flush_forker(bdi))
- bdi_wb_shutdown(bdi);
+ bdi_wb_shutdown(bdi);
bdi_debug_unregister(bdi);
spin_lock_bh(&bdi->wb_lock);
INIT_LIST_HEAD(&wb->b_io);
INIT_LIST_HEAD(&wb->b_more_io);
spin_lock_init(&wb->list_lock);
- setup_timer(&wb->wakeup_timer, wakeup_timer_fn, (unsigned long)bdi);
+ INIT_DELAYED_WORK(&wb->dwork, bdi_writeback_workfn);
}
/*
bdi_unregister(bdi);
/*
- * If bdi_unregister() had already been called earlier, the
- * wakeup_timer could still be armed because bdi_prune_sb()
- * can race with the bdi_wakeup_thread_delayed() calls from
- * __mark_inode_dirty().
+ * If bdi_unregister() had already been called earlier, the dwork
+ * could still be pending because bdi_prune_sb() can race with the
+ * bdi_wakeup_thread_delayed() calls from __mark_inode_dirty().
*/
- del_timer_sync(&bdi->wb.wakeup_timer);
+ cancel_delayed_work_sync(&bdi->wb.dwork);
for (i = 0; i < NR_BDI_STAT_ITEMS; i++)
percpu_counter_destroy(&bdi->bdi_stat[i]);
unsigned long addr;
struct file *file = get_file(vma->vm_file);
- vm_flags = vma->vm_flags;
- if (!(flags & MAP_NONBLOCK))
- vm_flags |= VM_POPULATE;
- addr = mmap_region(file, start, size, vm_flags, pgoff);
+ addr = mmap_region(file, start, size,
+ vma->vm_flags, pgoff);
fput(file);
if (IS_ERR_VALUE(addr)) {
err = addr;
mutex_unlock(&mapping->i_mmap_mutex);
}
- if (!(flags & MAP_NONBLOCK) && !(vma->vm_flags & VM_POPULATE)) {
- if (!has_write_lock)
- goto get_write_lock;
- vma->vm_flags |= VM_POPULATE;
- }
-
if (vma->vm_flags & VM_LOCKED) {
/*
* drop PG_Mlocked flag for over-mapped range
/* Return the number pages of memory we physically have, in PAGE_SIZE units. */
unsigned long hugetlb_total_pages(void)
{
- struct hstate *h = &default_hstate;
- return h->nr_huge_pages * pages_per_huge_page(h);
+ struct hstate *h;
+ unsigned long nr_total_pages = 0;
+
+ for_each_hstate(h)
+ nr_total_pages += h->nr_huge_pages * pages_per_huge_page(h);
+ return nr_total_pages;
}
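
A worked example of the fix, assuming 4 KiB base pages:

	/*
	 *   100 x 2 MiB huge pages -> 100 * 512    =  51200 base pages
	 *     2 x 1 GiB huge pages ->   2 * 262144 = 524288 base pages
	 * hugetlb_total_pages() now returns 575488; the old code counted
	 * only the default hstate and would have returned 51200.
	 */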
static int hugetlb_acct_memory(struct hstate *h, long delta)
for (i = 0; i < MAX_NR_ZONES; i++) {
struct zone *zone = pgdat->node_zones + i;
- if (zone->wait_table)
+ /*
+ * wait_table may be allocated from boot memory,
+ * here only free if it's allocated by vmalloc.
+ */
+ if (is_vmalloc_addr(zone->wait_table))
vfree(zone->wait_table);
}
newflags = vma->vm_flags & ~VM_LOCKED;
if (on)
- newflags |= VM_LOCKED | VM_POPULATE;
+ newflags |= VM_LOCKED;
tmp = vma->vm_end;
if (tmp > end)
* range with the first VMA. Also, skip undesirable VMA types.
*/
nend = min(end, vma->vm_end);
- if ((vma->vm_flags & (VM_IO | VM_PFNMAP | VM_POPULATE)) !=
- VM_POPULATE)
+ if (vma->vm_flags & (VM_IO | VM_PFNMAP))
continue;
if (nstart < vma->vm_start)
nstart = vma->vm_start;
struct vm_area_struct * vma, * prev = NULL;
if (flags & MCL_FUTURE)
- current->mm->def_flags |= VM_LOCKED | VM_POPULATE;
+ current->mm->def_flags |= VM_LOCKED;
else
- current->mm->def_flags &= ~(VM_LOCKED | VM_POPULATE);
+ current->mm->def_flags &= ~VM_LOCKED;
if (flags == MCL_FUTURE)
goto out;
newflags = vma->vm_flags & ~VM_LOCKED;
if (flags & MCL_CURRENT)
- newflags |= VM_LOCKED | VM_POPULATE;
+ newflags |= VM_LOCKED;
/* Ignore errors */
mlock_fixup(vma, &prev, vma->vm_start, vma->vm_end, newflags);
}
addr = mmap_region(file, addr, len, vm_flags, pgoff);
- if (!IS_ERR_VALUE(addr) && (vm_flags & VM_POPULATE))
+ if (!IS_ERR_VALUE(addr) &&
+ ((vm_flags & VM_LOCKED) ||
+ (flags & (MAP_POPULATE | MAP_NONBLOCK)) == MAP_POPULATE))
*populate = len;
return addr;
}
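
From userspace, the restored condition behaves as follows (sketch):

	/*
	 * MAP_POPULATE                -> prefault the whole mapping now
	 * MAP_POPULATE | MAP_NONBLOCK -> MAP_NONBLOCK cancels the prefault
	 * VM_LOCKED (mlock semantics) -> always populated, regardless of flags
	 */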
grp = &vlan_info->grp;
- /* Take it out of our own structures, but be sure to interlock with
- * HW accelerating devices or SW vlan input packet processing if
- * VLAN is not 0 (leave it there for 802.1p).
- */
- if (vlan_id)
- vlan_vid_del(real_dev, vlan_id);
-
grp->nr_vlan_devs--;
if (vlan->flags & VLAN_FLAG_MVRP)
vlan_gvrp_uninit_applicant(real_dev);
}
+ /* Take it out of our own structures, but be sure to interlock with
+ * HW accelerating devices or SW vlan input packet processing if
+ * VLAN is not 0 (leave it there for 802.1p).
+ */
+ if (vlan_id)
+ vlan_vid_del(real_dev, vlan_id);
+
/* Get rid of the vlan's reference to real_dev */
dev_put(real_dev);
}
batadv_ogm_packet = (struct batadv_ogm_packet *)packet_buff;
/* unpack the aggregated packets and process them one by one */
- do {
+ while (batadv_iv_ogm_aggr_packet(buff_pos, packet_len,
+ batadv_ogm_packet->tt_num_changes)) {
tt_buff = packet_buff + buff_pos + BATADV_OGM_HLEN;
batadv_iv_ogm_process(ethhdr, batadv_ogm_packet, tt_buff,
packet_pos = packet_buff + buff_pos;
batadv_ogm_packet = (struct batadv_ogm_packet *)packet_pos;
- } while (batadv_iv_ogm_aggr_packet(buff_pos, packet_len,
- batadv_ogm_packet->tt_num_changes));
+ }
kfree_skb(skb);
return NET_RX_SUCCESS;
sco_chan_del(sk, ECONNRESET);
break;
+ case BT_CONNECT2:
case BT_CONNECT:
case BT_DISCONN:
sco_chan_del(sk, ECONNRESET);
return 0;
br_warn(br, "adding interface %s with same address "
"as a received packet\n",
- source->dev->name);
+ source ? source->dev->name : br->dev->name);
fdb_delete(br, fdb);
}
+ nla_total_size(1) /* IFLA_BRPORT_MODE */
+ nla_total_size(1) /* IFLA_BRPORT_GUARD */
+ nla_total_size(1) /* IFLA_BRPORT_PROTECT */
+ + nla_total_size(1) /* IFLA_BRPORT_FAST_LEAVE */
+ 0;
}
br_set_port_flag(p, tb, IFLA_BRPORT_MODE, BR_HAIRPIN_MODE);
br_set_port_flag(p, tb, IFLA_BRPORT_GUARD, BR_BPDU_GUARD);
br_set_port_flag(p, tb, IFLA_BRPORT_FAST_LEAVE, BR_MULTICAST_FAST_LEAVE);
+ br_set_port_flag(p, tb, IFLA_BRPORT_PROTECT, BR_ROOT_BLOCK);
if (tb[IFLA_BRPORT_COST]) {
err = br_stp_set_path_cost(p, nla_get_u32(tb[IFLA_BRPORT_COST]));
return;
}
#endif
- WARN_ON(in_interrupt());
static_key_slow_inc(&netstamp_needed);
}
EXPORT_SYMBOL(net_enable_timestamp);
struct sk_buff *segs = ERR_PTR(-EPROTONOSUPPORT);
struct packet_offload *ptype;
__be16 type = skb->protocol;
+ int vlan_depth = ETH_HLEN;
while (type == htons(ETH_P_8021Q)) {
- int vlan_depth = ETH_HLEN;
struct vlan_hdr *vh;
if (unlikely(!pskb_may_pull(skb, vlan_depth + VLAN_HLEN)))
flow->ports = *ports;
}
+ flow->thoff = (u16) nhoff;
+
return true;
}
EXPORT_SYMBOL(skb_flow_dissect);
struct rtattr *attr = (void *)nlh + NLMSG_ALIGN(min_len);
while (RTA_OK(attr, attrlen)) {
- unsigned int flavor = attr->rta_type;
+ unsigned int flavor = attr->rta_type & NLA_TYPE_MASK;
if (flavor) {
if (flavor > rta_max[sz_idx])
return -EINVAL;
#include <linux/interrupt.h>
#include <linux/netdevice.h>
#include <linux/security.h>
+#include <linux/pid_namespace.h>
#include <linux/pid.h>
#include <linux/nsproxy.h>
#include <linux/slab.h>
if (!uid_valid(uid) || !gid_valid(gid))
return -EINVAL;
- if ((creds->pid == task_tgid_vnr(current) || nsown_capable(CAP_SYS_ADMIN)) &&
+ if ((creds->pid == task_tgid_vnr(current) ||
+ ns_capable(current->nsproxy->pid_ns->user_ns, CAP_SYS_ADMIN)) &&
((uid_eq(uid, cred->uid) || uid_eq(uid, cred->euid) ||
uid_eq(uid, cred->suid)) || nsown_capable(CAP_SETUID)) &&
((gid_eq(gid, cred->gid) || gid_eq(gid, cred->egid) ||
iph->frag_off |= htons(IP_MF);
offset += (skb->len - skb->mac_len - iph->ihl * 4);
} else {
- if (!(iph->frag_off & htons(IP_DF)))
- iph->id = htons(id++);
+ iph->id = htons(id++);
}
iph->tot_len = htons(skb->len - skb->mac_len);
iph->check = 0;
#include <linux/rtnetlink.h>
#include <linux/slab.h>
+#include <net/sock.h>
#include <net/inet_frag.h>
static void inet_frag_secret_rebuild(unsigned long dummy)
__releases(&f->lock)
{
struct inet_frag_queue *q;
+ int depth = 0;
hlist_for_each_entry(q, &f->hash[hash], list) {
if (q->net == nf && f->match(q, key)) {
read_unlock(&f->lock);
return q;
}
+ depth++;
}
read_unlock(&f->lock);
- return inet_frag_create(nf, f, key);
+ if (depth <= INETFRAGS_MAXDEPTH)
+ return inet_frag_create(nf, f, key);
+ else
+ return ERR_PTR(-ENOBUFS);
}
EXPORT_SYMBOL(inet_frag_find);
+
+void inet_frag_maybe_warn_overflow(struct inet_frag_queue *q,
+ const char *prefix)
+{
+ static const char msg[] = "inet_frag_find: Fragment hash bucket"
+ " list length grew over limit " __stringify(INETFRAGS_MAXDEPTH)
+ ". Dropping fragment.\n";
+
+ if (PTR_ERR(q) == -ENOBUFS)
+ LIMIT_NETDEBUG(KERN_WARNING "%s%s", prefix, msg);
+}
+EXPORT_SYMBOL(inet_frag_maybe_warn_overflow);
hash = ipqhashfn(iph->id, iph->saddr, iph->daddr, iph->protocol);
q = inet_frag_find(&net->ipv4.frags, &ip4_frags, &arg, hash);
- if (q == NULL)
- goto out_nomem;
-
+ if (IS_ERR_OR_NULL(q)) {
+ inet_frag_maybe_warn_overflow(q, pr_fmt());
+ return NULL;
+ }
return container_of(q, struct ipq, q);
-
-out_nomem:
- LIMIT_NETDEBUG(KERN_ERR pr_fmt("ip_frag_create: no memory left !\n"));
- return NULL;
}
/* Is the fragment too far ahead to be part of ipq? */
if (dev->header_ops && dev->type == ARPHRD_IPGRE) {
gre_hlen = 0;
- if (skb->protocol == htons(ETH_P_IP))
- tiph = (const struct iphdr *)skb->data;
- else
- tiph = &tunnel->parms.iph;
+ tiph = (const struct iphdr *)skb->data;
} else {
gre_hlen = tunnel->hlen;
tiph = &tunnel->parms.iph;
}
switch (optptr[3]&0xF) {
case IPOPT_TS_TSONLY:
- opt->ts = optptr - iph;
if (skb)
timeptr = &optptr[optptr[2]-1];
opt->ts_needtime = 1;
pp_ptr = optptr + 2;
goto error;
}
- opt->ts = optptr - iph;
if (rt) {
spec_dst_fill(&spec_dst, skb);
memcpy(&optptr[optptr[2]-1], &spec_dst, 4);
pp_ptr = optptr + 2;
goto error;
}
- opt->ts = optptr - iph;
{
__be32 addr;
memcpy(&addr, &optptr[optptr[2]-1], 4);
pp_ptr = optptr + 3;
goto error;
}
- opt->ts = optptr - iph;
if (skb) {
optptr[3] = (optptr[3]&0xF)|((overflow+1)<<4);
opt->is_changed = 1;
}
}
+ opt->ts = optptr - iph;
break;
case IPOPT_RA:
if (optlen < 4) {
}
for (i++; i < CONF_NAMESERVERS_MAX; i++)
if (ic_nameservers[i] != NONE)
- pr_cont(", nameserver%u=%pI4\n", i, &ic_nameservers[i]);
+ pr_cont(", nameserver%u=%pI4", i, &ic_nameservers[i]);
+ pr_cont("\n");
#endif /* !SILENT */
return 0;
If unsure, say Y.
-config IP_NF_QUEUE
- tristate "IP Userspace queueing via NETLINK (OBSOLETE)"
- depends on NETFILTER_ADVANCED
- help
- Netfilter has the ability to queue packets to user space: the
- netlink device can be used to access them using this driver.
-
- This option enables the old IPv4-only "ip_queue" implementation
- which has been obsoleted by the new "nfnetlink_queue" code (see
- CONFIG_NETFILTER_NETLINK_QUEUE).
-
- To compile it as a module, choose M here. If unsure, say N.
-
config IP_NF_IPTABLES
tristate "IP tables support (required for filtering/masq/NAT)"
default m if NETFILTER_ADVANCED=n
* Make sure that we have exactly size bytes
* available to the caller, no more, no less.
*/
- skb->avail_size = size;
+ skb->reserved_tailroom = skb->end - skb->tail - size;
return skb;
}
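
Working through the invariant the new expression establishes (skb_availroom() is tailroom minus the reservation):

	/*
	 * skb_availroom(skb) = skb_tailroom(skb) - skb->reserved_tailroom
	 *                    = (skb->end - skb->tail)
	 *                      - (skb->end - skb->tail - size)
	 *                    = size
	 * so the caller sees exactly @size usable bytes, no more, no less.
	 */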
__kfree_skb(skb);
if (tcp_is_reno(tp))
tcp_reset_reno_sack(tp);
- if (!how) {
- /* Push undo marker, if it was plain RTO and nothing
- * was retransmitted. */
- tp->undo_marker = tp->snd_una;
- } else {
+ tp->undo_marker = tp->snd_una;
+ if (how) {
tp->sacked_out = 0;
tp->fackets_out = 0;
}
struct inet_sock *inet = inet_sk(sk);
u32 mtu = tcp_sk(sk)->mtu_info;
- /* We are not interested in TCP_LISTEN and open_requests (SYN-ACKs
- * send out by Linux are always <576bytes so they should go through
- * unfragmented).
- */
- if (sk->sk_state == TCP_LISTEN)
- return;
-
dst = inet_csk_update_pmtu(sk, mtu);
if (!dst)
return;
goto out;
if (code == ICMP_FRAG_NEEDED) { /* PMTU discovery (RFC1191) */
+ /* We are not interested in TCP_LISTEN and open_requests
+ * (SYN-ACKs sent out by Linux are always <576 bytes so
+ * they should go through unfragmented).
+ */
+ if (sk->sk_state == TCP_LISTEN)
+ goto out;
+
tp->mtu_info = info;
if (!sock_owned_by_user(sk)) {
tcp_v4_mtu_reduced(sk);
eat = min_t(int, len, skb_headlen(skb));
if (eat) {
__skb_pull(skb, eat);
- skb->avail_size -= eat;
len -= eat;
if (!len)
return;
goto send_now;
}
- /* Ok, it looks like it is advisable to defer. */
- tp->tso_deferred = 1 | (jiffies << 1);
+ /* Ok, it looks like it is advisable to defer.
+ * Do not rearm the timer if it is already set, to avoid breaking TCP ACK clocking.
+ */
+ if (!tp->tso_deferred)
+ tp->tso_deferred = 1 | (jiffies << 1);
return true;
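
Why the stamp is encoded as 1 | (jiffies << 1):

	/*
	 * The low bit keeps the stamp nonzero even when (jiffies << 1) == 0,
	 * so !tp->tso_deferred unambiguously means "no deferral armed", and
	 * the original timestamp is recoverable as (tp->tso_deferred >> 1).
	 */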
void udp_destroy_sock(struct sock *sk)
{
+ struct udp_sock *up = udp_sk(sk);
bool slow = lock_sock_fast(sk);
udp_flush_pending_frames(sk);
unlock_sock_fast(sk, slow);
+ if (static_key_false(&udp_encap_needed) && up->encap_type) {
+ void (*encap_destroy)(struct sock *sk);
+ encap_destroy = ACCESS_ONCE(up->encap_destroy);
+ if (encap_destroy)
+ encap_destroy(sk);
+ }
}
/*
static int __net_init addrconf_init_net(struct net *net)
{
- int err;
+ int err = -ENOMEM;
struct ipv6_devconf *all, *dflt;
- err = -ENOMEM;
- all = &ipv6_devconf;
- dflt = &ipv6_devconf_dflt;
+ all = kmemdup(&ipv6_devconf, sizeof(ipv6_devconf), GFP_KERNEL);
+ if (all == NULL)
+ goto err_alloc_all;
- if (!net_eq(net, &init_net)) {
- all = kmemdup(all, sizeof(ipv6_devconf), GFP_KERNEL);
- if (all == NULL)
- goto err_alloc_all;
+ dflt = kmemdup(&ipv6_devconf_dflt, sizeof(ipv6_devconf_dflt), GFP_KERNEL);
+ if (dflt == NULL)
+ goto err_alloc_dflt;
- dflt = kmemdup(dflt, sizeof(ipv6_devconf_dflt), GFP_KERNEL);
- if (dflt == NULL)
- goto err_alloc_dflt;
- } else {
- /* these will be inherited by all namespaces */
- dflt->autoconf = ipv6_defaults.autoconf;
- dflt->disable_ipv6 = ipv6_defaults.disable_ipv6;
- }
+ /* these will be inherited by all namespaces */
+ dflt->autoconf = ipv6_defaults.autoconf;
+ dflt->disable_ipv6 = ipv6_defaults.disable_ipv6;
net->ipv6.devconf_all = all;
net->ipv6.devconf_dflt = dflt;
static struct xt_target ip6t_npt_target_reg[] __read_mostly = {
{
.name = "SNPT",
+ .table = "mangle",
.target = ip6t_snpt_tg,
.targetsize = sizeof(struct ip6t_npt_tginfo),
.checkentry = ip6t_npt_checkentry,
},
{
.name = "DNPT",
+ .table = "mangle",
.target = ip6t_dnpt_tg,
.targetsize = sizeof(struct ip6t_npt_tginfo),
.checkentry = ip6t_npt_checkentry,
* 2 of the License, or (at your option) any later version.
*/
+#define pr_fmt(fmt) "IPv6-nf: " fmt
+
#include <linux/errno.h>
#include <linux/types.h>
#include <linux/string.h>
q = inet_frag_find(&net->nf_frag.frags, &nf_frags, &arg, hash);
local_bh_enable();
- if (q == NULL)
- goto oom;
-
+ if (IS_ERR_OR_NULL(q)) {
+ inet_frag_maybe_warn_overflow(q, pr_fmt());
+ return NULL;
+ }
return container_of(q, struct frag_queue, q);
-
-oom:
- return NULL;
}
* YOSHIFUJI,H. @USAGI Always remove fragment header to
* calculate ICV correctly.
*/
+
+#define pr_fmt(fmt) "IPv6: " fmt
+
#include <linux/errno.h>
#include <linux/types.h>
#include <linux/string.h>
hash = inet6_hash_frag(id, src, dst, ip6_frags.rnd);
q = inet_frag_find(&net->ipv6.frags, &ip6_frags, &arg, hash);
- if (q == NULL)
+ if (IS_ERR_OR_NULL(q)) {
+ inet_frag_maybe_warn_overflow(q, pr_fmt());
return NULL;
-
+ }
return container_of(q, struct frag_queue, q);
}
}
if (type == ICMPV6_PKT_TOOBIG) {
+ /* We are not interested in TCP_LISTEN and open_requests
+ * (SYN-ACKs sent out by Linux are always <576 bytes so
+ * they should go through unfragmented).
+ */
+ if (sk->sk_state == TCP_LISTEN)
+ goto out;
+
tp->mtu_info = ntohl(info);
if (!sock_owned_by_user(sk))
tcp_v6_mtu_reduced(sk);
void udpv6_destroy_sock(struct sock *sk)
{
+ struct udp_sock *up = udp_sk(sk);
lock_sock(sk);
udp_v6_flush_pending_frames(sk);
release_sock(sk);
+ if (static_key_false(&udpv6_encap_needed) && up->encap_type) {
+ void (*encap_destroy)(struct sock *sk);
+ encap_destroy = ACCESS_ONCE(up->encap_destroy);
+ if (encap_destroy)
+ encap_destroy(sk);
+ }
+
inet6_destroy_sock(sk);
}
NULL, NULL, NULL);
/* Check if the we got some results */
- if (!self->cachedaddr)
- return -EAGAIN; /* Didn't find any devices */
+ if (!self->cachedaddr) {
+ err = -EAGAIN; /* Didn't find any devices */
+ goto out;
+ }
daddr = self->cachedaddr;
/* Cleanup */
self->cachedaddr = 0;
static void l2tp_session_set_header_len(struct l2tp_session *session, int version);
static void l2tp_tunnel_free(struct l2tp_tunnel *tunnel);
-static void l2tp_tunnel_closeall(struct l2tp_tunnel *tunnel);
static inline struct l2tp_net *l2tp_pernet(struct net *net)
{
} else {
/* Socket is owned by kernelspace */
sk = tunnel->sock;
+ sock_hold(sk);
}
out:
}
sock_put(sk);
}
+ sock_put(sk);
}
EXPORT_SYMBOL_GPL(l2tp_tunnel_sock_put);
struct sk_buff *skbp;
struct sk_buff *tmp;
u32 ns = L2TP_SKB_CB(skb)->ns;
- struct l2tp_stats *sstats;
spin_lock_bh(&session->reorder_q.lock);
- sstats = &session->stats;
skb_queue_walk_safe(&session->reorder_q, skbp, tmp) {
if (L2TP_SKB_CB(skbp)->ns > ns) {
__skb_queue_before(&session->reorder_q, skbp, skb);
"%s: pkt %hu, inserted before %hu, reorder_q len=%d\n",
session->name, ns, L2TP_SKB_CB(skbp)->ns,
skb_queue_len(&session->reorder_q));
- u64_stats_update_begin(&sstats->syncp);
- sstats->rx_oos_packets++;
- u64_stats_update_end(&sstats->syncp);
+ atomic_long_inc(&session->stats.rx_oos_packets);
goto out;
}
}
{
struct l2tp_tunnel *tunnel = session->tunnel;
int length = L2TP_SKB_CB(skb)->length;
- struct l2tp_stats *tstats, *sstats;
/* We're about to requeue the skb, so return resources
* to its current owner (a socket receive buffer).
*/
skb_orphan(skb);
- tstats = &tunnel->stats;
- u64_stats_update_begin(&tstats->syncp);
- sstats = &session->stats;
- u64_stats_update_begin(&sstats->syncp);
- tstats->rx_packets++;
- tstats->rx_bytes += length;
- sstats->rx_packets++;
- sstats->rx_bytes += length;
- u64_stats_update_end(&tstats->syncp);
- u64_stats_update_end(&sstats->syncp);
+ atomic_long_inc(&tunnel->stats.rx_packets);
+ atomic_long_add(length, &tunnel->stats.rx_bytes);
+ atomic_long_inc(&session->stats.rx_packets);
+ atomic_long_add(length, &session->stats.rx_bytes);
if (L2TP_SKB_CB(skb)->has_seq) {
/* Bump our Nr */
{
struct sk_buff *skb;
struct sk_buff *tmp;
- struct l2tp_stats *sstats;
/* If the pkt at the head of the queue has the nr that we
* expect to send up next, dequeue it and any other
*/
start:
spin_lock_bh(&session->reorder_q.lock);
- sstats = &session->stats;
skb_queue_walk_safe(&session->reorder_q, skb, tmp) {
if (time_after(jiffies, L2TP_SKB_CB(skb)->expires)) {
- u64_stats_update_begin(&sstats->syncp);
- sstats->rx_seq_discards++;
- sstats->rx_errors++;
- u64_stats_update_end(&sstats->syncp);
+ atomic_long_inc(&session->stats.rx_seq_discards);
+ atomic_long_inc(&session->stats.rx_errors);
l2tp_dbg(session, L2TP_MSG_SEQ,
"%s: oos pkt %u len %d discarded (too old), waiting for %u, reorder_q_len=%d\n",
session->name, L2TP_SKB_CB(skb)->ns,
struct l2tp_tunnel *tunnel = session->tunnel;
int offset;
u32 ns, nr;
- struct l2tp_stats *sstats = &session->stats;
/* The ref count is increased since we now hold a pointer to
* the session. Take care to decrement the refcnt when exiting
"%s: cookie mismatch (%u/%u). Discarding.\n",
tunnel->name, tunnel->tunnel_id,
session->session_id);
- u64_stats_update_begin(&sstats->syncp);
- sstats->rx_cookie_discards++;
- u64_stats_update_end(&sstats->syncp);
+ atomic_long_inc(&session->stats.rx_cookie_discards);
goto discard;
}
ptr += session->peer_cookie_len;
l2tp_warn(session, L2TP_MSG_SEQ,
"%s: recv data has no seq numbers when required. Discarding.\n",
session->name);
- u64_stats_update_begin(&sstats->syncp);
- sstats->rx_seq_discards++;
- u64_stats_update_end(&sstats->syncp);
+ atomic_long_inc(&session->stats.rx_seq_discards);
goto discard;
}
l2tp_warn(session, L2TP_MSG_SEQ,
"%s: recv data has no seq numbers when required. Discarding.\n",
session->name);
- u64_stats_update_begin(&sstats->syncp);
- sstats->rx_seq_discards++;
- u64_stats_update_end(&sstats->syncp);
+ atomic_long_inc(&session->stats.rx_seq_discards);
goto discard;
}
}
* packets
*/
if (L2TP_SKB_CB(skb)->ns != session->nr) {
- u64_stats_update_begin(&sstats->syncp);
- sstats->rx_seq_discards++;
- u64_stats_update_end(&sstats->syncp);
+ atomic_long_inc(&session->stats.rx_seq_discards);
l2tp_dbg(session, L2TP_MSG_SEQ,
"%s: oos pkt %u len %d discarded, waiting for %u, reorder_q_len=%d\n",
session->name, L2TP_SKB_CB(skb)->ns,
return;
discard:
- u64_stats_update_begin(&sstats->syncp);
- sstats->rx_errors++;
- u64_stats_update_end(&sstats->syncp);
+ atomic_long_inc(&session->stats.rx_errors);
kfree_skb(skb);
if (session->deref)
}
EXPORT_SYMBOL(l2tp_recv_common);
+/* Drop skbs from the session's reorder_q
+ */
+int l2tp_session_queue_purge(struct l2tp_session *session)
+{
+ struct sk_buff *skb = NULL;
+ BUG_ON(!session);
+ BUG_ON(session->magic != L2TP_SESSION_MAGIC);
+ while ((skb = skb_dequeue(&session->reorder_q))) {
+ atomic_long_inc(&session->stats.rx_errors);
+ kfree_skb(skb);
+ if (session->deref)
+ (*session->deref)(session);
+ }
+ return 0;
+}
+EXPORT_SYMBOL_GPL(l2tp_session_queue_purge);
+
/* Internal UDP receive frame. Do the real work of receiving an L2TP data frame
* here. The skb is not on a list when we get here.
* Returns 0 if the packet was a data packet and was successfully passed on.
u32 tunnel_id, session_id;
u16 version;
int length;
- struct l2tp_stats *tstats;
if (tunnel->sock && l2tp_verify_udp_checksum(tunnel->sock, skb))
goto discard_bad_csum;
discard_bad_csum:
LIMIT_NETDEBUG("%s: UDP: bad checksum\n", tunnel->name);
UDP_INC_STATS_USER(tunnel->l2tp_net, UDP_MIB_INERRORS, 0);
- tstats = &tunnel->stats;
- u64_stats_update_begin(&tstats->syncp);
- tstats->rx_errors++;
- u64_stats_update_end(&tstats->syncp);
+ atomic_long_inc(&tunnel->stats.rx_errors);
kfree_skb(skb);
return 0;
struct l2tp_tunnel *tunnel = session->tunnel;
unsigned int len = skb->len;
int error;
- struct l2tp_stats *tstats, *sstats;
/* Debug */
if (session->send_seq)
error = ip_queue_xmit(skb, fl);
/* Update stats */
- tstats = &tunnel->stats;
- u64_stats_update_begin(&tstats->syncp);
- sstats = &session->stats;
- u64_stats_update_begin(&sstats->syncp);
if (error >= 0) {
- tstats->tx_packets++;
- tstats->tx_bytes += len;
- sstats->tx_packets++;
- sstats->tx_bytes += len;
+ atomic_long_inc(&tunnel->stats.tx_packets);
+ atomic_long_add(len, &tunnel->stats.tx_bytes);
+ atomic_long_inc(&session->stats.tx_packets);
+ atomic_long_add(len, &session->stats.tx_bytes);
} else {
- tstats->tx_errors++;
- sstats->tx_errors++;
+ atomic_long_inc(&tunnel->stats.tx_errors);
+ atomic_long_inc(&session->stats.tx_errors);
}
- u64_stats_update_end(&tstats->syncp);
- u64_stats_update_end(&sstats->syncp);
return 0;
}
/* No longer an encapsulation socket. See net/ipv4/udp.c */
(udp_sk(sk))->encap_type = 0;
(udp_sk(sk))->encap_rcv = NULL;
+ (udp_sk(sk))->encap_destroy = NULL;
break;
case L2TP_ENCAPTYPE_IP:
break;
/* When the tunnel is closed, all the attached sessions need to go too.
*/
-static void l2tp_tunnel_closeall(struct l2tp_tunnel *tunnel)
+void l2tp_tunnel_closeall(struct l2tp_tunnel *tunnel)
{
int hash;
struct hlist_node *walk;
hlist_del_init(&session->hlist);
- /* Since we should hold the sock lock while
- * doing any unbinding, we need to release the
- * lock we're holding before taking that lock.
- * Hold a reference to the sock so it doesn't
- * disappear as we're jumping between locks.
- */
if (session->ref != NULL)
(*session->ref)(session);
write_unlock_bh(&tunnel->hlist_lock);
- if (tunnel->version != L2TP_HDR_VER_2) {
- struct l2tp_net *pn = l2tp_pernet(tunnel->l2tp_net);
-
- spin_lock_bh(&pn->l2tp_session_hlist_lock);
- hlist_del_init_rcu(&session->global_hlist);
- spin_unlock_bh(&pn->l2tp_session_hlist_lock);
- synchronize_rcu();
- }
+ __l2tp_session_unhash(session);
+ l2tp_session_queue_purge(session);
if (session->session_close != NULL)
(*session->session_close)(session);
if (session->deref != NULL)
(*session->deref)(session);
+ l2tp_session_dec_refcount(session);
+
write_lock_bh(&tunnel->hlist_lock);
/* Now restart from the beginning of this hash
}
write_unlock_bh(&tunnel->hlist_lock);
}
+EXPORT_SYMBOL_GPL(l2tp_tunnel_closeall);
+
+/* Tunnel socket destroy hook for UDP encapsulation */
+static void l2tp_udp_encap_destroy(struct sock *sk)
+{
+ struct l2tp_tunnel *tunnel = l2tp_sock_to_tunnel(sk);
+ if (tunnel) {
+ l2tp_tunnel_closeall(tunnel);
+ sock_put(sk);
+ }
+}
/* Really kill the tunnel.
* Come here only when all sessions have been cleared from the tunnel.
return;
sock = sk->sk_socket;
- BUG_ON(!sock);
- /* If the tunnel socket was created directly by the kernel, use the
- * sk_* API to release the socket now. Otherwise go through the
- * inet_* layer to shut the socket down, and let userspace close it.
+ /* If the tunnel socket was created by userspace, then go through the
+ * inet layer to shut the socket down, and let userspace close it.
+ * Otherwise, if we created the socket directly within the kernel, use
+ * the sk API to release it here.
* In either case the tunnel resources are freed in the socket
* destructor when the tunnel socket goes away.
*/
- if (sock->file == NULL) {
- kernel_sock_shutdown(sock, SHUT_RDWR);
- sk_release_kernel(sk);
+ if (tunnel->fd >= 0) {
+ if (sock)
+ inet_shutdown(sock, 2);
} else {
- inet_shutdown(sock, 2);
+ if (sock)
+ kernel_sock_shutdown(sock, SHUT_RDWR);
+ sk_release_kernel(sk);
}
l2tp_tunnel_sock_put(sk);
/* Mark socket as an encapsulation socket. See net/ipv4/udp.c */
udp_sk(sk)->encap_type = UDP_ENCAP_L2TPINUDP;
udp_sk(sk)->encap_rcv = l2tp_udp_encap_recv;
+ udp_sk(sk)->encap_destroy = l2tp_udp_encap_destroy;
#if IS_ENABLED(CONFIG_IPV6)
if (sk->sk_family == PF_INET6)
udpv6_encap_enable();
*/
int l2tp_tunnel_delete(struct l2tp_tunnel *tunnel)
{
+ l2tp_tunnel_closeall(tunnel);
return (false == queue_work(l2tp_wq, &tunnel->del_work));
}
EXPORT_SYMBOL_GPL(l2tp_tunnel_delete);
*/
void l2tp_session_free(struct l2tp_session *session)
{
- struct l2tp_tunnel *tunnel;
+ struct l2tp_tunnel *tunnel = session->tunnel;
BUG_ON(atomic_read(&session->ref_count) != 0);
- tunnel = session->tunnel;
- if (tunnel != NULL) {
+ if (tunnel) {
BUG_ON(tunnel->magic != L2TP_TUNNEL_MAGIC);
+ if (session->session_id != 0)
+ atomic_dec(&l2tp_session_count);
+ sock_put(tunnel->sock);
+ session->tunnel = NULL;
+ l2tp_tunnel_dec_refcount(tunnel);
+ }
+
+ kfree(session);
- /* Delete the session from the hash */
+ return;
+}
+EXPORT_SYMBOL_GPL(l2tp_session_free);
+
+/* Remove an l2tp session from l2tp_core's hash lists.
+ * Provides a tidyup interface for pseudowire code which can't just route all
+ * shutdown via l2tp_session_delete and a pseudowire-specific session_close
+ * callback.
+ */
+void __l2tp_session_unhash(struct l2tp_session *session)
+{
+ struct l2tp_tunnel *tunnel = session->tunnel;
+
+ /* Remove the session from core hashes */
+ if (tunnel) {
+ /* Remove from the per-tunnel hash */
write_lock_bh(&tunnel->hlist_lock);
hlist_del_init(&session->hlist);
write_unlock_bh(&tunnel->hlist_lock);
- /* Unlink from the global hash if not L2TPv2 */
+ /* For L2TPv3 we have a per-net hash: remove from there, too */
if (tunnel->version != L2TP_HDR_VER_2) {
struct l2tp_net *pn = l2tp_pernet(tunnel->l2tp_net);
-
spin_lock_bh(&pn->l2tp_session_hlist_lock);
hlist_del_init_rcu(&session->global_hlist);
spin_unlock_bh(&pn->l2tp_session_hlist_lock);
synchronize_rcu();
}
-
- if (session->session_id != 0)
- atomic_dec(&l2tp_session_count);
-
- sock_put(tunnel->sock);
-
- /* This will delete the tunnel context if this
- * is the last session on the tunnel.
- */
- session->tunnel = NULL;
- l2tp_tunnel_dec_refcount(tunnel);
}
-
- kfree(session);
-
- return;
}
-EXPORT_SYMBOL_GPL(l2tp_session_free);
+EXPORT_SYMBOL_GPL(__l2tp_session_unhash);
/* This function is used by the netlink SESSION_DELETE command and by
pseudowire modules.
*/
int l2tp_session_delete(struct l2tp_session *session)
{
+ if (session->ref)
+ (*session->ref)(session);
+ __l2tp_session_unhash(session);
+ l2tp_session_queue_purge(session);
if (session->session_close != NULL)
(*session->session_close)(session);
-
+ if (session->deref)
+ (*session->deref)(session);
l2tp_session_dec_refcount(session);
-
return 0;
}
EXPORT_SYMBOL_GPL(l2tp_session_delete);
-
/* We come here whenever a session's send_seq, cookie_len or
* l2specific_len parameters are set.
*/
struct sk_buff;
struct l2tp_stats {
- u64 tx_packets;
- u64 tx_bytes;
- u64 tx_errors;
- u64 rx_packets;
- u64 rx_bytes;
- u64 rx_seq_discards;
- u64 rx_oos_packets;
- u64 rx_errors;
- u64 rx_cookie_discards;
- struct u64_stats_sync syncp;
+ atomic_long_t tx_packets;
+ atomic_long_t tx_bytes;
+ atomic_long_t tx_errors;
+ atomic_long_t rx_packets;
+ atomic_long_t rx_bytes;
+ atomic_long_t rx_seq_discards;
+ atomic_long_t rx_oos_packets;
+ atomic_long_t rx_errors;
+ atomic_long_t rx_cookie_discards;
};
struct l2tp_tunnel;
extern struct l2tp_tunnel *l2tp_tunnel_find_nth(struct net *net, int nth);
extern int l2tp_tunnel_create(struct net *net, int fd, int version, u32 tunnel_id, u32 peer_tunnel_id, struct l2tp_tunnel_cfg *cfg, struct l2tp_tunnel **tunnelp);
+extern void l2tp_tunnel_closeall(struct l2tp_tunnel *tunnel);
extern int l2tp_tunnel_delete(struct l2tp_tunnel *tunnel);
extern struct l2tp_session *l2tp_session_create(int priv_size, struct l2tp_tunnel *tunnel, u32 session_id, u32 peer_session_id, struct l2tp_session_cfg *cfg);
+extern void __l2tp_session_unhash(struct l2tp_session *session);
extern int l2tp_session_delete(struct l2tp_session *session);
extern void l2tp_session_free(struct l2tp_session *session);
extern void l2tp_recv_common(struct l2tp_session *session, struct sk_buff *skb, unsigned char *ptr, unsigned char *optr, u16 hdrflags, int length, int (*payload_hook)(struct sk_buff *skb));
+extern int l2tp_session_queue_purge(struct l2tp_session *session);
extern int l2tp_udp_encap_recv(struct sock *sk, struct sk_buff *skb);
extern int l2tp_xmit_skb(struct l2tp_session *session, struct sk_buff *skb, int hdr_len);
tunnel->sock ? atomic_read(&tunnel->sock->sk_refcnt) : 0,
atomic_read(&tunnel->ref_count));
- seq_printf(m, " %08x rx %llu/%llu/%llu rx %llu/%llu/%llu\n",
+ seq_printf(m, " %08x tx %ld/%ld/%ld rx %ld/%ld/%ld\n",
tunnel->debug,
- (unsigned long long)tunnel->stats.tx_packets,
- (unsigned long long)tunnel->stats.tx_bytes,
- (unsigned long long)tunnel->stats.tx_errors,
- (unsigned long long)tunnel->stats.rx_packets,
- (unsigned long long)tunnel->stats.rx_bytes,
- (unsigned long long)tunnel->stats.rx_errors);
+ atomic_long_read(&tunnel->stats.tx_packets),
+ atomic_long_read(&tunnel->stats.tx_bytes),
+ atomic_long_read(&tunnel->stats.tx_errors),
+ atomic_long_read(&tunnel->stats.rx_packets),
+ atomic_long_read(&tunnel->stats.rx_bytes),
+ atomic_long_read(&tunnel->stats.rx_errors));
if (tunnel->show != NULL)
tunnel->show(m, tunnel);
seq_printf(m, "\n");
}
- seq_printf(m, " %hu/%hu tx %llu/%llu/%llu rx %llu/%llu/%llu\n",
+ seq_printf(m, " %hu/%hu tx %ld/%ld/%ld rx %ld/%ld/%ld\n",
session->nr, session->ns,
- (unsigned long long)session->stats.tx_packets,
- (unsigned long long)session->stats.tx_bytes,
- (unsigned long long)session->stats.tx_errors,
- (unsigned long long)session->stats.rx_packets,
- (unsigned long long)session->stats.rx_bytes,
- (unsigned long long)session->stats.rx_errors);
+ atomic_long_read(&session->stats.tx_packets),
+ atomic_long_read(&session->stats.tx_bytes),
+ atomic_long_read(&session->stats.tx_errors),
+ atomic_long_read(&session->stats.rx_packets),
+ atomic_long_read(&session->stats.rx_bytes),
+ atomic_long_read(&session->stats.rx_errors));
if (session->show != NULL)
session->show(m, session);
static void l2tp_ip_destroy_sock(struct sock *sk)
{
struct sk_buff *skb;
+ struct l2tp_tunnel *tunnel = l2tp_sock_to_tunnel(sk);
while ((skb = __skb_dequeue_tail(&sk->sk_write_queue)) != NULL)
kfree_skb(skb);
+ if (tunnel) {
+ l2tp_tunnel_closeall(tunnel);
+ sock_put(sk);
+ }
+
sk_refcnt_debug_dec(sk);
}
static void l2tp_ip6_destroy_sock(struct sock *sk)
{
+ struct l2tp_tunnel *tunnel = l2tp_sock_to_tunnel(sk);
+
lock_sock(sk);
ip6_flush_pending_frames(sk);
release_sock(sk);
+ if (tunnel) {
+ l2tp_tunnel_closeall(tunnel);
+ sock_put(sk);
+ }
+
inet6_destroy_sock(sk);
}
#if IS_ENABLED(CONFIG_IPV6)
struct ipv6_pinfo *np = NULL;
#endif
- struct l2tp_stats stats;
- unsigned int start;
hdr = genlmsg_put(skb, portid, seq, &l2tp_nl_family, flags,
L2TP_CMD_TUNNEL_GET);
if (nest == NULL)
goto nla_put_failure;
- do {
- start = u64_stats_fetch_begin(&tunnel->stats.syncp);
- stats.tx_packets = tunnel->stats.tx_packets;
- stats.tx_bytes = tunnel->stats.tx_bytes;
- stats.tx_errors = tunnel->stats.tx_errors;
- stats.rx_packets = tunnel->stats.rx_packets;
- stats.rx_bytes = tunnel->stats.rx_bytes;
- stats.rx_errors = tunnel->stats.rx_errors;
- stats.rx_seq_discards = tunnel->stats.rx_seq_discards;
- stats.rx_oos_packets = tunnel->stats.rx_oos_packets;
- } while (u64_stats_fetch_retry(&tunnel->stats.syncp, start));
-
- if (nla_put_u64(skb, L2TP_ATTR_TX_PACKETS, stats.tx_packets) ||
- nla_put_u64(skb, L2TP_ATTR_TX_BYTES, stats.tx_bytes) ||
- nla_put_u64(skb, L2TP_ATTR_TX_ERRORS, stats.tx_errors) ||
- nla_put_u64(skb, L2TP_ATTR_RX_PACKETS, stats.rx_packets) ||
- nla_put_u64(skb, L2TP_ATTR_RX_BYTES, stats.rx_bytes) ||
+ if (nla_put_u64(skb, L2TP_ATTR_TX_PACKETS,
+ atomic_long_read(&tunnel->stats.tx_packets)) ||
+ nla_put_u64(skb, L2TP_ATTR_TX_BYTES,
+ atomic_long_read(&tunnel->stats.tx_bytes)) ||
+ nla_put_u64(skb, L2TP_ATTR_TX_ERRORS,
+ atomic_long_read(&tunnel->stats.tx_errors)) ||
+ nla_put_u64(skb, L2TP_ATTR_RX_PACKETS,
+ atomic_long_read(&tunnel->stats.rx_packets)) ||
+ nla_put_u64(skb, L2TP_ATTR_RX_BYTES,
+ atomic_long_read(&tunnel->stats.rx_bytes)) ||
nla_put_u64(skb, L2TP_ATTR_RX_SEQ_DISCARDS,
- stats.rx_seq_discards) ||
+ atomic_long_read(&tunnel->stats.rx_seq_discards)) ||
nla_put_u64(skb, L2TP_ATTR_RX_OOS_PACKETS,
- stats.rx_oos_packets) ||
- nla_put_u64(skb, L2TP_ATTR_RX_ERRORS, stats.rx_errors))
+ atomic_long_read(&tunnel->stats.rx_oos_packets)) ||
+ nla_put_u64(skb, L2TP_ATTR_RX_ERRORS,
+ atomic_long_read(&tunnel->stats.rx_errors)))
goto nla_put_failure;
nla_nest_end(skb, nest);
struct nlattr *nest;
struct l2tp_tunnel *tunnel = session->tunnel;
struct sock *sk = NULL;
- struct l2tp_stats stats;
- unsigned int start;
sk = tunnel->sock;
if (nest == NULL)
goto nla_put_failure;
- do {
- start = u64_stats_fetch_begin(&session->stats.syncp);
- stats.tx_packets = session->stats.tx_packets;
- stats.tx_bytes = session->stats.tx_bytes;
- stats.tx_errors = session->stats.tx_errors;
- stats.rx_packets = session->stats.rx_packets;
- stats.rx_bytes = session->stats.rx_bytes;
- stats.rx_errors = session->stats.rx_errors;
- stats.rx_seq_discards = session->stats.rx_seq_discards;
- stats.rx_oos_packets = session->stats.rx_oos_packets;
- } while (u64_stats_fetch_retry(&session->stats.syncp, start));
-
- if (nla_put_u64(skb, L2TP_ATTR_TX_PACKETS, stats.tx_packets) ||
- nla_put_u64(skb, L2TP_ATTR_TX_BYTES, stats.tx_bytes) ||
- nla_put_u64(skb, L2TP_ATTR_TX_ERRORS, stats.tx_errors) ||
- nla_put_u64(skb, L2TP_ATTR_RX_PACKETS, stats.rx_packets) ||
- nla_put_u64(skb, L2TP_ATTR_RX_BYTES, stats.rx_bytes) ||
+ if (nla_put_u64(skb, L2TP_ATTR_TX_PACKETS,
+ atomic_long_read(&session->stats.tx_packets)) ||
+ nla_put_u64(skb, L2TP_ATTR_TX_BYTES,
+ atomic_long_read(&session->stats.tx_bytes)) ||
+ nla_put_u64(skb, L2TP_ATTR_TX_ERRORS,
+ atomic_long_read(&session->stats.tx_errors)) ||
+ nla_put_u64(skb, L2TP_ATTR_RX_PACKETS,
+ atomic_long_read(&session->stats.rx_packets)) ||
+ nla_put_u64(skb, L2TP_ATTR_RX_BYTES,
+ atomic_long_read(&session->stats.rx_bytes)) ||
nla_put_u64(skb, L2TP_ATTR_RX_SEQ_DISCARDS,
- stats.rx_seq_discards) ||
+ atomic_long_read(&session->stats.rx_seq_discards)) ||
nla_put_u64(skb, L2TP_ATTR_RX_OOS_PACKETS,
- stats.rx_oos_packets) ||
- nla_put_u64(skb, L2TP_ATTR_RX_ERRORS, stats.rx_errors))
+ atomic_long_read(&session->stats.rx_oos_packets)) ||
+ nla_put_u64(skb, L2TP_ATTR_RX_ERRORS,
+ atomic_long_read(&session->stats.rx_errors)))
goto nla_put_failure;
nla_nest_end(skb, nest);
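The hunks above replace the u64_stats seqcount snapshot with plain atomic_long_t counters: writers bump them with atomic_long_inc() and readers take a lock-free atomic_long_read(), so the fetch_begin/retry loop and the local struct l2tp_stats copy are no longer needed. A minimal sketch of that counter pattern; foo_stats and the two helpers are illustrative names, not L2TP symbols:

    #include <linux/atomic.h>

    struct foo_stats {
            atomic_long_t rx_packets;
            atomic_long_t rx_errors;
    };

    static void foo_count_rx(struct foo_stats *st, bool bad)
    {
            /* atomic_long_inc() is safe from any context, no lock held */
            atomic_long_inc(bad ? &st->rx_errors : &st->rx_packets);
    }

    static long foo_read_rx(struct foo_stats *st)
    {
            /* a plain read replaces the fetch_begin/retry loop */
            return atomic_long_read(&st->rx_packets);
    }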
#include <net/ip.h>
#include <net/udp.h>
#include <net/xfrm.h>
+#include <net/inet_common.h>
#include <asm/byteorder.h>
#include <linux/atomic.h>
session->name);
/* Not bound. Nothing we can do, so discard. */
- session->stats.rx_errors++;
+ atomic_long_inc(&session->stats.rx_errors);
kfree_skb(skb);
}
{
struct pppol2tp_session *ps = l2tp_session_priv(session);
struct sock *sk = ps->sock;
- struct sk_buff *skb;
+ struct socket *sock = sk->sk_socket;
BUG_ON(session->magic != L2TP_SESSION_MAGIC);
- if (session->session_id == 0)
- goto out;
-
- if (sk != NULL) {
- lock_sock(sk);
-
- if (sk->sk_state & (PPPOX_CONNECTED | PPPOX_BOUND)) {
- pppox_unbind_sock(sk);
- sk->sk_state = PPPOX_DEAD;
- sk->sk_state_change(sk);
- }
-
- /* Purge any queued data */
- skb_queue_purge(&sk->sk_receive_queue);
- skb_queue_purge(&sk->sk_write_queue);
- while ((skb = skb_dequeue(&session->reorder_q))) {
- kfree_skb(skb);
- sock_put(sk);
- }
- release_sock(sk);
+ if (sock) {
+ inet_shutdown(sock, 2);
+ /* Don't let the session go away before our socket does */
+ l2tp_session_inc_refcount(session);
}
-
-out:
return;
}
*/
static void pppol2tp_session_destruct(struct sock *sk)
{
- struct l2tp_session *session;
-
- if (sk->sk_user_data != NULL) {
- session = sk->sk_user_data;
- if (session == NULL)
- goto out;
-
+ struct l2tp_session *session = sk->sk_user_data;
+ if (session) {
sk->sk_user_data = NULL;
BUG_ON(session->magic != L2TP_SESSION_MAGIC);
l2tp_session_dec_refcount(session);
}
-
-out:
return;
}
session = pppol2tp_sock_to_session(sk);
/* Purge any queued data */
- skb_queue_purge(&sk->sk_receive_queue);
- skb_queue_purge(&sk->sk_write_queue);
if (session != NULL) {
- struct sk_buff *skb;
- while ((skb = skb_dequeue(&session->reorder_q))) {
- kfree_skb(skb);
- sock_put(sk);
- }
+ __l2tp_session_unhash(session);
+ l2tp_session_queue_purge(session);
sock_put(sk);
}
+ skb_queue_purge(&sk->sk_receive_queue);
+ skb_queue_purge(&sk->sk_write_queue);
release_sock(sk);
return error;
}
-/* Called when deleting sessions via the netlink interface.
- */
-static int pppol2tp_session_delete(struct l2tp_session *session)
-{
- struct pppol2tp_session *ps = l2tp_session_priv(session);
-
- if (ps->sock == NULL)
- l2tp_session_dec_refcount(session);
-
- return 0;
-}
-
#endif /* CONFIG_L2TP_V3 */
/* getname() support.
static void pppol2tp_copy_stats(struct pppol2tp_ioc_stats *dest,
struct l2tp_stats *stats)
{
- dest->tx_packets = stats->tx_packets;
- dest->tx_bytes = stats->tx_bytes;
- dest->tx_errors = stats->tx_errors;
- dest->rx_packets = stats->rx_packets;
- dest->rx_bytes = stats->rx_bytes;
- dest->rx_seq_discards = stats->rx_seq_discards;
- dest->rx_oos_packets = stats->rx_oos_packets;
- dest->rx_errors = stats->rx_errors;
+ dest->tx_packets = atomic_long_read(&stats->tx_packets);
+ dest->tx_bytes = atomic_long_read(&stats->tx_bytes);
+ dest->tx_errors = atomic_long_read(&stats->tx_errors);
+ dest->rx_packets = atomic_long_read(&stats->rx_packets);
+ dest->rx_bytes = atomic_long_read(&stats->rx_bytes);
+ dest->rx_seq_discards = atomic_long_read(&stats->rx_seq_discards);
+ dest->rx_oos_packets = atomic_long_read(&stats->rx_oos_packets);
+ dest->rx_errors = atomic_long_read(&stats->rx_errors);
}
/* Session ioctl helper.
tunnel->name,
(tunnel == tunnel->sock->sk_user_data) ? 'Y' : 'N',
atomic_read(&tunnel->ref_count) - 1);
- seq_printf(m, " %08x %llu/%llu/%llu %llu/%llu/%llu\n",
+ seq_printf(m, " %08x %ld/%ld/%ld %ld/%ld/%ld\n",
tunnel->debug,
- (unsigned long long)tunnel->stats.tx_packets,
- (unsigned long long)tunnel->stats.tx_bytes,
- (unsigned long long)tunnel->stats.tx_errors,
- (unsigned long long)tunnel->stats.rx_packets,
- (unsigned long long)tunnel->stats.rx_bytes,
- (unsigned long long)tunnel->stats.rx_errors);
+ atomic_long_read(&tunnel->stats.tx_packets),
+ atomic_long_read(&tunnel->stats.tx_bytes),
+ atomic_long_read(&tunnel->stats.tx_errors),
+ atomic_long_read(&tunnel->stats.rx_packets),
+ atomic_long_read(&tunnel->stats.rx_bytes),
+ atomic_long_read(&tunnel->stats.rx_errors));
}
static void pppol2tp_seq_session_show(struct seq_file *m, void *v)
session->lns_mode ? "LNS" : "LAC",
session->debug,
jiffies_to_msecs(session->reorder_timeout));
- seq_printf(m, " %hu/%hu %llu/%llu/%llu %llu/%llu/%llu\n",
+ seq_printf(m, " %hu/%hu %ld/%ld/%ld %ld/%ld/%ld\n",
session->nr, session->ns,
- (unsigned long long)session->stats.tx_packets,
- (unsigned long long)session->stats.tx_bytes,
- (unsigned long long)session->stats.tx_errors,
- (unsigned long long)session->stats.rx_packets,
- (unsigned long long)session->stats.rx_bytes,
- (unsigned long long)session->stats.rx_errors);
+ atomic_long_read(&session->stats.tx_packets),
+ atomic_long_read(&session->stats.tx_bytes),
+ atomic_long_read(&session->stats.tx_errors),
+ atomic_long_read(&session->stats.rx_packets),
+ atomic_long_read(&session->stats.rx_bytes),
+ atomic_long_read(&session->stats.rx_errors));
if (po)
seq_printf(m, " interface %s\n", ppp_dev_name(&po->chan));
static const struct l2tp_nl_cmd_ops pppol2tp_nl_cmd_ops = {
.session_create = pppol2tp_session_create,
- .session_delete = pppol2tp_session_delete,
+ .session_delete = l2tp_session_delete,
};
#endif /* CONFIG_L2TP_V3 */
skb_reset_network_header(skb);
IP_VS_DBG(12, "ICMP for IPIP %pI4->%pI4: mtu=%u\n",
&ip_hdr(skb)->saddr, &ip_hdr(skb)->daddr, mtu);
- rcu_read_lock();
ipv4_update_pmtu(skb, dev_net(skb->dev),
mtu, 0, 0, 0, 0);
- rcu_read_unlock();
/* Client uses PMTUD? */
if (!(cih->frag_off & htons(IP_DF)))
goto ignore_ipip;
}
/* ipvs enabled in this netns ? */
net = skb_net(skb);
- if (!net_ipvs(net)->enable)
+ ipvs = net_ipvs(net);
+ if (unlikely(sysctl_backup_only(ipvs) || !ipvs->enable))
return NF_ACCEPT;
ip_vs_fill_iph_skb(af, skb, &iph);
}
IP_VS_DBG_PKT(11, af, pp, skb, 0, "Incoming packet");
- ipvs = net_ipvs(net);
/* Check the server status */
if (cp->dest && !(cp->dest->flags & IP_VS_DEST_F_AVAILABLE)) {
/* the destination server is not available */
{
int r;
struct net *net;
+ struct netns_ipvs *ipvs;
if (ip_hdr(skb)->protocol != IPPROTO_ICMP)
return NF_ACCEPT;
/* ipvs enabled in this netns ? */
net = skb_net(skb);
- if (!net_ipvs(net)->enable)
+ ipvs = net_ipvs(net);
+ if (unlikely(sysctl_backup_only(ipvs) || !ipvs->enable))
return NF_ACCEPT;
return ip_vs_in_icmp(skb, &r, hooknum);
{
int r;
struct net *net;
+ struct netns_ipvs *ipvs;
struct ip_vs_iphdr iphdr;
ip_vs_fill_iph_skb(AF_INET6, skb, &iphdr);
/* ipvs enabled in this netns ? */
net = skb_net(skb);
- if (!net_ipvs(net)->enable)
+ ipvs = net_ipvs(net);
+ if (unlikely(sysctl_backup_only(ipvs) || !ipvs->enable))
return NF_ACCEPT;
return ip_vs_in_icmp_v6(skb, &r, hooknum, &iphdr);
.mode = 0644,
.proc_handler = proc_dointvec,
},
+ {
+ .procname = "backup_only",
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = proc_dointvec,
+ },
#ifdef CONFIG_IP_VS_DEBUG
{
.procname = "debug_level",
tbl[idx++].data = &ipvs->sysctl_nat_icmp_send;
ipvs->sysctl_pmtu_disc = 1;
tbl[idx++].data = &ipvs->sysctl_pmtu_disc;
+ tbl[idx++].data = &ipvs->sysctl_backup_only;
ipvs->sysctl_hdr = register_net_sysctl(net, "net/ipv4/vs", tbl);
sctp_chunkhdr_t _sctpch, *sch;
unsigned char chunk_type;
int event, next_state;
- int ihl;
+ int ihl, cofs;
#ifdef CONFIG_IP_VS_IPV6
ihl = cp->af == AF_INET ? ip_hdrlen(skb) : sizeof(struct ipv6hdr);
ihl = ip_hdrlen(skb);
#endif
- sch = skb_header_pointer(skb, ihl + sizeof(sctp_sctphdr_t),
- sizeof(_sctpch), &_sctpch);
+ cofs = ihl + sizeof(sctp_sctphdr_t);
+ sch = skb_header_pointer(skb, cofs, sizeof(_sctpch), &_sctpch);
if (sch == NULL)
return;
*/
if ((sch->type == SCTP_CID_COOKIE_ECHO) ||
(sch->type == SCTP_CID_COOKIE_ACK)) {
- sch = skb_header_pointer(skb, (ihl + sizeof(sctp_sctphdr_t) +
- sch->length), sizeof(_sctpch), &_sctpch);
- if (sch) {
- if (sch->type == SCTP_CID_ABORT)
+ int clen = ntohs(sch->length);
+
+ if (clen >= sizeof(sctp_chunkhdr_t)) {
+ sch = skb_header_pointer(skb, cofs + ALIGN(clen, 4),
+ sizeof(_sctpch), &_sctpch);
+ if (sch && sch->type == SCTP_CID_ABORT)
chunk_type = sch->type;
}
}
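The SCTP change above stops trusting sch->length blindly: a chunk shorter than its own header is treated as malformed, and the next chunk is located by rounding the length up to the 4-byte boundary RFC 4960 requires (the length field does not count padding). A hypothetical helper restating the same offset rule; next_chunk_offset is not a kernel symbol:

    #include <linux/kernel.h>	/* ALIGN() */
    #include <linux/sctp.h>	/* sctp_chunkhdr_t */

    static int next_chunk_offset(int cofs, const sctp_chunkhdr_t *sch)
    {
            int clen = ntohs(sch->length);

            if (clen < sizeof(sctp_chunkhdr_t))
                    return -1;		/* malformed chunk: stop walking */
            return cofs + ALIGN(clen, 4);	/* skip the chunk plus padding */
    }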
{
int ret;
+ ret = register_pernet_subsys(&dccp_net_ops);
+ if (ret < 0)
+ goto out_pernet;
+
ret = nf_ct_l4proto_register(&dccp_proto4);
if (ret < 0)
goto out_dccp4;
if (ret < 0)
goto out_dccp6;
- ret = register_pernet_subsys(&dccp_net_ops);
- if (ret < 0)
- goto out_pernet;
-
return 0;
-out_pernet:
- nf_ct_l4proto_unregister(&dccp_proto6);
out_dccp6:
nf_ct_l4proto_unregister(&dccp_proto4);
out_dccp4:
+ unregister_pernet_subsys(&dccp_net_ops);
+out_pernet:
return ret;
}
{
int ret;
- ret = nf_ct_l4proto_register(&nf_conntrack_l4proto_gre4);
- if (ret < 0)
- goto out_gre4;
-
ret = register_pernet_subsys(&proto_gre_net_ops);
if (ret < 0)
goto out_pernet;
+ ret = nf_ct_l4proto_register(&nf_conntrack_l4proto_gre4);
+ if (ret < 0)
+ goto out_gre4;
+
return 0;
-out_pernet:
- nf_ct_l4proto_unregister(&nf_conntrack_l4proto_gre4);
out_gre4:
+ unregister_pernet_subsys(&proto_gre_net_ops);
+out_pernet:
return ret;
}
{
int ret;
+ ret = register_pernet_subsys(&sctp_net_ops);
+ if (ret < 0)
+ goto out_pernet;
+
ret = nf_ct_l4proto_register(&nf_conntrack_l4proto_sctp4);
if (ret < 0)
goto out_sctp4;
if (ret < 0)
goto out_sctp6;
- ret = register_pernet_subsys(&sctp_net_ops);
- if (ret < 0)
- goto out_pernet;
-
return 0;
-out_pernet:
- nf_ct_l4proto_unregister(&nf_conntrack_l4proto_sctp6);
out_sctp6:
nf_ct_l4proto_unregister(&nf_conntrack_l4proto_sctp4);
out_sctp4:
+ unregister_pernet_subsys(&sctp_net_ops);
+out_pernet:
return ret;
}
{
int ret;
+ ret = register_pernet_subsys(&udplite_net_ops);
+ if (ret < 0)
+ goto out_pernet;
+
ret = nf_ct_l4proto_register(&nf_conntrack_l4proto_udplite4);
if (ret < 0)
goto out_udplite4;
if (ret < 0)
goto out_udplite6;
- ret = register_pernet_subsys(&udplite_net_ops);
- if (ret < 0)
- goto out_pernet;
-
return 0;
-out_pernet:
- nf_ct_l4proto_unregister(&nf_conntrack_l4proto_udplite6);
out_udplite6:
nf_ct_l4proto_unregister(&nf_conntrack_l4proto_udplite4);
out_udplite4:
+ unregister_pernet_subsys(&udplite_net_ops);
+out_pernet:
return ret;
}
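The four init-order fixes above (dccp, gre, sctp, udplite) all apply one rule: register_pernet_subsys() must run before nf_ct_l4proto_register(), so the per-net state exists before the protocol handler can be reached, and the error path must unwind in exact reverse order. The skeleton of the pattern, with xxx standing in for the real protocol names:

    static struct pernet_operations xxx_net_ops;	/* placeholder */
    static struct nf_conntrack_l4proto proto_xxx;	/* placeholder */

    static int __init xxx_init(void)
    {
            int ret;

            ret = register_pernet_subsys(&xxx_net_ops);	/* state first */
            if (ret < 0)
                    goto out_pernet;

            ret = nf_ct_l4proto_register(&proto_xxx);	/* then expose it */
            if (ret < 0)
                    goto out_proto;

            return 0;
    out_proto:
            unregister_pernet_subsys(&xxx_net_ops);	/* reverse order */
    out_pernet:
            return ret;
    }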
inst->queue_num = queue_num;
inst->peer_portid = portid;
inst->queue_maxlen = NFQNL_QMAX_DEFAULT;
- inst->copy_range = 0xfffff;
+ inst->copy_range = 0xffff;
inst->copy_mode = NFQNL_COPY_NONE;
spin_lock_init(&inst->lock);
INIT_LIST_HEAD(&inst->queue_list);
int err = 0;
BUG_ON(grp->name[0] == '\0');
+ BUG_ON(memchr(grp->name, '\0', GENL_NAMSIZ) == NULL);
genl_lock();
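The added BUG_ON() guarantees the name is NUL-terminated somewhere inside its fixed GENL_NAMSIZ buffer before it is ever handled as a C string. The same check expressed as a stand-alone predicate; genl_name_ok is a hypothetical helper, not part of the genetlink API:

    #include <linux/string.h>		/* memchr() */
    #include <linux/genetlink.h>	/* GENL_NAMSIZ */

    static bool genl_name_ok(const char *name)
    {
            /* non-empty and NUL-terminated within the buffer */
            return name[0] != '\0' &&
                   memchr(name, '\0', GENL_NAMSIZ) != NULL;
    }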
}
}
-static void nfc_llcp_socket_release(struct nfc_llcp_local *local, bool listen)
+static void nfc_llcp_socket_release(struct nfc_llcp_local *local, bool listen,
+ int err)
{
struct sock *sk;
struct hlist_node *tmp;
nfc_llcp_accept_unlink(accept_sk);
+ if (err)
+ accept_sk->sk_err = err;
accept_sk->sk_state = LLCP_CLOSED;
+ accept_sk->sk_state_change(sk);
bh_unlock_sock(accept_sk);
continue;
}
+ if (err)
+ sk->sk_err = err;
sk->sk_state = LLCP_CLOSED;
+ sk->sk_state_change(sk);
bh_unlock_sock(sk);
}
write_unlock(&local->sockets.lock);
+
+ /*
+ * If we want to keep the listening sockets alive,
+ * we don't touch the RAW ones.
+ */
+ if (listen == true)
+ return;
+
+ write_lock(&local->raw_sockets.lock);
+
+ sk_for_each_safe(sk, tmp, &local->raw_sockets.head) {
+ llcp_sock = nfc_llcp_sock(sk);
+
+ bh_lock_sock(sk);
+
+ nfc_llcp_socket_purge(llcp_sock);
+
+ if (err)
+ sk->sk_err = err;
+ sk->sk_state = LLCP_CLOSED;
+ sk->sk_state_change(sk);
+
+ bh_unlock_sock(sk);
+
+ sock_orphan(sk);
+
+ sk_del_node_init(sk);
+ }
+
+ write_unlock(&local->raw_sockets.lock);
}
struct nfc_llcp_local *nfc_llcp_local_get(struct nfc_llcp_local *local)
return local;
}
-static void local_release(struct kref *ref)
+static void local_cleanup(struct nfc_llcp_local *local, bool listen)
{
- struct nfc_llcp_local *local;
-
- local = container_of(ref, struct nfc_llcp_local, ref);
-
- list_del(&local->list);
- nfc_llcp_socket_release(local, false);
+ nfc_llcp_socket_release(local, listen, ENXIO);
del_timer_sync(&local->link_timer);
skb_queue_purge(&local->tx_queue);
cancel_work_sync(&local->tx_work);
cancel_work_sync(&local->rx_work);
cancel_work_sync(&local->timeout_work);
kfree_skb(local->rx_pending);
+}
+
+static void local_release(struct kref *ref)
+{
+ struct nfc_llcp_local *local;
+
+ local = container_of(ref, struct nfc_llcp_local, ref);
+
+ list_del(&local->list);
+ local_cleanup(local, false);
kfree(local);
}
return;
/* Close and purge all existing sockets */
- nfc_llcp_socket_release(local, true);
+ nfc_llcp_socket_release(local, true, 0);
}
void nfc_llcp_mac_is_up(struct nfc_dev *dev, u32 target_idx,
return;
}
+ local_cleanup(local, false);
+
nfc_llcp_local_put(local);
}
pr_debug("Returning sk state %d\n", sk->sk_state);
+ sk_acceptq_removed(parent);
+
return sk;
}
if (skb->ip_summed == CHECKSUM_COMPLETE)
skb->csum = csum_sub(skb->csum, csum_partial(skb->data
- + ETH_HLEN, VLAN_HLEN, 0));
+ + (2 * ETH_ALEN), VLAN_HLEN, 0));
vhdr = (struct vlan_hdr *)(skb->data + ETH_HLEN);
*current_tci = vhdr->h_vlan_TCI;
if (skb->ip_summed == CHECKSUM_COMPLETE)
skb->csum = csum_add(skb->csum, csum_partial(skb->data
- + ETH_HLEN, VLAN_HLEN, 0));
+ + (2 * ETH_ALEN), VLAN_HLEN, 0));
}
__vlan_hwaccel_put_tag(skb, ntohs(vlan->vlan_tci) & ~VLAN_TAG_PRESENT);
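The checksum fix above is pure offset arithmetic; the comment block below restates the header layout it relies on:

    /*
     * Tagged frame layout (byte offsets from skb->data):
     *
     *   0        6        12     14    16
     *   dst MAC  src MAC  TPID   TCI   inner type  payload ...
     *
     * Popping the tag removes the 4 bytes (TPID + TCI) at offset 12,
     * i.e. at 2 * ETH_ALEN.  ETH_HLEN is 14 because it includes the
     * 2-byte type field, so a VLAN_HLEN window taken at ETH_HLEN would
     * fold the TCI plus the inner EtherType -- which stays in the
     * frame -- and corrupt a CHECKSUM_COMPLETE value.
     */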
skb_copy_and_csum_dev(skb, nla_data(nla));
+ genlmsg_end(user_skb, upcall);
err = genlmsg_unicast(net, user_skb, upcall_info->portid);
out:
if (IS_ERR(vport))
goto exit_unlock;
+ err = 0;
reply = ovs_vport_cmd_build_info(vport, info->snd_portid, info->snd_seq,
OVS_VPORT_CMD_NEW);
if (IS_ERR(reply)) {
if (IS_ERR(reply))
goto exit_unlock;
+ err = 0;
ovs_dp_detach_port(vport);
genl_notify(reply, genl_info_net(info), info->snd_portid,
return htons(ETH_P_802_2);
__skb_pull(skb, sizeof(struct llc_snap_hdr));
- return llc->ethertype;
+
+ if (ntohs(llc->ethertype) >= 1536)
+ return llc->ethertype;
+
+ return htons(ETH_P_802_2);
}
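The 1536 cutoff above comes from IEEE 802.3: a type/length field of 0x0600 (1536) or greater names an EtherType, while smaller values are frame lengths, so a SNAP header carrying a smaller value cannot identify a protocol and the flow falls back to raw 802.2. As a stand-alone predicate (the helper name is hypothetical):

    static bool llc_proto_is_ethertype(__be16 proto)
    {
            return ntohs(proto) >= 1536;	/* 0x0600 and up: EtherType */
    }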
static int parse_icmpv6(struct sk_buff *skb, struct sw_flow_key *key,
/* Make our own copy of the packet. Otherwise we will mangle the
* packet for anyone who came before us (e.g. tcpdump via AF_PACKET).
- * (No one comes after us, since we tell handle_bridge() that we took
- * the packet.) */
+ */
skb = skb_share_check(skb, GFP_ATOMIC);
if (unlikely(!skb))
return;
* @skb: skb that was received
*
* Must be called with rcu_read_lock. The packet cannot be shared and
- * skb->data should point to the Ethernet header. The caller must have already
- * called compute_ip_summed() to initialize the checksumming fields.
+ * skb->data should point to the Ethernet header.
*/
void ovs_vport_receive(struct vport *vport, struct sk_buff *skb)
{
transports) {
if (transport == active)
- break;
+ continue;
list_for_each_entry(chunk, &transport->transmitted,
transmitted_list) {
if (key == chunk->subh.data_hdr->tsn) {
}
	/* Delete the temporary new association. */
- sctp_add_cmd_sf(commands, SCTP_CMD_NEW_ASOC, SCTP_ASOC(new_asoc));
+ sctp_add_cmd_sf(commands, SCTP_CMD_SET_ASOC, SCTP_ASOC(new_asoc));
sctp_add_cmd_sf(commands, SCTP_CMD_DELETE_TCB, SCTP_NULL());
	/* Restore association pointer to provide SCTP command interpreter
list_add_tail(&task->u.tk_wait.list, &queue->tasks[0]);
task->tk_waitqueue = queue;
queue->qlen++;
+ /* barrier matches the read in rpc_wake_up_task_queue_locked() */
+ smp_wmb();
rpc_set_queued(task);
dprintk("RPC: %5u added to queue %p \"%s\"\n",
*/
static void rpc_wake_up_task_queue_locked(struct rpc_wait_queue *queue, struct rpc_task *task)
{
- if (RPC_IS_QUEUED(task) && task->tk_waitqueue == queue)
- __rpc_do_wake_up_task(queue, task);
+ if (RPC_IS_QUEUED(task)) {
+ smp_rmb();
+ if (task->tk_waitqueue == queue)
+ __rpc_do_wake_up_task(queue, task);
+ }
}
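The smp_wmb()/smp_rmb() pair above is the standard publish/consume pattern: the writer must make tk_waitqueue visible before the RPC_IS_QUEUED bit, and a reader that observes the bit must not see a stale queue pointer. Reduced to its skeleton with illustrative names (production code would also use READ_ONCE/ACCESS_ONCE on the flag):

    #include <linux/printk.h>

    static int data, flag;

    static void writer(int v)
    {
            data = v;
            smp_wmb();	/* order the data store before the flag store */
            flag = 1;
    }

    static void reader(void)
    {
            if (flag) {
                    smp_rmb();	/* pairs with the writer's smp_wmb() */
                    pr_info("data=%d\n", data);
            }
    }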
/*
#endif
}
-static int unix_release_sock(struct sock *sk, int embrion)
+static void unix_release_sock(struct sock *sk, int embrion)
{
struct unix_sock *u = unix_sk(sk);
struct path path;
if (unix_tot_inflight)
unix_gc(); /* Garbage collect fds */
-
- return 0;
}
static void init_peercred(struct sock *sk)
if (!sk)
return 0;
+ unix_release_sock(sk, 0);
sock->sk = NULL;
- return unix_release_sock(sk, 0);
+ return 0;
}
static int unix_autobind(struct socket *sock)
if (UNIXCB(skb).cred)
return;
if (test_bit(SOCK_PASSCRED, &sock->flags) ||
- !other->sk_socket ||
- test_bit(SOCK_PASSCRED, &other->sk_socket->flags)) {
+ (other->sk_socket &&
+ test_bit(SOCK_PASSCRED, &other->sk_socket->flags))) {
UNIXCB(skb).pid = get_pid(task_tgid(current));
UNIXCB(skb).cred = get_current_cred();
}
if (old_ctx) {
new_ctx = kmalloc(sizeof(*old_ctx) + old_ctx->ctx_len,
- GFP_KERNEL);
+ GFP_ATOMIC);
if (!new_ctx)
return -ENOMEM;
/* Only disallow PTRACE_TRACEME on more aggressive settings. */
switch (ptrace_scope) {
case YAMA_SCOPE_CAPABILITY:
- rcu_read_lock();
- if (!ns_capable(__task_cred(parent)->user_ns, CAP_SYS_PTRACE))
+ if (!has_ns_capability(parent, current_user_ns(), CAP_SYS_PTRACE))
rc = -EPERM;
- rcu_read_unlock();
break;
case YAMA_SCOPE_NO_ATTACH:
rc = -EPERM;
if (val & AC_DIG1_PROFESSIONAL)
sbits |= IEC958_AES0_PROFESSIONAL;
if (sbits & IEC958_AES0_PROFESSIONAL) {
- if (sbits & AC_DIG1_EMPHASIS)
+ if (val & AC_DIG1_EMPHASIS)
sbits |= IEC958_AES0_PRO_EMPHASIS_5015;
} else {
if (val & AC_DIG1_EMPHASIS)
BAD_NO_EXTRA_SURR_DAC = 0x101,
/* Primary DAC shared with main surrounds */
BAD_SHARED_SURROUND = 0x100,
+ /* No independent HP possible */
+ BAD_NO_INDEP_HP = 0x40,
/* Primary DAC shared with main CLFE */
BAD_SHARED_CLFE = 0x10,
/* Primary DAC shared with extra surrounds */
return snd_hda_get_path_idx(codec, path);
}
+/* check whether the independent HP is available with the current config */
+static bool indep_hp_possible(struct hda_codec *codec)
+{
+ struct hda_gen_spec *spec = codec->spec;
+ struct auto_pin_cfg *cfg = &spec->autocfg;
+ struct nid_path *path;
+ int i, idx;
+
+ if (cfg->line_out_type == AUTO_PIN_HP_OUT)
+ idx = spec->out_paths[0];
+ else
+ idx = spec->hp_paths[0];
+ path = snd_hda_get_path_from_idx(codec, idx);
+ if (!path)
+ return false;
+
+ /* assume no path conflicts unless aamix is involved */
+ if (!spec->mixer_nid || !is_nid_contained(path, spec->mixer_nid))
+ return true;
+
+ /* check whether output paths contain aamix */
+ for (i = 0; i < cfg->line_outs; i++) {
+ if (spec->out_paths[i] == idx)
+ break;
+ path = snd_hda_get_path_from_idx(codec, spec->out_paths[i]);
+ if (path && is_nid_contained(path, spec->mixer_nid))
+ return false;
+ }
+ for (i = 0; i < cfg->speaker_outs; i++) {
+ path = snd_hda_get_path_from_idx(codec, spec->speaker_paths[i]);
+ if (path && is_nid_contained(path, spec->mixer_nid))
+ return false;
+ }
+
+ return true;
+}
+
/* fill the empty entries in the dac array for speaker/hp with the
* shared dac pointed by the paths
*/
badness += BAD_MULTI_IO;
}
+ if (spec->indep_hp && !indep_hp_possible(codec))
+ badness += BAD_NO_INDEP_HP;
+
/* re-fill the shared DAC for speaker / headphone */
if (cfg->line_out_type != AUTO_PIN_HP_OUT)
refill_shared_dacs(codec, cfg->hp_outs,
cfg->speaker_pins, val);
}
+ /* clear indep_hp flag if not available */
+ if (spec->indep_hp && !indep_hp_possible(codec))
+ spec->indep_hp = 0;
+
kfree(best_cfg);
return 0;
}
unsigned int opened :1;
unsigned int running :1;
unsigned int irq_pending :1;
+ unsigned int prepared:1;
+ unsigned int locked:1;
/*
* For VIA:
* A flag to ensure DMA position is 0
struct timecounter azx_tc;
struct cyclecounter azx_cc;
+
+#ifdef CONFIG_SND_HDA_DSP_LOADER
+ struct mutex dsp_mutex;
+#endif
};
+/* DSP lock helpers */
+#ifdef CONFIG_SND_HDA_DSP_LOADER
+#define dsp_lock_init(dev) mutex_init(&(dev)->dsp_mutex)
+#define dsp_lock(dev) mutex_lock(&(dev)->dsp_mutex)
+#define dsp_unlock(dev) mutex_unlock(&(dev)->dsp_mutex)
+#define dsp_is_locked(dev) ((dev)->locked)
+#else
+#define dsp_lock_init(dev) do {} while (0)
+#define dsp_lock(dev) do {} while (0)
+#define dsp_unlock(dev) do {} while (0)
+#define dsp_is_locked(dev) 0
+#endif
+
/* CORB/RIRB */
struct azx_rb {
u32 *buf; /* CORB/RIRB buffer
/* card list (for power_save trigger) */
struct list_head list;
+
+#ifdef CONFIG_SND_HDA_DSP_LOADER
+ struct azx_dev saved_azx_dev;
+#endif
};
#define CREATE_TRACE_POINTS
dev = chip->capture_index_offset;
nums = chip->capture_streams;
}
- for (i = 0; i < nums; i++, dev++)
- if (!chip->azx_dev[dev].opened) {
- res = &chip->azx_dev[dev];
- if (res->assigned_key == key)
- break;
+ for (i = 0; i < nums; i++, dev++) {
+ struct azx_dev *azx_dev = &chip->azx_dev[dev];
+ dsp_lock(azx_dev);
+ if (!azx_dev->opened && !dsp_is_locked(azx_dev)) {
+ res = azx_dev;
+ if (res->assigned_key == key) {
+ res->opened = 1;
+ res->assigned_key = key;
+ dsp_unlock(azx_dev);
+ return azx_dev;
+ }
}
+ dsp_unlock(azx_dev);
+ }
if (res) {
+ dsp_lock(res);
res->opened = 1;
res->assigned_key = key;
+ dsp_unlock(res);
}
return res;
}
struct azx_dev *azx_dev = get_azx_dev(substream);
int ret;
+ dsp_lock(azx_dev);
+ if (dsp_is_locked(azx_dev)) {
+ ret = -EBUSY;
+ goto unlock;
+ }
+
mark_runtime_wc(chip, azx_dev, substream, false);
azx_dev->bufsize = 0;
azx_dev->period_bytes = 0;
ret = snd_pcm_lib_malloc_pages(substream,
params_buffer_bytes(hw_params));
if (ret < 0)
- return ret;
+ goto unlock;
mark_runtime_wc(chip, azx_dev, substream, true);
+ unlock:
+ dsp_unlock(azx_dev);
return ret;
}
struct hda_pcm_stream *hinfo = apcm->hinfo[substream->stream];
/* reset BDL address */
- azx_sd_writel(azx_dev, SD_BDLPL, 0);
- azx_sd_writel(azx_dev, SD_BDLPU, 0);
- azx_sd_writel(azx_dev, SD_CTL, 0);
- azx_dev->bufsize = 0;
- azx_dev->period_bytes = 0;
- azx_dev->format_val = 0;
+ dsp_lock(azx_dev);
+ if (!dsp_is_locked(azx_dev)) {
+ azx_sd_writel(azx_dev, SD_BDLPL, 0);
+ azx_sd_writel(azx_dev, SD_BDLPU, 0);
+ azx_sd_writel(azx_dev, SD_CTL, 0);
+ azx_dev->bufsize = 0;
+ azx_dev->period_bytes = 0;
+ azx_dev->format_val = 0;
+ }
snd_hda_codec_cleanup(apcm->codec, hinfo, substream);
mark_runtime_wc(chip, azx_dev, substream, false);
+ azx_dev->prepared = 0;
+ dsp_unlock(azx_dev);
return snd_pcm_lib_free_pages(substream);
}
snd_hda_spdif_out_of_nid(apcm->codec, hinfo->nid);
unsigned short ctls = spdif ? spdif->ctls : 0;
+ dsp_lock(azx_dev);
+ if (dsp_is_locked(azx_dev)) {
+ err = -EBUSY;
+ goto unlock;
+ }
+
azx_stream_reset(chip, azx_dev);
format_val = snd_hda_calc_stream_format(runtime->rate,
runtime->channels,
snd_printk(KERN_ERR SFX
"%s: invalid format_val, rate=%d, ch=%d, format=%d\n",
pci_name(chip->pci), runtime->rate, runtime->channels, runtime->format);
- return -EINVAL;
+ err = -EINVAL;
+ goto unlock;
}
bufsize = snd_pcm_lib_buffer_bytes(substream);
azx_dev->no_period_wakeup = runtime->no_period_wakeup;
err = azx_setup_periods(chip, substream, azx_dev);
if (err < 0)
- return err;
+ goto unlock;
}
/* wallclk has 24Mhz clock source */
if ((chip->driver_caps & AZX_DCAPS_CTX_WORKAROUND) &&
stream_tag > chip->capture_streams)
stream_tag -= chip->capture_streams;
- return snd_hda_codec_prepare(apcm->codec, hinfo, stream_tag,
+ err = snd_hda_codec_prepare(apcm->codec, hinfo, stream_tag,
azx_dev->format_val, substream);
+
+ unlock:
+ if (!err)
+ azx_dev->prepared = 1;
+ dsp_unlock(azx_dev);
+ return err;
}
static int azx_pcm_trigger(struct snd_pcm_substream *substream, int cmd)
azx_dev = get_azx_dev(substream);
trace_azx_pcm_trigger(chip, azx_dev, cmd);
+ if (dsp_is_locked(azx_dev) || !azx_dev->prepared)
+ return -EPIPE;
+
switch (cmd) {
case SNDRV_PCM_TRIGGER_START:
rstart = 1;
struct azx_dev *azx_dev;
int err;
- if (snd_hda_lock_devices(bus))
- return -EBUSY;
+ azx_dev = azx_get_dsp_loader_dev(chip);
+
+ dsp_lock(azx_dev);
+ spin_lock_irq(&chip->reg_lock);
+ if (azx_dev->running || azx_dev->locked) {
+ spin_unlock_irq(&chip->reg_lock);
+ err = -EBUSY;
+ goto unlock;
+ }
+ azx_dev->prepared = 0;
+ chip->saved_azx_dev = *azx_dev;
+ azx_dev->locked = 1;
+ spin_unlock_irq(&chip->reg_lock);
err = snd_dma_alloc_pages(SNDRV_DMA_TYPE_DEV_SG,
snd_dma_pci_data(chip->pci),
byte_size, bufp);
if (err < 0)
- goto unlock;
+ goto err_alloc;
mark_pages_wc(chip, bufp, true);
- azx_dev = azx_get_dsp_loader_dev(chip);
azx_dev->bufsize = byte_size;
azx_dev->period_bytes = byte_size;
azx_dev->format_val = format;
goto error;
azx_setup_controller(chip, azx_dev);
+ dsp_unlock(azx_dev);
return azx_dev->stream_tag;
error:
mark_pages_wc(chip, bufp, false);
snd_dma_free_pages(bufp);
-unlock:
- snd_hda_unlock_devices(bus);
+ err_alloc:
+ spin_lock_irq(&chip->reg_lock);
+ if (azx_dev->opened)
+ *azx_dev = chip->saved_azx_dev;
+ azx_dev->locked = 0;
+ spin_unlock_irq(&chip->reg_lock);
+ unlock:
+ dsp_unlock(azx_dev);
return err;
}
struct azx *chip = bus->private_data;
struct azx_dev *azx_dev = azx_get_dsp_loader_dev(chip);
- if (!dmab->area)
+ if (!dmab->area || !azx_dev->locked)
return;
+ dsp_lock(azx_dev);
/* reset BDL address */
azx_sd_writel(azx_dev, SD_BDLPL, 0);
azx_sd_writel(azx_dev, SD_BDLPU, 0);
snd_dma_free_pages(dmab);
dmab->area = NULL;
- snd_hda_unlock_devices(bus);
+ spin_lock_irq(&chip->reg_lock);
+ if (azx_dev->opened)
+ *azx_dev = chip->saved_azx_dev;
+ azx_dev->locked = 0;
+ spin_unlock_irq(&chip->reg_lock);
+ dsp_unlock(azx_dev);
}
#endif /* CONFIG_SND_HDA_DSP_LOADER */
}
for (i = 0; i < chip->num_streams; i++) {
+ dsp_lock_init(&chip->azx_dev[i]);
/* allocate memory for the BDL for each stream */
err = snd_dma_alloc_pages(SNDRV_DMA_TYPE_DEV,
snd_dma_pci_data(chip->pci),
snd_hda_gen_update_outputs(codec);
if (spec->gpio_eapd_hp) {
- unsigned int gpio = spec->gen.hp_jack_present ?
+ spec->gpio_data = spec->gen.hp_jack_present ?
spec->gpio_eapd_hp : spec->gpio_eapd_speaker;
snd_hda_codec_write(codec, 0x01, 0,
- AC_VERB_SET_GPIO_DATA, gpio);
+ AC_VERB_SET_GPIO_DATA, spec->gpio_data);
}
}
}
if (spec->beep_amp)
- snd_hda_attach_beep_device(codec, spec->beep_amp);
+ snd_hda_attach_beep_device(codec, get_amp_nid_(spec->beep_amp));
return 0;
}
}
if (spec->beep_amp)
- snd_hda_attach_beep_device(codec, spec->beep_amp);
+ snd_hda_attach_beep_device(codec, get_amp_nid_(spec->beep_amp));
return 0;
}
}
if (spec->beep_amp)
- snd_hda_attach_beep_device(codec, spec->beep_amp);
+ snd_hda_attach_beep_device(codec, get_amp_nid_(spec->beep_amp));
return 0;
}
return 0;
}
+static void cx_auto_free(struct hda_codec *codec)
+{
+ snd_hda_detach_beep_device(codec);
+ snd_hda_gen_free(codec);
+}
+
static const struct hda_codec_ops cx_auto_patch_ops = {
.build_controls = cx_auto_build_controls,
.build_pcms = snd_hda_gen_build_pcms,
.init = snd_hda_gen_init,
- .free = snd_hda_gen_free,
+ .free = cx_auto_free,
.unsol_event = snd_hda_jack_unsol_event,
#ifdef CONFIG_PM
.check_power_status = snd_hda_gen_check_power_status,
codec->patch_ops = cx_auto_patch_ops;
if (spec->beep_amp)
- snd_hda_attach_beep_device(codec, spec->beep_amp);
+ snd_hda_attach_beep_device(codec, get_amp_nid_(spec->beep_amp));
/* Some laptops with Conexant chips show stalls in S3 resume,
* which falls into the single-cmd mode.
case UAC2_CLOCK_SELECTOR: {
struct uac_selector_unit_descriptor *d = p1;
/* call recursively to retrieve the channel info */
- if (check_input_term(state, d->baSourceID[0], term) < 0)
- return -ENODEV;
+ err = check_input_term(state, d->baSourceID[0], term);
+ if (err < 0)
+ return err;
term->type = d->bDescriptorSubtype << 16; /* virtual type */
term->id = id;
term->name = uac_selector_unit_iSelector(d);
case UAC1_PROCESSING_UNIT:
case UAC1_EXTENSION_UNIT:
/* UAC2_PROCESSING_UNIT_V2 */
- /* UAC2_EFFECT_UNIT */ {
+ /* UAC2_EFFECT_UNIT */
+ case UAC2_EXTENSION_UNIT_V2: {
struct uac_processing_unit_descriptor *d = p1;
if (state->mixer->protocol == UAC_VERSION_2 &&
return err;
/* determine the input source type and name */
- if (check_input_term(state, hdr->bSourceID, &iterm) < 0)
- return -EINVAL;
+ err = check_input_term(state, hdr->bSourceID, &iterm);
+ if (err < 0)
+ return err;
master_bits = snd_usb_combine_bytes(bmaControls, csize);
/* master configuration quirks */
return parse_audio_extension_unit(state, unitid, p1);
else /* UAC_VERSION_2 */
return parse_audio_processing_unit(state, unitid, p1);
+ case UAC2_EXTENSION_UNIT_V2:
+ return parse_audio_extension_unit(state, unitid, p1);
default:
snd_printk(KERN_ERR "usbaudio: unit %u: unexpected type 0x%02x\n", unitid, p1[2]);
return -EINVAL;
state.oterm.type = le16_to_cpu(desc->wTerminalType);
state.oterm.name = desc->iTerminal;
err = parse_audio_unit(&state, desc->bSourceID);
- if (err < 0)
+ if (err < 0 && err != -EINVAL)
return err;
} else { /* UAC_VERSION_2 */
struct uac2_output_terminal_descriptor *desc = p;
state.oterm.type = le16_to_cpu(desc->wTerminalType);
state.oterm.name = desc->iTerminal;
err = parse_audio_unit(&state, desc->bSourceID);
- if (err < 0)
+ if (err < 0 && err != -EINVAL)
return err;
/* for UAC2, use the same approach to also add the clock selectors */
err = parse_audio_unit(&state, desc->bCSourceID);
- if (err < 0)
+ if (err < 0 && err != -EINVAL)
return err;
}
}
EVENT_PARSE_VERSION = $(EP_VERSION).$(EP_PATCHLEVEL).$(EP_EXTRAVERSION)
-INCLUDES = -I. -I/usr/local/include $(CONFIG_INCLUDES)
+INCLUDES = -I. $(CONFIG_INCLUDES)
# Set compile option CFLAGS if not set elsewhere
CFLAGS ?= -g -Wall
PERF_DEBUG = $(DEBUG)
endif
ifndef PERF_DEBUG
- CFLAGS_OPTIMIZE = -O6 -D_FORTIFY_SOURCE=2
+ CFLAGS_OPTIMIZE = -O6
endif
ifdef PARSER_DEBUG
CFLAGS := $(CFLAGS) -Wvolatile-register-var
endif
+ifndef PERF_DEBUG
+ ifeq ($(call try-cc,$(SOURCE_HELLO),$(CFLAGS) -D_FORTIFY_SOURCE=2,-D_FORTIFY_SOURCE=2),y)
+ CFLAGS := $(CFLAGS) -D_FORTIFY_SOURCE=2
+ endif
+endif
+
### --- END CONFIGURATION SECTION ---
ifeq ($(srctree),)
#ifndef BENCH_H
#define BENCH_H
+/*
+ * The madvise transparent hugepage constants were added in glibc
+ * 2.13. For compatibility with older versions of glibc, define these
+ * tokens if they are not already defined.
+ *
+ * PA-RISC uses different madvise values from other architectures and
+ * needs to be special-cased.
+ */
+#ifdef __hppa__
+# ifndef MADV_HUGEPAGE
+# define MADV_HUGEPAGE 67
+# endif
+# ifndef MADV_NOHUGEPAGE
+# define MADV_NOHUGEPAGE 68
+# endif
+#else
+# ifndef MADV_HUGEPAGE
+# define MADV_HUGEPAGE 14
+# endif
+# ifndef MADV_NOHUGEPAGE
+# define MADV_NOHUGEPAGE 15
+# endif
+#endif
+
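+With the fallback definitions in place the benchmarks can issue the hints
+on any glibc. A hedged userspace sketch of how such a constant is
+typically applied; alloc_thp and its error handling are illustrative,
+not part of the perf bench code:
+
+    #include <sys/mman.h>
+    #include <stdio.h>
+
+    static void *alloc_thp(size_t len)
+    {
+            void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
+                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+
+            if (p == MAP_FAILED)
+                    return NULL;
+            if (madvise(p, len, MADV_HUGEPAGE))	/* a hint, not a guarantee */
+                    perror("madvise");
+            return p;
+    }
+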
extern int bench_numa(int argc, const char **argv, const char *prefix);
extern int bench_sched_messaging(int argc, const char **argv, const char *prefix);
extern int bench_sched_pipe(int argc, const char **argv, const char *prefix);
perf_event__synthesize_guest_os, tool);
}
- if (!opts->target.system_wide)
+ if (perf_target__has_task(&opts->target))
err = perf_event__synthesize_thread_map(tool, evsel_list->threads,
process_synthesized_event,
machine);
- else
+ else if (perf_target__has_cpu(&opts->target))
err = perf_event__synthesize_threads(tool, process_synthesized_event,
machine);
+ else /* command specified */
+ err = 0;
if (err != 0)
goto out_delete_session;
return 0;
}
-#define K_LEFT -1
-#define K_RIGHT -2
+#define K_LEFT -1000
+#define K_RIGHT -2000
+#define K_SWITCH_INPUT_DATA -3000
#endif
#ifdef GTK2_SUPPORT
slist->rblist.node_delete = strlist__node_delete;
slist->dupstr = dupstr;
- if (slist && strlist__parse_list(slist, list) != 0)
+ if (list && strlist__parse_list(slist, list) != 0)
goto out_error;
}
u32 redir_index = (ioapic->ioregsel - 0x10) >> 1;
u64 redir_content;
- ASSERT(redir_index < IOAPIC_NUM_PINS);
+ if (redir_index < IOAPIC_NUM_PINS)
+ redir_content =
+ ioapic->redirtbl[redir_index].bits;
+ else
+ redir_content = ~0ULL;
- redir_content = ioapic->redirtbl[redir_index].bits;
result = (ioapic->ioregsel & 0x1) ?
(redir_content >> 32) & 0xffffffff :
redir_content & 0xffffffff;