<chapter id="uart16x50">
<title>16x50 UART Driver</title>
!Edrivers/tty/serial/serial_core.c
-!Edrivers/tty/serial/8250/8250.c
+!Edrivers/tty/serial/8250/8250_core.c
</chapter>
<chapter id="fbdev">
--- /dev/null
+Cluster-wide Power-up/power-down race avoidance algorithm
+=========================================================
+
+This file documents the algorithm which is used to coordinate CPU and
+cluster setup and teardown operations and to manage hardware coherency
+controls safely.
+
+The section "Rationale" explains what the algorithm is for and why it is
+needed. "Basic model" explains general concepts using a simplified view
+of the system. The other sections explain the actual details of the
+algorithm in use.
+
+
+Rationale
+---------
+
+In a system containing multiple CPUs, it is desirable to have the
+ability to turn off individual CPUs when the system is idle, reducing
+power consumption and thermal dissipation.
+
+In a system containing multiple clusters of CPUs, it is also desirable
+to have the ability to turn off entire clusters.
+
+Turning entire clusters off and on is a risky business, because it
+involves performing potentially destructive operations affecting a group
+of independently running CPUs, while the OS continues to run. This
+means that we need some coordination in order to ensure that critical
+cluster-level operations are only performed when it is truly safe to do
+so.
+
+Simple locking may not be sufficient to solve this problem, because
+mechanisms like Linux spinlocks may rely on coherency mechanisms which
+are not immediately enabled when a cluster powers up. Since enabling or
+disabling those mechanisms may itself be a non-atomic operation (such as
+writing some hardware registers and invalidating large caches), other
+methods of coordination are required in order to guarantee safe
+power-down and power-up at the cluster level.
+
+The mechanism presented in this document is a coherent-memory-based
+protocol for performing the needed coordination. It aims to be as
+lightweight as possible, while providing the required safety properties.
+
+
+Basic model
+-----------
+
+Each cluster and CPU is assigned a state, as follows:
+
+ DOWN
+ COMING_UP
+ UP
+ GOING_DOWN
+
+ +---------> UP ----------+
+ | v
+
+ COMING_UP GOING_DOWN
+
+ ^ |
+ +--------- DOWN <--------+
+
+
+DOWN: The CPU or cluster is not coherent, and is either powered off or
+ suspended, or is ready to be powered off or suspended.
+
+COMING_UP: The CPU or cluster has committed to moving to the UP state.
+ It may be part way through the process of initialisation and
+ enabling coherency.
+
+UP: The CPU or cluster is active and coherent at the hardware
+ level. A CPU in this state is not necessarily being used
+ actively by the kernel.
+
+GOING_DOWN: The CPU or cluster has committed to moving to the DOWN
+ state. It may be part way through the process of teardown and
+ coherency exit.
+
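+For illustration only, the basic model could be written down as a tiny
+C sketch (the names and encodings here are illustrative, not the ones
+the MCPM code uses):
+
+	enum basic_state { DOWN, COMING_UP, UP, GOING_DOWN };
+
+	/* the single legal transition out of each state */
+	static const enum basic_state next_state[] = {
+		[DOWN]		= COMING_UP,
+		[COMING_UP]	= UP,
+		[UP]		= GOING_DOWN,
+		[GOING_DOWN]	= DOWN,
+	};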
+
+Each CPU has one of these states assigned to it at any point in time.
+The CPU states are described in the "CPU state" section, below.
+
+Each cluster is also assigned a state, but it is necessary to split the
+state value into two parts (the "cluster" state and "inbound" state) and
+to introduce additional states in order to avoid races between different
+CPUs in the cluster simultaneously modifying the state. The cluster-
+level states are described in the "Cluster state" section.
+
+To help distinguish the CPU states from cluster states in this
+discussion, the state names are given a CPU_ prefix for the CPU states,
+and a CLUSTER_ or INBOUND_ prefix for the cluster states.
+
+
+CPU state
+---------
+
+In this algorithm, each individual core in a multi-core processor is
+referred to as a "CPU". CPUs are assumed to be single-threaded:
+therefore, a CPU can only be doing one thing at a single point in time.
+
+This means that CPUs fit the basic model closely.
+
+The algorithm defines the following states for each CPU in the system:
+
+ CPU_DOWN
+ CPU_COMING_UP
+ CPU_UP
+ CPU_GOING_DOWN
+
+ cluster setup and
+ CPU setup complete policy decision
+ +-----------> CPU_UP ------------+
+ | v
+
+ CPU_COMING_UP CPU_GOING_DOWN
+
+ ^ |
+ +----------- CPU_DOWN <----------+
+ policy decision CPU teardown complete
+ or hardware event
+
+
+The definitions of the four states correspond closely to the states of
+the basic model.
+
+Transitions between states occur as follows.
+
+A trigger event (spontaneous) means that the CPU can transition to the
+next state as a result of making local progress only, with no
+requirement for any external event to happen.
+
+
+CPU_DOWN:
+
+ A CPU reaches the CPU_DOWN state when it is ready for
+ power-down. On reaching this state, the CPU will typically
+ power itself down or suspend itself, via a WFI instruction or a
+ firmware call.
+
+ Next state: CPU_COMING_UP
+ Conditions: none
+
+ Trigger events:
+
+ a) an explicit hardware power-up operation, resulting
+ from a policy decision on another CPU;
+
+ b) a hardware event, such as an interrupt.
+
+
+CPU_COMING_UP:
+
+ A CPU cannot start participating in hardware coherency until the
+ cluster is set up and coherent. If the cluster is not ready,
+ then the CPU will wait in the CPU_COMING_UP state until the
+ cluster has been set up.
+
+ Next state: CPU_UP
+ Conditions: The CPU's parent cluster must be in CLUSTER_UP.
+ Trigger events: Transition of the parent cluster to CLUSTER_UP.
+
+ Refer to the "Cluster state" section for a description of the
+ CLUSTER_UP state.
+
+
+CPU_UP:
+ When a CPU reaches the CPU_UP state, it is safe for the CPU to
+ start participating in local coherency.
+
+ This is done by jumping to the kernel's CPU resume code.
+
+ Note that the definition of this state is slightly different
+ from the basic model definition: CPU_UP does not mean that the
+ CPU is coherent yet, but it does mean that it is safe to resume
+ the kernel. The kernel handles the rest of the resume
+ procedure, so the remaining steps are not visible as part of the
+ race avoidance algorithm.
+
+ The CPU remains in this state until an explicit policy decision
+ is made to shut down or suspend the CPU.
+
+ Next state: CPU_GOING_DOWN
+ Conditions: none
+ Trigger events: explicit policy decision
+
+
+CPU_GOING_DOWN:
+
+ While in this state, the CPU exits coherency, including any
+ operations required to achieve this (such as cleaning data
+ caches).
+
+ Next state: CPU_DOWN
+ Conditions: local CPU teardown complete
+ Trigger events: (spontaneous)
+
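+For example, a platform's CPU power-down path might walk these states
+as follows (a sketch using the __mcpm_cpu_*() helpers from mcpm_entry.c
+later in this series; the cache teardown helper is hypothetical):
+
+	__mcpm_cpu_going_down(cpu, cluster); /* CPU_UP -> CPU_GOING_DOWN */
+	exit_coherency_and_flush_caches();   /* hypothetical local teardown */
+	__mcpm_cpu_down(cpu, cluster);	     /* CPU_GOING_DOWN -> CPU_DOWN */
+	wfi();				     /* power off or suspend */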
+
+Cluster state
+-------------
+
+A cluster is a group of connected CPUs with some common resources.
+Because a cluster contains multiple CPUs, it can be doing multiple
+things at the same time. This has some implications. In particular, a
+CPU can start up while another CPU is tearing the cluster down.
+
+In this discussion, the "outbound side" is the view of the cluster state
+as seen by a CPU tearing the cluster down. The "inbound side" is the
+view of the cluster state as seen by a CPU setting the cluster up.
+
+In order to enable safe coordination in such situations, it is important
+that a CPU which is setting up the cluster can advertise its state
+independently of the CPU which is tearing down the cluster. For this
+reason, the cluster state is split into two parts:
+
+ "cluster" state: The global state of the cluster; or the state
+ on the outbound side:
+
+ CLUSTER_DOWN
+ CLUSTER_UP
+ CLUSTER_GOING_DOWN
+
+ "inbound" state: The state of the cluster on the inbound side.
+
+ INBOUND_NOT_COMING_UP
+ INBOUND_COMING_UP
+
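+	In the ARM implementation later in this series, the two parts are
+	kept in ordinary memory, roughly as follows (a simplified sketch:
+	the real struct mcpm_sync_struct also pads every member out to
+	the cache writeback granule so that each field can be read and
+	written independently by CPUs whose caches are off):
+
+		struct mcpm_sync_struct {
+			struct { s8 cpu; } cpus[MAX_CPUS_PER_CLUSTER];
+			s8 cluster;	/* CLUSTER_*, outbound side */
+			s8 inbound;	/* INBOUND_*, inbound side */
+		};
+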
+
+ The different pairings of these states result in six possible
+ states for the cluster as a whole:
+
+ CLUSTER_UP
+ +==========> INBOUND_NOT_COMING_UP -------------+
+ # |
+ |
+ CLUSTER_UP <----+ |
+ INBOUND_COMING_UP | v
+
+ ^ CLUSTER_GOING_DOWN CLUSTER_GOING_DOWN
+ # INBOUND_COMING_UP <=== INBOUND_NOT_COMING_UP
+
+ CLUSTER_DOWN | |
+ INBOUND_COMING_UP <----+ |
+ |
+ ^ |
+ +=========== CLUSTER_DOWN <------------+
+ INBOUND_NOT_COMING_UP
+
+ Transitions -----> can only be made by the outbound CPU, and
+ only involve changes to the "cluster" state.
+
+ Transitions ===##> can only be made by the inbound CPU, and only
+ involve changes to the "inbound" state, except where there is no
+ further transition possible on the outbound side (i.e., the
+ outbound CPU has put the cluster into the CLUSTER_DOWN state).
+
+ The race avoidance algorithm does not provide a way to determine
+ which exact CPUs within the cluster play these roles. This must
+ be decided in advance by some other means. Refer to the section
+ "Last man and first man selection" for more explanation.
+
+
+ CLUSTER_DOWN/INBOUND_NOT_COMING_UP is the only state where the
+ cluster can actually be powered down.
+
+ The parallelism of the inbound and outbound CPUs is reflected in
+ the existence of two different paths from CLUSTER_GOING_DOWN/
+ INBOUND_NOT_COMING_UP (corresponding to GOING_DOWN in the basic
+ model) to CLUSTER_DOWN/INBOUND_COMING_UP (corresponding to
+ COMING_UP in the basic model). The second path avoids cluster
+ teardown completely.
+
+ CLUSTER_UP/INBOUND_COMING_UP is equivalent to UP in the basic
+ model. The final transition to CLUSTER_UP/INBOUND_NOT_COMING_UP
+ is trivial and merely resets the state machine ready for the
+ next cycle.
+
+ Details of the allowable transitions follow.
+
+ The next state in each case is notated
+
+ <cluster state>/<inbound state> (<transitioner>)
+
+ where the <transitioner> is the side on which the transition
+ can occur; either the inbound or the outbound side.
+
+
+CLUSTER_DOWN/INBOUND_NOT_COMING_UP:
+
+ Next state: CLUSTER_DOWN/INBOUND_COMING_UP (inbound)
+ Conditions: none
+ Trigger events:
+
+ a) an explicit hardware power-up operation, resulting
+ from a policy decision on another CPU;
+
+ b) a hardware event, such as an interrupt.
+
+
+CLUSTER_DOWN/INBOUND_COMING_UP:
+
+ In this state, an inbound CPU sets up the cluster, including
+ enabling of hardware coherency at the cluster level and any
+ other operations (such as cache invalidation) which are required
+ in order to achieve this.
+
+ The purpose of this state is to do sufficient cluster-level
+ setup to enable other CPUs in the cluster to enter coherency
+ safely.
+
+ Next state: CLUSTER_UP/INBOUND_COMING_UP (inbound)
+ Conditions: cluster-level setup and hardware coherency complete
+ Trigger events: (spontaneous)
+
+
+CLUSTER_UP/INBOUND_COMING_UP:
+
+ Cluster-level setup is complete and hardware coherency is
+ enabled for the cluster. Other CPUs in the cluster can safely
+ enter coherency.
+
+ This is a transient state, leading immediately to
+ CLUSTER_UP/INBOUND_NOT_COMING_UP. All other CPUs on the cluster
+ should treat these two states as equivalent.
+
+ Next state: CLUSTER_UP/INBOUND_NOT_COMING_UP (inbound)
+ Conditions: none
+ Trigger events: (spontaneous)
+
+
+CLUSTER_UP/INBOUND_NOT_COMING_UP:
+
+ Cluster-level setup is complete and hardware coherency is
+ enabled for the cluster. Other CPUs in the cluster can safely
+ enter coherency.
+
+ The cluster will remain in this state until a policy decision is
+ made to power the cluster down.
+
+ Next state: CLUSTER_GOING_DOWN/INBOUND_NOT_COMING_UP (outbound)
+ Conditions: none
+ Trigger events: policy decision to power down the cluster
+
+
+CLUSTER_GOING_DOWN/INBOUND_NOT_COMING_UP:
+
+ An outbound CPU is tearing the cluster down. The selected CPU
+ must wait in this state until all CPUs in the cluster are in the
+ CPU_DOWN state.
+
+ When all CPUs are in the CPU_DOWN state, the cluster can be torn
+ down, for example by cleaning data caches and exiting
+ cluster-level coherency.
+
+ To avoid unnecessary teardown operations, the outbound CPU should
+ check the inbound cluster state for asynchronous transitions to
+ INBOUND_COMING_UP. Alternatively, individual CPUs can be checked
+ for entry into CPU_COMING_UP or CPU_UP.
+
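+	A sketch of this wait, simplified from
+	__mcpm_outbound_enter_critical() in mcpm_entry.c later in this
+	series (read_cpu_state() is an illustrative helper, and "abort"
+	backs out by restoring the cluster to CLUSTER_UP):
+
+		for (i = 0; i < MAX_CPUS_PER_CLUSTER; i++) {
+			if (i == self)
+				continue;
+			while (read_cpu_state(i) == CPU_GOING_DOWN)
+				wfe();	/* wait out local CPU teardown */
+			if (read_cpu_state(i) != CPU_DOWN)
+				goto abort;	/* a CPU woke up again */
+		}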
+
+ Next states:
+
+ CLUSTER_DOWN/INBOUND_NOT_COMING_UP (outbound)
+ Conditions: cluster torn down and ready to power off
+ Trigger events: (spontaneous)
+
+ CLUSTER_GOING_DOWN/INBOUND_COMING_UP (inbound)
+ Conditions: none
+ Trigger events:
+
+ a) an explicit hardware power-up operation,
+ resulting from a policy decision on another
+ CPU;
+
+ b) a hardware event, such as an interrupt.
+
+
+CLUSTER_GOING_DOWN/INBOUND_COMING_UP:
+
+ The cluster is (or was) being torn down, but another CPU has
+ come online in the meantime and is trying to set up the cluster
+ again.
+
+ If the outbound CPU observes this state, it has two choices:
+
+ a) back out of teardown, restoring the cluster to the
+ CLUSTER_UP state;
+
+ b) finish tearing the cluster down and put the cluster
+ in the CLUSTER_DOWN state; the inbound CPU will
+ set up the cluster again from there.
+
+ Choice (a) saves some latency by avoiding unnecessary teardown and
+ setup operations in situations where the cluster is not really
+ going to be powered down.
+
+
+ Next states:
+
+ CLUSTER_UP/INBOUND_COMING_UP (outbound)
+ Conditions: cluster-level setup and hardware
+ coherency complete
+ Trigger events: (spontaneous)
+
+ CLUSTER_DOWN/INBOUND_COMING_UP (outbound)
+ Conditions: cluster torn down and ready to power off
+ Trigger events: (spontaneous)
+
+
+Last man and first man selection
+--------------------------------
+
+The CPU which performs cluster tear-down operations on the outbound side
+is commonly referred to as the "last man".
+
+The CPU which performs cluster setup on the inbound side is commonly
+referred to as the "first man".
+
+The race avoidance algorithm documented above does not provide a
+mechanism to choose which CPUs should play these roles.
+
+
+Last man:
+
+When shutting down the cluster, all the CPUs involved are initially
+executing Linux and hence coherent. Therefore, ordinary spinlocks can
+be used to select a last man safely, before the CPUs become
+non-coherent.
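+
+For example, a minimal last-man election could look like this (a sketch
+only; the real selection policy is up to the platform code):
+
+	static DEFINE_SPINLOCK(cluster_lock);
+	static int cpus_up;	/* CPUs of this cluster still running */
+
+	static bool i_am_last_man(void)
+	{
+		bool last;
+
+		spin_lock(&cluster_lock);
+		last = (--cpus_up == 0);
+		spin_unlock(&cluster_lock);
+		return last;
+	}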
+
+
+First man:
+
+Because CPUs may power up asynchronously in response to external wake-up
+events, a dynamic mechanism is needed to make sure that only one CPU
+attempts to play the first man role and do the cluster-level
+initialisation: any other CPUs must wait for this to complete before
+proceeding.
+
+Cluster-level initialisation may involve actions such as configuring
+coherency controls in the bus fabric.
+
+The current implementation in mcpm_head.S uses a separate mutual exclusion
+mechanism to do this arbitration. This mechanism is documented in
+detail in vlocks.txt.
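+
+In outline, the first-man arbitration behaves like this (a C sketch of
+what mcpm_head.S does in assembler; the helper names here are
+illustrative):
+
+	if (vlock_trylock(&first_man_lock[cluster], this_cpu)) {
+		if (!cluster_is_up(cluster))
+			setup_cluster(cluster);	/* we are the first man */
+		vlock_unlock(&first_man_lock[cluster]);
+	} else {
+		while (!cluster_is_up(cluster))
+			wfe();		/* wait for the first man */
+	}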
+
+
+Features and Limitations
+------------------------
+
+Implementation:
+
+ The current ARM-based implementation is split between
+ arch/arm/common/mcpm_head.S (low-level inbound CPU operations) and
+ arch/arm/common/mcpm_entry.c (everything else):
+
+ __mcpm_cpu_going_down() signals the transition of a CPU to the
+ CPU_GOING_DOWN state.
+
+ __mcpm_cpu_down() signals the transition of a CPU to the CPU_DOWN
+ state.
+
+ A CPU transitions to CPU_COMING_UP and then to CPU_UP via the
+ low-level power-up code in mcpm_head.S. This could
+ involve CPU-specific setup code, but in the current
+ implementation it does not.
+
+ __mcpm_outbound_enter_critical() and __mcpm_outbound_leave_critical()
+ handle transitions from CLUSTER_UP to CLUSTER_GOING_DOWN
+ and from there to CLUSTER_DOWN or back to CLUSTER_UP (in
+ the case of an aborted cluster power-down).
+
+ These functions are more complex than the __mcpm_cpu_*()
+ functions due to the extra inter-CPU coordination which
+ is needed for safe transitions at the cluster level.
+
+ A cluster transitions from CLUSTER_DOWN back to CLUSTER_UP via
+ the low-level power-up code in mcpm_head.S. This
+ typically involves platform-specific setup code,
+ provided by the platform-specific power_up_setup
+ function registered via mcpm_sync_init.
+
+Deep topologies:
+
+ As currently described and implemented, the algorithm does not
+ support CPU topologies involving more than two levels (i.e.,
+ clusters of clusters are not supported). The algorithm could be
+ extended by replicating the cluster-level states for the
+ additional topological levels, and modifying the transition
+ rules for the intermediate (non-outermost) cluster levels.
+
+
+Colophon
+--------
+
+Originally created and documented by Dave Martin for Linaro Limited, in
+collaboration with Nicolas Pitre and Achin Gupta.
+
+Copyright (C) 2012-2013 Linaro Limited
+Distributed under the terms of Version 2 of the GNU General Public
+License, as defined in linux/COPYING.
--- /dev/null
+vlocks for Bare-Metal Mutual Exclusion
+======================================
+
+Voting Locks, or "vlocks", provide a simple low-level mutual exclusion
+mechanism, with reasonable but minimal requirements on the memory
+system.
+
+These are intended to be used to coordinate critical activity among CPUs
+which are otherwise non-coherent, in situations where the hardware
+provides no other mechanism to support this and ordinary spinlocks
+cannot be used.
+
+
+vlocks make use of the atomicity provided by the memory system for
+writes to a single memory location. To arbitrate, every CPU "votes for
+itself", by storing a unique number to a common memory location. The
+final value seen in that memory location when all the votes have been
+cast identifies the winner.
+
+In order to make sure that the election produces an unambiguous result
+in finite time, a CPU will only enter the election in the first place if
+no winner has been chosen and the election does not appear to have
+started yet.
+
+
+Algorithm
+---------
+
+The easiest way to explain the vlocks algorithm is with some pseudo-code:
+
+
+ int currently_voting[NR_CPUS] = { 0, };
+ int last_vote = -1; /* no votes yet */
+
+ bool vlock_trylock(int this_cpu)
+ {
+ /* signal our desire to vote */
+ currently_voting[this_cpu] = 1;
+ if (last_vote != -1) {
+ /* someone already volunteered himself */
+ currently_voting[this_cpu] = 0;
+ return false; /* not ourself */
+ }
+
+ /* let's suggest ourself */
+ last_vote = this_cpu;
+ currently_voting[this_cpu] = 0;
+
+ /* then wait until everyone else is done voting */
+ for_each_cpu(i) {
+ while (currently_voting[i] != 0)
+ /* wait */;
+ }
+
+ /* result */
+ if (last_vote == this_cpu)
+ return true; /* we won */
+ return false;
+ }
+
+ void vlock_unlock(void)
+ {
+ last_vote = -1;
+ }
+
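+A caller might use this as follows (a sketch; do_first_man_work() is a
+placeholder for whatever one-time work the winner must do, and losers
+would typically wait for that work to complete):
+
+	if (vlock_trylock(this_cpu)) {
+		do_first_man_work();	/* we won the election */
+		vlock_unlock();
+	}
+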
+
+The currently_voting[] array provides a way for the CPUs to determine
+whether an election is in progress, and plays a role analogous to the
+"entering" array in Lamport's bakery algorithm [1].
+
+However, once the election has started, the underlying memory system
+atomicity is used to pick the winner. This avoids the need for a static
+priority rule to act as a tie-breaker, or any counters which could
+overflow.
+
+As long as the last_vote variable is globally visible to all CPUs, it
+will settle on a single value that no longer changes once every CPU has
+cleared its currently_voting flag.
+
+
+Features and limitations
+------------------------
+
+ * vlocks are not intended to be fair. In the contended case, the
+ _last_ CPU which attempts to take the lock is the most likely to
+ win.
+
+ vlocks are therefore best suited to situations where it is necessary
+ to pick a unique winner, but it does not matter which CPU actually
+ wins.
+
+ * Like other similar mechanisms, vlocks will not scale well to a large
+ number of CPUs.
+
+ vlocks can be cascaded in a voting hierarchy to permit better scaling
+ if necessary, as in the following hypothetical example for 4096 CPUs:
+
+      /* first level: election among the 16 CPUs of each town */
+      my_town = towns[(this_cpu >> 4) & 0xff];        /* 256 towns */
+      I_won = vlock_trylock(my_town, this_cpu & 0xf);
+      if (I_won) {
+          /* we won the town election, let's go for the state */
+          my_state = states[(this_cpu >> 8) & 0xf];   /* 16 states */
+          /* vote with the town index: unique among the town winners */
+          I_won = vlock_trylock(my_state, (this_cpu >> 4) & 0xf);
+          if (I_won) {
+              /* and so on, voting with the state index */
+              I_won = vlock_trylock(the_whole_country,
+                                    (this_cpu >> 8) & 0xf);
+              if (I_won) {
+                  /* ... */
+              }
+              vlock_unlock(the_whole_country);
+          }
+          vlock_unlock(my_state);
+      }
+      vlock_unlock(my_town);
+
+
+ARM implementation
+------------------
+
+The current ARM implementation [2] contains some optimisations beyond
+the basic algorithm:
+
+ * By packing the members of the currently_voting array close together,
+ we can read the whole array in one transaction (providing the number
+ of CPUs potentially contending the lock is small enough). This
+ reduces the number of round-trips required to external memory.
+
+ In the ARM implementation, this means that we can use a single load
+ and comparison:
+
+ LDR Rt, [Rn]
+ CMP Rt, #0
+
+ ...in place of code equivalent to:
+
+ LDRB Rt, [Rn]
+ CMP Rt, #0
+ LDRBEQ Rt, [Rn, #1]
+ CMPEQ Rt, #0
+ LDRBEQ Rt, [Rn, #2]
+ CMPEQ Rt, #0
+ LDRBEQ Rt, [Rn, #3]
+ CMPEQ Rt, #0
+
+ This cuts down on the fast-path latency, as well as potentially
+ reducing bus contention in contended cases.
+
+ The optimisation relies on the fact that the ARM memory system
+ guarantees coherency between overlapping memory accesses of
+ different sizes, similarly to many other architectures. Note that
+ we do not care which element of currently_voting appears in which
+ bits of Rt, so there is no need to worry about endianness in this
+ optimisation.
+
+ If there are too many CPUs to read the currently_voting array in
+ one transaction, then multiple transactions are still required. The
+ implementation uses a simple loop of word-sized loads for this
+ case. The number of transactions is still fewer than would be
+ required if bytes were loaded individually.
+
+
+ In principle, we could aggregate further by using LDRD or LDM, but
+ to keep the code simple this was not attempted in the initial
+ implementation.
+
+
+ * vlocks are currently only used to coordinate between CPUs which are
+ unable to enable their caches yet. This means that the
+ implementation removes many of the barriers which would be required
+ when executing the algorithm in cached memory.
+
+ Packing of the currently_voting array does not work with cached
+ memory unless all CPUs contending the lock are cache-coherent, due
+ to cache writebacks from one CPU clobbering values written by other
+ CPUs. (Though if all the CPUs are cache-coherent, you should
+ probably be using proper spinlocks instead anyway.)
+
+
+ * The "no votes yet" value used for the last_vote variable is 0 (not
+ -1 as in the pseudocode). This allows statically-allocated vlocks
+ to be implicitly initialised to an unlocked state simply by putting
+ them in .bss.
+
+ An offset is added to each CPU's ID for the purpose of setting this
+ variable, so that no CPU uses the value 0 for its ID.
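+
+	In C terms, the idea is roughly this (names illustrative):
+
+		#define VLOCK_OWNER_NONE	0
+
+		last_vote = this_cpu + 1;  /* CPU 0 votes as 1, and so on */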
+
+
+Colophon
+--------
+
+Originally created and documented by Dave Martin for Linaro Limited, for
+use in ARM-based big.LITTLE platforms, with review and input gratefully
+received from Nicolas Pitre and Achin Gupta. Thanks to Nicolas for
+grabbing most of this text out of the relevant mail thread and writing
+up the pseudocode.
+
+Copyright (C) 2012-2013 Linaro Limited
+Distributed under the terms of Version 2 of the GNU General Public
+License, as defined in linux/COPYING.
+
+
+References
+----------
+
+[1] Lamport, L. "A New Solution of Dijkstra's Concurrent Programming
+ Problem", Communications of the ACM 17, 8 (August 1974), 453-455.
+
+ http://en.wikipedia.org/wiki/Lamport%27s_bakery_algorithm
+
+[2] linux/arch/arm/common/vlock.S, www.kernel.org.
Documentation:
http://www.diolan.com/i2c/u2c12.html
-Author: Guenter Roeck <guenter.roeck@ericsson.com>
+Author: Guenter Roeck <linux@roeck-us.net>
Description
-----------
is selected automatically. Check
Documentation/kdump/kdump.txt for further details.
- crashkernel_low=size[KMG]
- [KNL, x86] parts under 4G.
-
crashkernel=range1:size1[,range2:size2,...][@offset]
[KNL] Same as above, but depends on the memory
in the running system. The syntax of range is
a memory unit (amount[KMG]). See also
Documentation/kdump/kdump.txt for an example.
+ crashkernel=size[KMG],high
+ [KNL, x86_64] The range may be above 4G. This allows
+ the kernel to allocate the physical memory region from
+ the top, so it may lie above 4G if the system has more
+ than 4G of RAM installed; otherwise the region is
+ allocated below 4G, if available.
+ It will be ignored if crashkernel=X is specified.
+ crashkernel=size[KMG],low
+ [KNL, x86_64] The range is under 4G. When
+ crashkernel=X,high is passed, the kernel may allocate
+ the physical memory region above 4G, which can make the
+ second kernel crash on systems that require some amount
+ of low memory, e.g. swiotlb requires at least 64M+32K
+ of low memory. The kernel tries to allocate 72M below
+ 4G automatically; this option lets the user specify a
+ low range under 4G for the second kernel instead.
+ 0: disables the low allocation.
+ It will be ignored when crashkernel=X,high is not used
+ or when the memory reserved is below 4G.
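+ For example, "crashkernel=512M,high crashkernel=128M,low"
+ would reserve 512M above 4G and 128M below 4G for the
+ crash kernel (the sizes here are illustrative only).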
+
cs89x0_dma= [HW,NET]
Format: <dma>
edd= [EDD]
Format: {"off" | "on" | "skip[mbr]"}
+ efi_no_storage_paranoia [EFI; X86]
+ Using this parameter you can use more than 50% of
+ your EFI variable storage. Use this parameter only if
+ you are really sure that your UEFI performs sane
+ garbage collection and fulfills the spec; otherwise
+ your board may be bricked.
+
eisa_irq_edge= [PARISC,HW]
See header of drivers/parisc/eisa.c.
enabled and the variable is automatically set to 2, otherwise
the strategy is disabled and the variable is set to 1.
+backup_only - BOOLEAN
+ 0 - disabled (default)
+ not 0 - enabled
+
+ If set, disable the director function while the server is
+ in backup mode to avoid packet loops for DR/TUN methods.
+
conntrack - BOOLEAN
0 - disabled (default)
not 0 - enabled
-Copyright (c) 2003-2012 QLogic Corporation
+Copyright (c) 2003-2013 QLogic Corporation
QLogic Linux FC-FCoE Driver
This program includes a device driver for Linux 3.x.
enable_msi - Enable Message Signaled Interrupt (MSI) (default = off)
power_save - Automatic power-saving timeout (in second, 0 =
disable)
- power_save_controller - Support runtime D3 of HD-audio controller
- (-1 = on for supported chip (default), false = off,
- true = force to on even for unsupported hardware)
+ power_save_controller - Reset HD-audio controller in power-saving mode
+ (default = on)
align_buffer_size - Force rounding of buffer/period sizes to multiples
of 128 bytes. This is more efficient in terms of memory
access but isn't required by the HDA spec and prevents
models depending on the codec chip. The list of available models
is found in HD-Audio-Models.txt
- The model name "genric" is treated as a special case. When this
+ The model name "generic" is treated as a special case. When this
model is given, the driver uses the generic codec parser without
"codec-patch". It's sometimes good for testing and debugging.
<H4>
7.2.4 Close Callback</H4>
The <TT>close</TT> callback is called when this device is closed by the
-applicaion. If any private data was allocated in open callback, it must
+application. If any private data was allocated in open callback, it must
be released in the close callback. The deletion of ALSA port should be
done here, too. This callback must not be NULL.
<H4>
F: drivers/dma/at_hdmac_regs.h
F: include/linux/platform_data/dma-atmel.h
+ATMEL I2C DRIVER
+M: Ludovic Desroches <ludovic.desroches@atmel.com>
+L: linux-i2c@vger.kernel.org
+S: Supported
+F: drivers/i2c/busses/i2c-at91.c
+
ATMEL ISI DRIVER
M: Josh Wu <josh.wu@atmel.com>
L: linux-media@vger.kernel.org
F: drivers/base/firmware*.c
F: include/linux/firmware.h
+FLASHSYSTEM DRIVER (IBM FlashSystem 70/80 PCI SSD Flash Card)
+M: Joshua Morris <josh.h.morris@us.ibm.com>
+M: Philip Kelleher <pjk1939@linux.vnet.ibm.com>
+S: Maintained
+F: drivers/block/rsxx/
+
FLOPPY DRIVER
M: Jiri Kosina <jkosina@suse.cz>
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jikos/floppy.git
S: Maintained
F: fs/logfs/
+LPC32XX MACHINE SUPPORT
+M: Roland Stigge <stigge@antcom.de>
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+S: Maintained
+F: arch/arm/mach-lpc32xx/
+
LSILOGIC MPT FUSION DRIVERS (FC/SAS/SPI)
M: Nagalakshmi Nandigama <Nagalakshmi.Nandigama@lsi.com>
M: Sreekanth Reddy <Sreekanth.Reddy@lsi.com>
F: drivers/net/ethernet/marvell/sk*
MARVELL LIBERTAS WIRELESS DRIVER
-M: Dan Williams <dcbw@redhat.com>
L: libertas-dev@lists.infradead.org
-S: Maintained
+S: Orphan
F: drivers/net/wireless/libertas/
MARVELL MV643XX ETHERNET DRIVER
F: include/uapi/linux/netdevice.h
NETXEN (1/10) GbE SUPPORT
+M: Manish Chopra <manish.chopra@qlogic.com>
M: Sony Chacko <sony.chacko@qlogic.com>
M: Rajesh Borundia <rajesh.borundia@qlogic.com>
L: netdev@vger.kernel.org
F: drivers/video/riva/
F: drivers/video/nvidia/
+NVM EXPRESS DRIVER
+M: Matthew Wilcox <willy@linux.intel.com>
+L: linux-nvme@lists.infradead.org
+T: git git://git.infradead.org/users/willy/linux-nvme.git
+S: Supported
+F: drivers/block/nvme.c
+F: include/linux/nvme.h
+
OMAP SUPPORT
M: Tony Lindgren <tony@atomide.com>
L: linux-omap@vger.kernel.org
F: arch/arm/*omap*/*clock*
OMAP POWER MANAGEMENT SUPPORT
-M: Kevin Hilman <khilman@ti.com>
+M: Kevin Hilman <khilman@deeprootsystems.com>
L: linux-omap@vger.kernel.org
S: Maintained
F: arch/arm/*omap*/*pm*
OMAP GPIO DRIVER
M: Santosh Shilimkar <santosh.shilimkar@ti.com>
-M: Kevin Hilman <khilman@ti.com>
+M: Kevin Hilman <khilman@deeprootsystems.com>
L: linux-omap@vger.kernel.org
S: Maintained
F: drivers/gpio/gpio-omap.c
F: drivers/power/
PNP SUPPORT
-M: Adam Belay <abelay@mit.edu>
+M: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
M: Bjorn Helgaas <bhelgaas@google.com>
S: Maintained
F: drivers/pnp/
F: Documentation/blockdev/ramdisk.txt
F: drivers/block/brd.c
-RAMSAM DRIVER (IBM RamSan 70/80 PCI SSD Flash Card)
-M: Joshua Morris <josh.h.morris@us.ibm.com>
-M: Philip Kelleher <pjk1939@linux.vnet.ibm.com>
-S: Maintained
-F: drivers/block/rsxx/
-
RANDOM NUMBER DRIVER
M: Theodore Ts'o <tytso@mit.edu>
S: Maintained
F: fs/reiserfs/
REGISTER MAP ABSTRACTION
-M: Mark Brown <broonie@opensource.wolfsonmicro.com>
+M: Mark Brown <broonie@kernel.org>
T: git git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap.git
S: Supported
F: drivers/base/regmap/
SCTP PROTOCOL
M: Vlad Yasevich <vyasevich@gmail.com>
-M: Sridhar Samudrala <sri@us.ibm.com>
M: Neil Horman <nhorman@tuxdriver.com>
L: linux-sctp@vger.kernel.org
W: http://lksctp.sourceforge.net
TI DAVINCI MACHINE SUPPORT
M: Sekhar Nori <nsekhar@ti.com>
-M: Kevin Hilman <khilman@ti.com>
+M: Kevin Hilman <khilman@deeprootsystems.com>
L: davinci-linux-open-source@linux.davincidsp.com (moderated for non-subscribers)
T: git git://gitorious.org/linux-davinci/linux-davinci.git
Q: http://patchwork.kernel.org/project/linux-davinci/list/
SOUND - SOC LAYER / DYNAMIC AUDIO POWER MANAGEMENT (ASoC)
M: Liam Girdwood <lgirdwood@gmail.com>
-M: Mark Brown <broonie@opensource.wolfsonmicro.com>
+M: Mark Brown <broonie@kernel.org>
T: git git://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git
L: alsa-devel@alsa-project.org (moderated for non-subscribers)
W: http://alsa-project.org/main/index.php/ASoC
SPI SUBSYSTEM
M: Grant Likely <grant.likely@secretlab.ca>
-M: Mark Brown <broonie@opensource.wolfsonmicro.com>
+M: Mark Brown <broonie@kernel.org>
L: spi-devel-general@lists.sourceforge.net
Q: http://patchwork.kernel.org/project/spi-devel-general/list/
T: git git://git.secretlab.ca/git/linux-2.6.git
SYNOPSYS ARC ARCHITECTURE
M: Vineet Gupta <vgupta@synopsys.com>
-L: linux-snps-arc@vger.kernel.org
S: Supported
F: arch/arc/
+F: Documentation/devicetree/bindings/arc/
+F: drivers/tty/serial/arc-uart.c
SYSV FILESYSTEM
M: Christoph Hellwig <hch@infradead.org>
VOLTAGE AND CURRENT REGULATOR FRAMEWORK
M: Liam Girdwood <lrg@ti.com>
-M: Mark Brown <broonie@opensource.wolfsonmicro.com>
+M: Mark Brown <broonie@kernel.org>
W: http://opensource.wolfsonmicro.com/node/15
W: http://www.slimlogic.co.uk/?p=48
T: git git://git.kernel.org/pub/scm/linux/kernel/git/lrg/regulator.git
VERSION = 3
PATCHLEVEL = 9
SUBLEVEL = 0
-EXTRAVERSION = -rc3
+EXTRAVERSION = -rc8
NAME = Unicycling Gorilla
# *DOCUMENTATION*
# Carefully list dependencies so we do not try to build scripts twice
# in parallel
PHONY += scripts
-scripts: scripts_basic include/config/auto.conf include/config/tristate.conf
+scripts: scripts_basic include/config/auto.conf include/config/tristate.conf \
+ asm-generic
$(Q)$(MAKE) $(build)=$(@)
# Objects we will link into vmlinux / subdirs we need to visit
LDFLAGS_vmlinux := -static -N #-relax
CHECKFLAGS += -D__alpha__ -m64
-cflags-y := -pipe -mno-fp-regs -ffixed-8 -msmall-data
+cflags-y := -pipe -mno-fp-regs -ffixed-8
cflags-y += $(call cc-option, -fno-jump-tables)
cpuflags-$(CONFIG_ALPHA_EV4) := -mcpu=ev4
#define fd_disable_irq() disable_irq(FLOPPY_IRQ)
#define fd_cacheflush(addr,size) /* nothing */
#define fd_request_irq() request_irq(FLOPPY_IRQ, floppy_interrupt,\
- IRQF_DISABLED, "floppy", NULL)
+ 0, "floppy", NULL)
#define fd_free_irq() free_irq(FLOPPY_IRQ, NULL)
#ifdef CONFIG_PCI
return;
}
- /*
- * From here we must proceed with IPL_MAX. Note that we do not
- * explicitly enable interrupts afterwards - some MILO PALcode
- * (namely LX164 one) seems to have severe problems with RTI
- * at IPL 0.
- */
- local_irq_disable();
irq_enter();
generic_handle_irq_desc(irq, desc);
irq_exit();
unsigned long la_ptr, struct pt_regs *regs)
{
struct pt_regs *old_regs;
+
+ /*
+ * Disable interrupts during IRQ handling.
+ * Note that there is no matching local_irq_enable() due to
+ * severe problems with RTI at IPL0 and some MILO PALcode
+ * (namely LX164).
+ */
+ local_irq_disable();
switch (type) {
case 0:
#ifdef CONFIG_SMP
{
long cpu;
- local_irq_disable();
smp_percpu_timer_interrupt(regs);
cpu = smp_processor_id();
if (cpu != boot_cpuid) {
struct irqaction timer_irqaction = {
.handler = timer_interrupt,
- .flags = IRQF_DISABLED,
.name = "timer",
};
extern void free_reserved_mem(void *, void *);
extern void pcibios_claim_one_bus(struct pci_bus *);
+static struct resource irongate_io = {
+ .name = "Irongate PCI IO",
+ .flags = IORESOURCE_IO,
+};
static struct resource irongate_mem = {
.name = "Irongate PCI MEM",
.flags = IORESOURCE_MEM,
irongate = pci_get_bus_and_slot(0, 0);
bus->self = irongate;
+ bus->resource[0] = &irongate_io;
bus->resource[1] = &irongate_mem;
pci_bus_size_bridges(bus);
* all reported to the kernel as machine checks, so the handler
* is a nop so it can be called to count the individual events.
*/
- titan_request_irq(63+16, titan_intr_nop, IRQF_DISABLED,
+ titan_request_irq(63+16, titan_intr_nop, 0,
"CChip Error", NULL);
- titan_request_irq(62+16, titan_intr_nop, IRQF_DISABLED,
+ titan_request_irq(62+16, titan_intr_nop, 0,
"PChip 0 H_Error", NULL);
- titan_request_irq(61+16, titan_intr_nop, IRQF_DISABLED,
+ titan_request_irq(61+16, titan_intr_nop, 0,
"PChip 1 H_Error", NULL);
- titan_request_irq(60+16, titan_intr_nop, IRQF_DISABLED,
+ titan_request_irq(60+16, titan_intr_nop, 0,
"PChip 0 C_Error", NULL);
- titan_request_irq(59+16, titan_intr_nop, IRQF_DISABLED,
+ titan_request_irq(59+16, titan_intr_nop, 0,
"PChip 1 C_Error", NULL);
/*
* Hook a couple of extra err interrupts that the
* common titan code won't.
*/
- titan_request_irq(53+16, titan_intr_nop, IRQF_DISABLED,
+ titan_request_irq(53+16, titan_intr_nop, 0,
"NMI", NULL);
- titan_request_irq(50+16, titan_intr_nop, IRQF_DISABLED,
+ titan_request_irq(50+16, titan_intr_nop, 0,
"Temperature Warning", NULL);
/*
int i;
for_each_sg(sg, s, nents, i)
- sg->dma_address = dma_map_page(dev, sg_page(s), s->offset,
+ s->dma_address = dma_map_page(dev, sg_page(s), s->offset,
s->length, dir);
return nents;
*/
#define ELF_PLATFORM (NULL)
-#define SET_PERSONALITY(ex) \
- set_personality(PER_LINUX | (current->personality & (~PER_MASK)))
-
#endif
*-------------------------------------------------------------*/
.macro SAVE_ALL_EXCEPTION marker
- st \marker, [sp, 8]
+ st \marker, [sp, 8] /* orig_r8 */
st r0, [sp, 4] /* orig_r0, needed only for sys calls */
/* Restore r9 used to code the early prologue */
" flag.nz %0 \n"
: "=r"(temp), "=r"(flags)
: "n"((STATUS_E1_MASK | STATUS_E2_MASK))
- : "cc");
+ : "memory", "cc");
return flags;
}
__asm__ __volatile__(
" flag %0 \n"
:
- : "r"(flags));
+ : "r"(flags)
+ : "memory");
}
/*
" and %0, %0, %1 \n"
" flag %0 \n"
: "=&r"(temp)
- : "n"(~(STATUS_E1_MASK | STATUS_E2_MASK)));
+ : "n"(~(STATUS_E1_MASK | STATUS_E2_MASK))
+ : "memory");
}
/*
__asm__ __volatile__(
" lr %0, [status32] \n"
- : "=&r"(temp));
+ : "=&r"(temp)
+ :
+ : "memory");
return temp;
}
#ifdef CONFIG_KGDB
-#include <asm/user.h>
+#include <asm/ptrace.h>
/* to ensure compatibility with Linux 2.6.35, we don't implement the get/set
* register API yet */
};
#else
-static inline void kgdb_trap(struct pt_regs *regs, int param)
-{
-}
+#define kgdb_trap(regs, param)
#endif
#endif /* __ARC_KGDB_H__ */
#define orig_r8_IS_SCALL 0x0001
#define orig_r8_IS_SCALL_RESTARTED 0x0002
#define orig_r8_IS_BRKPT 0x0004
-#define orig_r8_IS_EXCPN 0x0004
+#define orig_r8_IS_EXCPN 0x0008
#define orig_r8_IS_IRQ1 0x0010
#define orig_r8_IS_IRQ2 0x0020
#include <linux/types.h>
int sys_clone_wrapper(int, int, int, int, int);
-int sys_fork_wrapper(void);
-int sys_vfork_wrapper(void);
int sys_cacheflush(uint32_t, uint32_t, uint32_t);
int sys_arc_settls(void *);
int sys_arc_gettls(void);
*/
struct user_regs_struct {
- struct scratch {
+ struct {
long pad;
long bta, lp_start, lp_end, lp_count;
long status32, ret, blink, fp, gp;
long r12, r11, r10, r9, r8, r7, r6, r5, r4, r3, r2, r1, r0;
long sp;
} scratch;
- struct callee {
+ struct {
long pad;
long r25, r24, r23, r22, r21, r20;
long r19, r18, r17, r16, r15, r14, r13;
; using ERET won't work since next-PC has already committed
lr r12, [efa]
GET_CURR_TASK_FIELD_PTR TASK_THREAD, r11
- st r12, [r11, THREAD_FAULT_ADDR]
+ st r12, [r11, THREAD_FAULT_ADDR] ; thread.fault_address
; PRE Sys Call Ptrace hook
mov r0, sp ; pt_regs needed
;################### Special Sys Call Wrappers ##########################
-; TBD: call do_fork directly from here
-ARC_ENTRY sys_fork_wrapper
- SAVE_CALLEE_SAVED_USER
- bl @sys_fork
- DISCARD_CALLEE_SAVED_USER
-
- GET_CURR_THR_INFO_FLAGS r10
- btst r10, TIF_SYSCALL_TRACE
- bnz tracesys_exit
-
- b ret_from_system_call
-ARC_EXIT sys_fork_wrapper
-
-ARC_ENTRY sys_vfork_wrapper
- SAVE_CALLEE_SAVED_USER
- bl @sys_vfork
- DISCARD_CALLEE_SAVED_USER
-
- GET_CURR_THR_INFO_FLAGS r10
- btst r10, TIF_SYSCALL_TRACE
- bnz tracesys_exit
-
- b ret_from_system_call
-ARC_EXIT sys_vfork_wrapper
-
ARC_ENTRY sys_clone_wrapper
SAVE_CALLEE_SAVED_USER
bl @sys_clone
*/
#include <linux/kgdb.h>
+#include <linux/sched.h>
#include <asm/disasm.h>
#include <asm/cacheflush.h>
n += scnprintf(buf + n, len - n, "\n");
-#ifdef _ASM_GENERIC_UNISTD_H
n += scnprintf(buf + n, len - n,
- "OS ABI [v2]\t: asm-generic/{unistd,stat,fcntl}\n");
-#endif
+ "OS ABI [v3]\t: no-legacy-syscalls\n");
return buf;
}
#include <asm/syscalls.h>
#define sys_clone sys_clone_wrapper
-#define sys_fork sys_fork_wrapper
-#define sys_vfork sys_vfork_wrapper
#undef __SYSCALL
#define __SYSCALL(nr, call) [nr] = (call),
select CLONE_BACKWARDS
select OLD_SIGSUSPEND3
select OLD_SIGACTION
+ select HAVE_CONTEXT_TRACKING
help
The ARM series is a line of low-power-consumption RISC chip designs
licensed by ARM Ltd and targeted at embedded applications and
default 8
config IWMMXT
- bool "Enable iWMMXt support"
+ bool "Enable iWMMXt support" if !CPU_PJ4
depends on CPU_XSCALE || CPU_XSC3 || CPU_MOHAWK || CPU_PJ4
- default y if PXA27x || PXA3xx || ARCH_MMP
+ default y if PXA27x || PXA3xx || ARCH_MMP || CPU_PJ4
help
Enable support for iWMMXt context switching at run time if
running on a CPU that supports it.
to deadlock. This workaround puts DSB before executing ISB if
an abort may occur on cache maintenance.
+config ARM_ERRATA_798181
+ bool "ARM errata: TLBI/DSB failure on Cortex-A15"
+ depends on CPU_V7 && SMP
+ help
+ On Cortex-A15 (r0p0..r3p2) the TLBI*IS/DSB operations are not
+ adequately shooting down all use of the old entries. This
+ option enables the Linux kernel workaround for this erratum
+ which sends an IPI to the CPUs that are running the same ASID
+ as the one being invalidated.
+
endmenu
source "arch/arm/common/Kconfig"
help
This options enables support for the ARM timer and watchdog unit
+config MCPM
+ bool "Multi-Cluster Power Management"
+ depends on CPU_V7 && SMP
+ help
+ This option provides the common power management infrastructure
+ for (multi-)cluster based systems, such as big.LITTLE based
+ systems.
+
choice
prompt "Memory split"
default VMSPLIT_3G
def_bool HIGH_RES_TIMERS
config THUMB2_KERNEL
- bool "Compile the kernel in Thumb-2 mode"
+ bool "Compile the kernel in Thumb-2 mode" if !CPU_THUMBONLY
depends on CPU_V7 && !CPU_V6 && !CPU_V6K
+ default y if CPU_THUMBONLY
select AEABI
select ARM_ASM_UNIFIED
select ARM_UNWIND
DEBUG_IMX53_UART || \
DEBUG_IMX6Q_UART
default 1
+ depends on ARCH_MXC
help
Choose UART port on which kernel low-level debug messages
should be output.
default "debug/zynq.S" if DEBUG_ZYNQ_UART0 || DEBUG_ZYNQ_UART1
default "mach/debug-macro.S"
+config DEBUG_UNCOMPRESS
+ bool
+ default y if ARCH_MULTIPLATFORM && DEBUG_LL && \
+ !DEBUG_OMAP2PLUS_UART && \
+ !DEBUG_TEGRA_UART
+
+config UNCOMPRESS_INCLUDE
+ string
+ default "debug/uncompress.h" if ARCH_MULTIPLATFORM
+ default "mach/uncompress.h"
+
config EARLY_PRINTK
bool "Early printk"
depends on DEBUG_LL
AFLAGS_head.o += -DTEXT_OFFSET=$(TEXT_OFFSET)
HEAD = head.o
OBJS += misc.o decompress.o
+ifeq ($(CONFIG_DEBUG_UNCOMPRESS),y)
+OBJS += debug.o
+endif
FONTC = $(srctree)/drivers/video/console/font_acorn_8x8.c
# string library code (-Os is enforced to keep it much smaller)
--- /dev/null
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+
+#include CONFIG_DEBUG_LL_INCLUDE
+
+ENTRY(putc)
+ addruart r1, r2, r3
+ waituart r3, r1
+ senduart r0, r1
+ busyuart r3, r1
+ mov pc, lr
+ENDPROC(putc)
static void putstr(const char *ptr);
extern void error(char *x);
-#ifdef CONFIG_ARCH_MULTIPLATFORM
-static inline void putc(int c) {}
-static inline void flush(void) {}
-static inline void arch_decomp_setup(void) {}
-#else
-#include <mach/uncompress.h>
-#endif
+#include CONFIG_UNCOMPRESS_INCLUDE
#ifdef CONFIG_DEBUG_ICEDCC
};
mvsdio@d00d4000 {
- pinctrl-0 = <&sdio_pins2>;
+ pinctrl-0 = <&sdio_pins3>;
pinctrl-names = "default";
status = "okay";
/*
"mpp50", "mpp51", "mpp52";
marvell,function = "sd0";
};
+
+ sdio_pins3: sdio-pins3 {
+ marvell,pins = "mpp48", "mpp49", "mpp50",
+ "mpp51", "mpp52", "mpp53";
+ marvell,function = "sd0";
+ };
};
gpio0: gpio@d0018100 {
prcmu: prcmu@80157000 {
compatible = "stericsson,db8500-prcmu";
- reg = <0x80157000 0x1000>;
- reg-names = "prcmu";
+ reg = <0x80157000 0x1000>, <0x801b0000 0x8000>, <0x801b8000 0x1000>;
+ reg-names = "prcmu", "prcmu-tcpm", "prcmu-tcdm";
interrupts = <0 47 0x4>;
#address-cells = <1>;
#size-cells = <1>;
i2c0: i2c@80058000 {
pinctrl-names = "default";
pinctrl-0 = <&i2c0_pins_a>;
- clock-frequency = <400000>;
status = "okay";
sgtl5000: codec@0a {
i2c0: i2c@80058000 {
pinctrl-names = "default";
pinctrl-0 = <&i2c0_pins_a>;
- clock-frequency = <400000>;
status = "okay";
rtc: rtc@51 {
compatible = "arm,cortex-a9-twd-timer";
reg = <0x00a00600 0x20>;
interrupts = <1 13 0xf01>;
+ clocks = <&clks 15>;
};
L2: l2-cache@00a02000 {
};
nand@3000000 {
+ chip-delay = <40>;
status = "okay";
partition@0 {
marvell,function = "gpio";
};
pmx_led_rebuild_brt_ctrl_1: pmx-led-rebuild-brt-ctrl-1 {
- marvell,pins = "mpp44";
+ marvell,pins = "mpp46";
marvell,function = "gpio";
};
pmx_led_rebuild_brt_ctrl_2: pmx-led-rebuild-brt-ctrl-2 {
- marvell,pins = "mpp45";
+ marvell,pins = "mpp47";
marvell,function = "gpio";
};
gpios = <&gpio0 16 0>;
linux,default-trigger = "default-on";
};
- health_led1 {
+ rebuild_led {
+ label = "status:white:rebuild_led";
+ gpios = <&gpio1 4 0>;
+ };
+ health_led {
label = "status:red:health_led";
gpios = <&gpio1 5 0>;
};
- health_led2 {
- label = "status:white:health_led";
- gpios = <&gpio1 4 0>;
- };
backup_led {
label = "status:blue:backup_led";
gpios = <&gpio0 15 0>;
compatible = "marvell,orion5x";
interrupt-parent = <&intc>;
+ aliases {
+ gpio0 = &gpio0;
+ };
intc: interrupt-controller {
compatible = "marvell,orion-intc", "marvell,intc";
interrupt-controller;
#gpio-cells = <2>;
gpio-controller;
reg = <0x10100 0x40>;
- ngpio = <32>;
+ ngpios = <32>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
interrupts = <6>, <7>, <8>, <9>;
};
reg = <0x90000 0x10000>,
<0xf2200000 0x800>;
reg-names = "regs", "sram";
- interrupts = <22>;
+ interrupts = <28>;
status = "okay";
};
};
spi@7000d800 {
compatible = "nvidia,tegra20-slink";
- reg = <0x7000d480 0x200>;
+ reg = <0x7000d800 0x200>;
interrupts = <0 83 0x04>;
nvidia,dma-request-selector = <&apbdma 17>;
#address-cells = <1>;
spi@7000d800 {
compatible = "nvidia,tegra30-slink", "nvidia,tegra20-slink";
- reg = <0x7000d480 0x200>;
+ reg = <0x7000d800 0x200>;
interrupts = <0 83 0x04>;
nvidia,dma-request-selector = <&apbdma 17>;
#address-cells = <1>;
obj-$(CONFIG_SHARP_SCOOP) += scoop.o
obj-$(CONFIG_PCI_HOST_ITE8152) += it8152.o
obj-$(CONFIG_ARM_TIMER_SP804) += timer-sp.o
+obj-$(CONFIG_MCPM) += mcpm_head.o mcpm_entry.o mcpm_platsmp.o vlock.o
+AFLAGS_mcpm_head.o := -march=armv7-a
+AFLAGS_vlock.o := -march=armv7-a
--- /dev/null
+/*
+ * arch/arm/common/mcpm_entry.c -- entry point for multi-cluster PM
+ *
+ * Created by: Nicolas Pitre, March 2012
+ * Copyright: (C) 2012-2013 Linaro Limited
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/irqflags.h>
+
+#include <asm/mcpm.h>
+#include <asm/cacheflush.h>
+#include <asm/idmap.h>
+#include <asm/cputype.h>
+
+extern unsigned long mcpm_entry_vectors[MAX_NR_CLUSTERS][MAX_CPUS_PER_CLUSTER];
+
+void mcpm_set_entry_vector(unsigned cpu, unsigned cluster, void *ptr)
+{
+ unsigned long val = ptr ? virt_to_phys(ptr) : 0;
+ mcpm_entry_vectors[cluster][cpu] = val;
+ sync_cache_w(&mcpm_entry_vectors[cluster][cpu]);
+}
+
+static const struct mcpm_platform_ops *platform_ops;
+
+int __init mcpm_platform_register(const struct mcpm_platform_ops *ops)
+{
+ if (platform_ops)
+ return -EBUSY;
+ platform_ops = ops;
+ return 0;
+}
+
+int mcpm_cpu_power_up(unsigned int cpu, unsigned int cluster)
+{
+ if (!platform_ops)
+ return -EUNATCH; /* try not to shadow power_up errors */
+ might_sleep();
+ return platform_ops->power_up(cpu, cluster);
+}
+
+typedef void (*phys_reset_t)(unsigned long);
+
+void mcpm_cpu_power_down(void)
+{
+ phys_reset_t phys_reset;
+
+ BUG_ON(!platform_ops);
+ BUG_ON(!irqs_disabled());
+
+ /*
+ * Do this before calling into the power_down method,
+ * as it might not always be safe to do afterwards.
+ */
+ setup_mm_for_reboot();
+
+ platform_ops->power_down();
+
+ /*
+ * It is possible for a power_up request to happen concurrently
+ * with a power_down request for the same CPU. In this case the
+ * power_down method might not be able to actually enter a
+ * powered down state with the WFI instruction if the power_up
+ * method has removed the required reset condition. The
+ * power_down method is then allowed to return. We must perform
+ * a re-entry in the kernel as if the power_up method just had
+ * deasserted reset on the CPU.
+ *
+ * To simplify race issues, the platform specific implementation
+ * must accommodate for the possibility of unordered calls to
+ * power_down and power_up with a usage count. Therefore, if a
+ * call to power_up is issued for a CPU that is not down, then
+ * the next call to power_down must not attempt a full shutdown
+ * but only do the minimum (normally disabling L1 cache and CPU
+ * coherency) and return just as if a concurrent power_up request
+ * had happened as described above.
+ */
+
+ phys_reset = (phys_reset_t)(unsigned long)virt_to_phys(cpu_reset);
+ phys_reset(virt_to_phys(mcpm_entry_point));
+
+ /* should never get here */
+ BUG();
+}
+
+void mcpm_cpu_suspend(u64 expected_residency)
+{
+ phys_reset_t phys_reset;
+
+ BUG_ON(!platform_ops);
+ BUG_ON(!irqs_disabled());
+
+ /* Very similar to mcpm_cpu_power_down() */
+ setup_mm_for_reboot();
+ platform_ops->suspend(expected_residency);
+ phys_reset = (phys_reset_t)(unsigned long)virt_to_phys(cpu_reset);
+ phys_reset(virt_to_phys(mcpm_entry_point));
+ BUG();
+}
+
+int mcpm_cpu_powered_up(void)
+{
+ if (!platform_ops)
+ return -EUNATCH;
+ if (platform_ops->powered_up)
+ platform_ops->powered_up();
+ return 0;
+}
+
+struct sync_struct mcpm_sync;
+
+/*
+ * __mcpm_cpu_going_down: Indicates that the cpu is being torn down.
+ * This must be called at the point of committing to teardown of a CPU.
+ * The CPU cache (SCTLR.C bit) is expected to still be active.
+ */
+void __mcpm_cpu_going_down(unsigned int cpu, unsigned int cluster)
+{
+ mcpm_sync.clusters[cluster].cpus[cpu].cpu = CPU_GOING_DOWN;
+ sync_cache_w(&mcpm_sync.clusters[cluster].cpus[cpu].cpu);
+}
+
+/*
+ * __mcpm_cpu_down: Indicates that cpu teardown is complete and that the
+ * cluster can be torn down without disrupting this CPU.
+ * To avoid deadlocks, this must be called before a CPU is powered down.
+ * The CPU cache (SCTLR.C bit) is expected to be off.
+ * However L2 cache might or might not be active.
+ */
+void __mcpm_cpu_down(unsigned int cpu, unsigned int cluster)
+{
+ dmb();
+ mcpm_sync.clusters[cluster].cpus[cpu].cpu = CPU_DOWN;
+ sync_cache_w(&mcpm_sync.clusters[cluster].cpus[cpu].cpu);
+ dsb_sev();
+}
+
+/*
+ * __mcpm_outbound_leave_critical: Leave the cluster teardown critical section.
+ * @state: the final state of the cluster:
+ * CLUSTER_UP: no destructive teardown was done and the cluster has been
+ * restored to the previous state (CPU cache still active); or
+ * CLUSTER_DOWN: the cluster has been torn-down, ready for power-off
+ * (CPU cache disabled, L2 cache either enabled or disabled).
+ */
+void __mcpm_outbound_leave_critical(unsigned int cluster, int state)
+{
+ dmb();
+ mcpm_sync.clusters[cluster].cluster = state;
+ sync_cache_w(&mcpm_sync.clusters[cluster].cluster);
+ dsb_sev();
+}
+
+/*
+ * __mcpm_outbound_enter_critical: Enter the cluster teardown critical section.
+ * This function should be called by the last man, after local CPU teardown
+ * is complete. CPU cache expected to be active.
+ *
+ * Returns:
+ * false: the critical section was not entered because an inbound CPU was
+ * observed, or the cluster is already being set up;
+ * true: the critical section was entered: it is now safe to tear down the
+ * cluster.
+ */
+bool __mcpm_outbound_enter_critical(unsigned int cpu, unsigned int cluster)
+{
+ unsigned int i;
+ struct mcpm_sync_struct *c = &mcpm_sync.clusters[cluster];
+
+ /* Warn inbound CPUs that the cluster is being torn down: */
+ c->cluster = CLUSTER_GOING_DOWN;
+ sync_cache_w(&c->cluster);
+
+ /* Back out if the inbound cluster is already in the critical region: */
+ sync_cache_r(&c->inbound);
+ if (c->inbound == INBOUND_COMING_UP)
+ goto abort;
+
+ /*
+ * Wait for all CPUs to get out of the GOING_DOWN state, so that local
+ * teardown is complete on each CPU before tearing down the cluster.
+ *
+ * If any CPU has been woken up again from the DOWN state, then we
+ * shouldn't be taking the cluster down at all: abort in that case.
+ */
+ sync_cache_r(&c->cpus);
+ for (i = 0; i < MAX_CPUS_PER_CLUSTER; i++) {
+ int cpustate;
+
+ if (i == cpu)
+ continue;
+
+ while (1) {
+ cpustate = c->cpus[i].cpu;
+ if (cpustate != CPU_GOING_DOWN)
+ break;
+
+ wfe();
+ sync_cache_r(&c->cpus[i].cpu);
+ }
+
+ switch (cpustate) {
+ case CPU_DOWN:
+ continue;
+
+ default:
+ goto abort;
+ }
+ }
+
+ return true;
+
+abort:
+ __mcpm_outbound_leave_critical(cluster, CLUSTER_UP);
+ return false;
+}
+
+int __mcpm_cluster_state(unsigned int cluster)
+{
+ sync_cache_r(&mcpm_sync.clusters[cluster].cluster);
+ return mcpm_sync.clusters[cluster].cluster;
+}
+
+extern unsigned long mcpm_power_up_setup_phys;
+
+int __init mcpm_sync_init(
+ void (*power_up_setup)(unsigned int affinity_level))
+{
+ unsigned int i, j, mpidr, this_cluster;
+
+ BUILD_BUG_ON(MCPM_SYNC_CLUSTER_SIZE * MAX_NR_CLUSTERS != sizeof mcpm_sync);
+ BUG_ON((unsigned long)&mcpm_sync & (__CACHE_WRITEBACK_GRANULE - 1));
+
+ /*
+ * Set initial CPU and cluster states.
+ * Only one cluster is assumed to be active at this point.
+ */
+ for (i = 0; i < MAX_NR_CLUSTERS; i++) {
+ mcpm_sync.clusters[i].cluster = CLUSTER_DOWN;
+ mcpm_sync.clusters[i].inbound = INBOUND_NOT_COMING_UP;
+ for (j = 0; j < MAX_CPUS_PER_CLUSTER; j++)
+ mcpm_sync.clusters[i].cpus[j].cpu = CPU_DOWN;
+ }
+ mpidr = read_cpuid_mpidr();
+ this_cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
+ for_each_online_cpu(i)
+ mcpm_sync.clusters[this_cluster].cpus[i].cpu = CPU_UP;
+ mcpm_sync.clusters[this_cluster].cluster = CLUSTER_UP;
+ sync_cache_w(&mcpm_sync);
+
+ if (power_up_setup) {
+ mcpm_power_up_setup_phys = virt_to_phys(power_up_setup);
+ sync_cache_w(&mcpm_power_up_setup_phys);
+ }
+
+ return 0;
+}
--- /dev/null
+/*
+ * arch/arm/common/mcpm_head.S -- kernel entry point for multi-cluster PM
+ *
+ * Created by: Nicolas Pitre, March 2012
+ * Copyright: (C) 2012-2013 Linaro Limited
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ *
+ * Refer to Documentation/arm/cluster-pm-race-avoidance.txt
+ * for details of the synchronisation algorithms used here.
+ */
+
+#include <linux/linkage.h>
+#include <asm/mcpm.h>
+
+#include "vlock.h"
+
+.if MCPM_SYNC_CLUSTER_CPUS
+.error "cpus must be the first member of struct mcpm_sync_struct"
+.endif
+
+ .macro pr_dbg string
+#if defined(CONFIG_DEBUG_LL) && defined(DEBUG)
+ b 1901f
+1902: .asciz "CPU"
+1903: .asciz " cluster"
+1904: .asciz ": \string"
+ .align
+1901: adr r0, 1902b
+ bl printascii
+ mov r0, r9
+ bl printhex8
+ adr r0, 1903b
+ bl printascii
+ mov r0, r10
+ bl printhex8
+ adr r0, 1904b
+ bl printascii
+#endif
+ .endm
+
+ .arm
+ .align
+
+ENTRY(mcpm_entry_point)
+
+ THUMB( adr r12, BSYM(1f) )
+ THUMB( bx r12 )
+ THUMB( .thumb )
+1:
+ mrc p15, 0, r0, c0, c0, 5 @ MPIDR
+ ubfx r9, r0, #0, #8 @ r9 = cpu
+ ubfx r10, r0, #8, #8 @ r10 = cluster
+ mov r3, #MAX_CPUS_PER_CLUSTER
+ mla r4, r3, r10, r9 @ r4 = canonical CPU index
+ cmp r4, #(MAX_CPUS_PER_CLUSTER * MAX_NR_CLUSTERS)
+ blo 2f
+
+ /* We didn't expect this CPU. Try to cheaply make it quiet. */
+1: wfi
+ wfe
+ b 1b
+
+2: pr_dbg "kernel mcpm_entry_point\n"
+
+ /*
+ * MMU is off so we need to get to various variables in a
+ * position independent way.
+ */
+ adr r5, 3f
+ ldmia r5, {r6, r7, r8, r11}
+ add r6, r5, r6 @ r6 = mcpm_entry_vectors
+ ldr r7, [r5, r7] @ r7 = mcpm_power_up_setup_phys
+ add r8, r5, r8 @ r8 = mcpm_sync
+ add r11, r5, r11 @ r11 = first_man_locks
+
+ mov r0, #MCPM_SYNC_CLUSTER_SIZE
+ mla r8, r0, r10, r8 @ r8 = sync cluster base
+
+ @ Signal that this CPU is coming UP:
+ mov r0, #CPU_COMING_UP
+ mov r5, #MCPM_SYNC_CPU_SIZE
+ mla r5, r9, r5, r8 @ r5 = sync cpu address
+ strb r0, [r5]
+
+ @ At this point, the cluster cannot unexpectedly enter the GOING_DOWN
+ @ state, because there is at least one active CPU (this CPU).
+
+ mov r0, #VLOCK_SIZE
+ mla r11, r0, r10, r11 @ r11 = cluster first man lock
+ mov r0, r11
+ mov r1, r9 @ cpu
+ bl vlock_trylock @ implies DMB
+
+ cmp r0, #0 @ failed to get the lock?
+ bne mcpm_setup_wait @ wait for cluster setup if so
+
+ ldrb r0, [r8, #MCPM_SYNC_CLUSTER_CLUSTER]
+ cmp r0, #CLUSTER_UP @ cluster already up?
+ bne mcpm_setup @ if not, set up the cluster
+
+ @ Otherwise, release the first man lock and skip setup:
+ mov r0, r11
+ bl vlock_unlock
+ b mcpm_setup_complete
+
+mcpm_setup:
+ @ Control dependency implies strb not observable before previous ldrb.
+
+ @ Signal that the cluster is being brought up:
+ mov r0, #INBOUND_COMING_UP
+ strb r0, [r8, #MCPM_SYNC_CLUSTER_INBOUND]
+ dmb
+
+ @ Any CPU trying to take the cluster into CLUSTER_GOING_DOWN from this
+ @ point onwards will observe INBOUND_COMING_UP and abort.
+
+ @ Wait for any previously-pending cluster teardown operations to abort
+ @ or complete:
+mcpm_teardown_wait:
+ ldrb r0, [r8, #MCPM_SYNC_CLUSTER_CLUSTER]
+ cmp r0, #CLUSTER_GOING_DOWN
+ bne first_man_setup
+ wfe
+ b mcpm_teardown_wait
+
+first_man_setup:
+ dmb
+
+ @ If the outbound gave up before teardown started, skip cluster setup:
+
+ cmp r0, #CLUSTER_UP
+ beq mcpm_setup_leave
+
+ @ power_up_setup is now responsible for setting up the cluster:
+
+ cmp r7, #0
+ mov r0, #1 @ second (cluster) affinity level
+ blxne r7 @ Call power_up_setup if defined
+ dmb
+
+ mov r0, #CLUSTER_UP
+ strb r0, [r8, #MCPM_SYNC_CLUSTER_CLUSTER]
+ dmb
+
+mcpm_setup_leave:
+ @ Leave the cluster setup critical section:
+
+ mov r0, #INBOUND_NOT_COMING_UP
+ strb r0, [r8, #MCPM_SYNC_CLUSTER_INBOUND]
+ dsb
+ sev
+
+ mov r0, r11
+ bl vlock_unlock @ implies DMB
+ b mcpm_setup_complete
+
+ @ In the contended case, non-first men wait here for cluster setup
+ @ to complete:
+mcpm_setup_wait:
+ ldrb r0, [r8, #MCPM_SYNC_CLUSTER_CLUSTER]
+ cmp r0, #CLUSTER_UP
+ wfene
+ bne mcpm_setup_wait
+ dmb
+
+mcpm_setup_complete:
+ @ If a platform-specific CPU setup hook is needed, it is
+ @ called from here.
+
+ cmp r7, #0
+ mov r0, #0 @ first (CPU) affinity level
+ blxne r7 @ Call power_up_setup if defined
+ dmb
+
+ @ Mark the CPU as up:
+
+ mov r0, #CPU_UP
+ strb r0, [r5]
+
+ @ Observability order of CPU_UP and opening of the gate does not matter.
+
+mcpm_entry_gated:
+ ldr r5, [r6, r4, lsl #2] @ r5 = CPU entry vector
+ cmp r5, #0
+ wfeeq
+ beq mcpm_entry_gated
+ dmb
+
+ pr_dbg "released\n"
+ bx r5
+
+ .align 2
+
+3: .word mcpm_entry_vectors - .
+ .word mcpm_power_up_setup_phys - 3b
+ .word mcpm_sync - 3b
+ .word first_man_locks - 3b
+
+ENDPROC(mcpm_entry_point)
+
+ .bss
+
+ .align CACHE_WRITEBACK_ORDER
+ .type first_man_locks, #object
+first_man_locks:
+ .space VLOCK_SIZE * MAX_NR_CLUSTERS
+ .align CACHE_WRITEBACK_ORDER
+
+ .type mcpm_entry_vectors, #object
+ENTRY(mcpm_entry_vectors)
+ .space 4 * MAX_NR_CLUSTERS * MAX_CPUS_PER_CLUSTER
+
+ .type mcpm_power_up_setup_phys, #object
+ENTRY(mcpm_power_up_setup_phys)
+ .space 4 @ set by mcpm_sync_init()
--- /dev/null
+/*
+ * linux/arch/arm/mach-vexpress/mcpm_platsmp.c
+ *
+ * Created by: Nicolas Pitre, November 2012
+ * Copyright: (C) 2012-2013 Linaro Limited
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * Code to handle secondary CPU bringup and hotplug for the cluster power API.
+ */
+
+#include <linux/init.h>
+#include <linux/smp.h>
+#include <linux/spinlock.h>
+
+#include <linux/irqchip/arm-gic.h>
+
+#include <asm/mcpm.h>
+#include <asm/smp.h>
+#include <asm/smp_plat.h>
+
+static void __init simple_smp_init_cpus(void)
+{
+}
+
+static int __cpuinit mcpm_boot_secondary(unsigned int cpu, struct task_struct *idle)
+{
+ unsigned int mpidr, pcpu, pcluster, ret;
+ extern void secondary_startup(void);
+
+ mpidr = cpu_logical_map(cpu);
+ pcpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
+ pcluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
+ pr_debug("%s: logical CPU %d is physical CPU %d cluster %d\n",
+ __func__, cpu, pcpu, pcluster);
+
+ mcpm_set_entry_vector(pcpu, pcluster, NULL);
+ ret = mcpm_cpu_power_up(pcpu, pcluster);
+ if (ret)
+ return ret;
+ mcpm_set_entry_vector(pcpu, pcluster, secondary_startup);
+ arch_send_wakeup_ipi_mask(cpumask_of(cpu));
+ dsb_sev();
+ return 0;
+}
+
+static void __cpuinit mcpm_secondary_init(unsigned int cpu)
+{
+ mcpm_cpu_powered_up();
+ gic_secondary_init(0);
+}
+
+#ifdef CONFIG_HOTPLUG_CPU
+
+static int mcpm_cpu_disable(unsigned int cpu)
+{
+ /*
+ * We assume all CPUs may be shut down.
+ * This would be the hook to use for eventual Secure
+ * OS migration requests as described in the PSCI spec.
+ */
+ return 0;
+}
+
+static void mcpm_cpu_die(unsigned int cpu)
+{
+ unsigned int mpidr, pcpu, pcluster;
+ mpidr = read_cpuid_mpidr();
+ pcpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
+ pcluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
+ mcpm_set_entry_vector(pcpu, pcluster, NULL);
+ mcpm_cpu_power_down();
+}
+
+#endif
+
+static struct smp_operations __initdata mcpm_smp_ops = {
+ .smp_init_cpus = simple_smp_init_cpus,
+ .smp_boot_secondary = mcpm_boot_secondary,
+ .smp_secondary_init = mcpm_secondary_init,
+#ifdef CONFIG_HOTPLUG_CPU
+ .cpu_disable = mcpm_cpu_disable,
+ .cpu_die = mcpm_cpu_die,
+#endif
+};
+
+void __init mcpm_smp_set_ops(void)
+{
+ smp_set_ops(&mcpm_smp_ops);
+}
--- /dev/null
+/*
+ * vlock.S - simple voting lock implementation for ARM
+ *
+ * Created by: Dave Martin, 2012-08-16
+ * Copyright: (C) 2012-2013 Linaro Limited
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ *
+ * This algorithm is described in more detail in
+ * Documentation/arm/vlocks.txt.
+ */
+
+#include <linux/linkage.h>
+#include "vlock.h"
+
+/* Select different code if voting flags can fit in a single word. */
+#if VLOCK_VOTING_SIZE > 4
+#define FEW(x...)
+#define MANY(x...) x
+#else
+#define FEW(x...) x
+#define MANY(x...)
+#endif
+
+@ voting lock for first-man coordination
+
+.macro voting_begin rbase:req, rcpu:req, rscratch:req
+ mov \rscratch, #1
+ strb \rscratch, [\rbase, \rcpu]
+ dmb
+.endm
+
+.macro voting_end rbase:req, rcpu:req, rscratch:req
+ dmb
+ mov \rscratch, #0
+ strb \rscratch, [\rbase, \rcpu]
+ dsb
+ sev
+.endm
+
+/*
+ * The vlock structure must reside in Strongly-Ordered or Device memory.
+ * This implementation deliberately eliminates most of the barriers which
+ * would be required for other memory types, and assumes that independent
+ * writes to neighbouring locations within a cacheline do not interfere
+ * with one another.
+ */
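+
+/*
+ * A rough pseudocode sketch of the trylock sequence implemented below
+ * (the full algorithm is described in Documentation/arm/vlocks.txt):
+ *
+ *	voting[cpu] = 1;		raise my voting flag (voting_begin)
+ *	if (owner != VLOCK_OWNER_NONE)	lock already held: lower the
+ *		goto fail;		flag and return nonzero
+ *	owner = cpu;			submit my vote
+ *	voting[cpu] = 0;		end of my voting turn (voting_end)
+ *	wait until all voting[] == 0;	let the current round finish
+ *	return owner != cpu;		zero means I won the lock
+ */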
+
+@ r0: lock structure base
+@ r1: CPU ID (0-based index within cluster)
+ENTRY(vlock_trylock)
+ add r1, r1, #VLOCK_VOTING_OFFSET
+
+ voting_begin r0, r1, r2
+
+ ldrb r2, [r0, #VLOCK_OWNER_OFFSET] @ check whether lock is held
+ cmp r2, #VLOCK_OWNER_NONE
+ bne trylock_fail @ fail if so
+
+ @ Control dependency implies strb not observable before previous ldrb.
+
+ strb r1, [r0, #VLOCK_OWNER_OFFSET] @ submit my vote
+
+ voting_end r0, r1, r2 @ implies DMB
+
+ @ Wait for the current round of voting to finish:
+
+ MANY( mov r3, #VLOCK_VOTING_OFFSET )
+0:
+ MANY( ldr r2, [r0, r3] )
+ FEW( ldr r2, [r0, #VLOCK_VOTING_OFFSET] )
+ cmp r2, #0
+ wfene
+ bne 0b
+ MANY( add r3, r3, #4 )
+ MANY( cmp r3, #VLOCK_VOTING_OFFSET + VLOCK_VOTING_SIZE )
+ MANY( bne 0b )
+
+ @ Check who won:
+
+ dmb
+ ldrb r2, [r0, #VLOCK_OWNER_OFFSET]
+ eor r0, r1, r2 @ zero if I won, else nonzero
+ bx lr
+
+trylock_fail:
+ voting_end r0, r1, r2
+ mov r0, #1 @ nonzero indicates that I lost
+ bx lr
+ENDPROC(vlock_trylock)
+
+@ r0: lock structure base
+ENTRY(vlock_unlock)
+ dmb
+ mov r1, #VLOCK_OWNER_NONE
+ strb r1, [r0, #VLOCK_OWNER_OFFSET]
+ dsb
+ sev
+ bx lr
+ENDPROC(vlock_unlock)
--- /dev/null
+/*
+ * vlock.h - simple voting lock implementation
+ *
+ * Created by: Dave Martin, 2012-08-16
+ * Copyright: (C) 2012-2013 Linaro Limited
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef __VLOCK_H
+#define __VLOCK_H
+
+#include <asm/mcpm.h>
+
+/* Offsets and sizes are rounded to a word (4 bytes) */
+#define VLOCK_OWNER_OFFSET 0
+#define VLOCK_VOTING_OFFSET 4
+#define VLOCK_VOTING_SIZE ((MAX_CPUS_PER_CLUSTER + 3) / 4 * 4)
+#define VLOCK_SIZE (VLOCK_VOTING_OFFSET + VLOCK_VOTING_SIZE)
+#define VLOCK_OWNER_NONE 0
+
+#endif /* ! __VLOCK_H */
#define ATOMIC64_INIT(i) { (i) }
+#ifdef CONFIG_ARM_LPAE
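+/*
+ * With LPAE, aligned 64-bit ldrd/strd accesses are architecturally
+ * single-copy atomic, so plain loads and stores suffice here and no
+ * exclusive monitors are needed.
+ */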
+static inline u64 atomic64_read(const atomic64_t *v)
+{
+ u64 result;
+
+ __asm__ __volatile__("@ atomic64_read\n"
+" ldrd %0, %H0, [%1]"
+ : "=&r" (result)
+ : "r" (&v->counter), "Qo" (v->counter)
+ );
+
+ return result;
+}
+
+static inline void atomic64_set(atomic64_t *v, u64 i)
+{
+ __asm__ __volatile__("@ atomic64_set\n"
+" strd %2, %H2, [%1]"
+ : "=Qo" (v->counter)
+ : "r" (&v->counter), "r" (i)
+ );
+}
+#else
static inline u64 atomic64_read(const atomic64_t *v)
{
u64 result;
: "r" (&v->counter), "r" (i)
: "cc");
}
+#endif
static inline void atomic64_add(u64 i, atomic64_t *v)
{
flush_cache_all();
}
+/*
+ * Memory synchronization helpers for mixed cached vs non-cached accesses.
+ *
+ * Some synchronization algorithms have to set states in memory with the
+ * cache enabled or disabled depending on the code path. It is crucial
+ * to always ensure proper cache maintenance to update main memory right
+ * away in that case.
+ *
+ * Any cached write must be followed by a cache clean operation.
+ * Any cached read must be preceded by a cache invalidate operation.
+ * In the read case, however, a cache flush, i.e. an atomic clean+invalidate
+ * operation, is needed to avoid discarding possible concurrent writes to the
+ * accessed memory.
+ *
+ * Also, in order to prevent a cached writer from interfering with an
+ * adjacent non-cached writer, each state variable must be located in
+ * its own cache line.
+ */
+
+/*
+ * This needs to be >= the max cache writeback size of all
+ * supported platforms included in the current kernel configuration.
+ * This is used to align state variables to their own cache lines.
+ */
+#define __CACHE_WRITEBACK_ORDER 6 /* guessed from existing platforms */
+#define __CACHE_WRITEBACK_GRANULE (1 << __CACHE_WRITEBACK_ORDER)
+
+/*
+ * There is no __cpuc_clean_dcache_area, but we use that name anyway for
+ * clarity of intent, and alias it to __cpuc_flush_dcache_area.
+ */
+#define __cpuc_clean_dcache_area __cpuc_flush_dcache_area
+
+/*
+ * Ensure preceding writes to *p by this CPU are visible to
+ * subsequent reads by other CPUs:
+ */
+static inline void __sync_cache_range_w(volatile void *p, size_t size)
+{
+ char *_p = (char *)p;
+
+ __cpuc_clean_dcache_area(_p, size);
+ outer_clean_range(__pa(_p), __pa(_p + size));
+}
+
+/*
+ * Ensure preceding writes to *p by other CPUs are visible to
+ * subsequent reads by this CPU. We must be careful not to
+ * discard data simultaneously written by another CPU, hence the
+ * usage of flush rather than invalidate operations.
+ */
+static inline void __sync_cache_range_r(volatile void *p, size_t size)
+{
+ char *_p = (char *)p;
+
+#ifdef CONFIG_OUTER_CACHE
+ if (outer_cache.flush_range) {
+ /*
+ * Ensure dirty data migrated from other CPUs into our cache
+ * are cleaned out safely before the outer cache is cleaned:
+ */
+ __cpuc_clean_dcache_area(_p, size);
+
+ /* Clean and invalidate stale data for *p from outer ... */
+ outer_flush_range(__pa(_p), __pa(_p + size));
+ }
+#endif
+
+ /* ... and inner cache: */
+ __cpuc_flush_dcache_area(_p, size);
+}
+
+#define sync_cache_w(ptr) __sync_cache_range_w(ptr, sizeof *(ptr))
+#define sync_cache_r(ptr) __sync_cache_range_r(ptr, sizeof *(ptr))
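+
+/*
+ * Usage sketch; 'state' stands for a hypothetical cacheline-aligned
+ * variable shared with a non-cached observer:
+ *
+ *	state = NEW_VALUE;
+ *	sync_cache_w(&state);	push the cached write out to main memory
+ *
+ *	sync_cache_r(&state);	pull fresh data in from main memory...
+ *	x = state;		...before performing the cached read
+ */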
+
#endif
#define vectors_high() (0)
#endif
+#ifdef CONFIG_CPU_CP15
+
extern unsigned long cr_no_alignment; /* defined in entry-armv.S */
extern unsigned long cr_alignment; /* defined in entry-armv.S */
isb();
}
-#endif
+#else /* ifdef CONFIG_CPU_CP15 */
+
+/*
+ * cr_alignment and cr_no_alignment are tightly coupled to cp15 (at least in the
+ * minds of the developers). Yielding 0 for machines without a cp15 (and making
+ * it read-only) is fine for most cases and saves quite some #ifdeffery.
+ */
+#define cr_no_alignment UL(0)
+#define cr_alignment UL(0)
+
+#endif /* ifdef CONFIG_CPU_CP15 / else */
+
+#endif /* ifndef __ASSEMBLY__ */
#endif
#define MPIDR_AFFINITY_LEVEL(mpidr, level) \
((mpidr >> (MPIDR_LEVEL_BITS * level)) & MPIDR_LEVEL_MASK)
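+/* e.g. for MPIDR 0x102, level 0 yields CPU 2 and level 1 yields cluster 1. */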
+#define ARM_CPU_IMP_ARM 0x41
+#define ARM_CPU_IMP_INTEL 0x69
+
+#define ARM_CPU_PART_ARM1136 0xB360
+#define ARM_CPU_PART_ARM1156 0xB560
+#define ARM_CPU_PART_ARM1176 0xB760
+#define ARM_CPU_PART_ARM11MPCORE 0xB020
+#define ARM_CPU_PART_CORTEX_A8 0xC080
+#define ARM_CPU_PART_CORTEX_A9 0xC090
+#define ARM_CPU_PART_CORTEX_A5 0xC050
+#define ARM_CPU_PART_CORTEX_A15 0xC0F0
+#define ARM_CPU_PART_CORTEX_A7 0xC070
+
+#define ARM_CPU_XSCALE_ARCH_MASK 0xe000
+#define ARM_CPU_XSCALE_ARCH_V1 0x2000
+#define ARM_CPU_XSCALE_ARCH_V2 0x4000
+#define ARM_CPU_XSCALE_ARCH_V3 0x6000
+
extern unsigned int processor_id;
#ifdef CONFIG_CPU_CP15
: "cc"); \
__val; \
})
+
#define read_cpuid_ext(ext_reg) \
({ \
unsigned int __val; \
: "cc"); \
__val; \
})
-#else
-#define read_cpuid(reg) (processor_id)
-#define read_cpuid_ext(reg) 0
-#endif
-#define ARM_CPU_IMP_ARM 0x41
-#define ARM_CPU_IMP_INTEL 0x69
+#else /* ifdef CONFIG_CPU_CP15 */
-#define ARM_CPU_PART_ARM1136 0xB360
-#define ARM_CPU_PART_ARM1156 0xB560
-#define ARM_CPU_PART_ARM1176 0xB760
-#define ARM_CPU_PART_ARM11MPCORE 0xB020
-#define ARM_CPU_PART_CORTEX_A8 0xC080
-#define ARM_CPU_PART_CORTEX_A9 0xC090
-#define ARM_CPU_PART_CORTEX_A5 0xC050
-#define ARM_CPU_PART_CORTEX_A15 0xC0F0
-#define ARM_CPU_PART_CORTEX_A7 0xC070
+/*
+ * read_cpuid and read_cpuid_ext should only ever be called on machines that
+ * have cp15, so warn on other usages.
+ */
+#define read_cpuid(reg) \
+ ({ \
+ WARN_ON_ONCE(1); \
+ 0; \
+ })
-#define ARM_CPU_XSCALE_ARCH_MASK 0xe000
-#define ARM_CPU_XSCALE_ARCH_V1 0x2000
-#define ARM_CPU_XSCALE_ARCH_V2 0x4000
-#define ARM_CPU_XSCALE_ARCH_V3 0x6000
+#define read_cpuid_ext(reg) read_cpuid(reg)
+
+#endif /* ifdef CONFIG_CPU_CP15 / else */
+#ifdef CONFIG_CPU_CP15
/*
* The CPU ID never changes at run time, so we might as well tell the
* compiler that it's constant. Use this function to read the CPU ID
return read_cpuid(CPUID_ID);
}
+#else /* ifdef CONFIG_CPU_CP15 */
+
+static inline unsigned int __attribute_const__ read_cpuid_id(void)
+{
+ return processor_id;
+}
+
+#endif /* ifdef CONFIG_CPU_CP15 / else */
+
static inline unsigned int __attribute_const__ read_cpuid_implementor(void)
{
return (read_cpuid_id() & 0xFF000000) >> 24;
void (*delay)(unsigned long);
void (*const_udelay)(unsigned long);
void (*udelay)(unsigned long);
- bool const_clock;
+ unsigned long ticks_per_jiffy;
} arm_delay_ops;
#define __delay(n) arm_delay_ops.delay(n)
#undef _CACHE
#undef MULTI_CACHE
-#if defined(CONFIG_CPU_CACHE_V3)
-# ifdef _CACHE
-# define MULTI_CACHE 1
-# else
-# define _CACHE v3
-# endif
-#endif
-
#if defined(CONFIG_CPU_CACHE_V4)
# ifdef _CACHE
# define MULTI_CACHE 1
* ================
*
* We have the following to choose from:
- * arm6 - ARM6 style
* arm7 - ARM7 style
* v4_early - ARMv4 without Thumb early abort handler
* v4t_late - ARMv4 with Thumb late abort handler
* v4t_early - ARMv4 with Thumb early abort handler
- * v5tej_early - ARMv5 with Thumb and Java early abort handler
+ * v5t_early - ARMv5 with Thumb early abort handler
+ * v5tj_early - ARMv5 with Thumb and Java early abort handler
* xscale - ARMv5 with Thumb with Xscale extensions
* v6_early - ARMv6 generic early abort handler
* v7_early - ARMv7 generic early abort handler
# endif
#endif
-#ifdef CONFIG_CPU_ABRT_LV4T
+#ifdef CONFIG_CPU_ABRT_EV4
# ifdef CPU_DABORT_HANDLER
# define MULTI_DABORT 1
# else
-# define CPU_DABORT_HANDLER v4t_late_abort
+# define CPU_DABORT_HANDLER v4_early_abort
# endif
#endif
-#ifdef CONFIG_CPU_ABRT_EV4
+#ifdef CONFIG_CPU_ABRT_LV4T
# ifdef CPU_DABORT_HANDLER
# define MULTI_DABORT 1
# else
-# define CPU_DABORT_HANDLER v4_early_abort
+# define CPU_DABORT_HANDLER v4t_late_abort
# endif
#endif
# endif
#endif
-#ifdef CONFIG_CPU_ABRT_EV5TJ
+#ifdef CONFIG_CPU_ABRT_EV5T
# ifdef CPU_DABORT_HANDLER
# define MULTI_DABORT 1
# else
-# define CPU_DABORT_HANDLER v5tj_early_abort
+# define CPU_DABORT_HANDLER v5t_early_abort
# endif
#endif
-#ifdef CONFIG_CPU_ABRT_EV5T
+#ifdef CONFIG_CPU_ABRT_EV5TJ
# ifdef CPU_DABORT_HANDLER
# define MULTI_DABORT 1
# else
-# define CPU_DABORT_HANDLER v5t_early_abort
+# define CPU_DABORT_HANDLER v5tj_early_abort
# endif
#endif
* IOP3XX processor registers
*/
#define IOP3XX_PERIPHERAL_PHYS_BASE 0xffffe000
-#define IOP3XX_PERIPHERAL_VIRT_BASE 0xfeffe000
+#define IOP3XX_PERIPHERAL_VIRT_BASE 0xfedfe000
#define IOP3XX_PERIPHERAL_SIZE 0x00002000
#define IOP3XX_PERIPHERAL_UPPER_PA (IOP3XX_PERIPHERAL_PHYS_BASE +\
IOP3XX_PERIPHERAL_SIZE - 1)
#endif
#endif
+/*
+ * Needed to be able to broadcast the TLB invalidation for kmap.
+ */
+#ifdef CONFIG_ARM_ERRATA_798181
+#undef ARCH_NEEDS_KMAP_HIGH_GET
+#endif
+
#ifdef ARCH_NEEDS_KMAP_HIGH_GET
extern void *kmap_high_get(struct page *page);
#else
#define HSR_HVC_IMM_MASK ((1UL << 16) - 1)
+#define HSR_DABT_S1PTW (1U << 7)
+#define HSR_DABT_CM (1U << 8)
+#define HSR_DABT_EA (1U << 9)
+
#endif /* __ARM_KVM_ARM_H__ */
extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
extern void __kvm_flush_vm_context(void);
-extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
+extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
#endif
#include <linux/kvm_host.h>
#include <asm/kvm_asm.h>
#include <asm/kvm_mmio.h>
+#include <asm/kvm_arm.h>
-u32 *vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num);
-u32 *vcpu_spsr(struct kvm_vcpu *vcpu);
+unsigned long *vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num);
+unsigned long *vcpu_spsr(struct kvm_vcpu *vcpu);
-int kvm_handle_wfi(struct kvm_vcpu *vcpu, struct kvm_run *run);
+bool kvm_condition_valid(struct kvm_vcpu *vcpu);
void kvm_skip_instr(struct kvm_vcpu *vcpu, bool is_wide_instr);
void kvm_inject_undefined(struct kvm_vcpu *vcpu);
void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long addr);
return 1;
}
-static inline u32 *vcpu_pc(struct kvm_vcpu *vcpu)
+static inline unsigned long *vcpu_pc(struct kvm_vcpu *vcpu)
{
- return (u32 *)&vcpu->arch.regs.usr_regs.ARM_pc;
+ return &vcpu->arch.regs.usr_regs.ARM_pc;
}
-static inline u32 *vcpu_cpsr(struct kvm_vcpu *vcpu)
+static inline unsigned long *vcpu_cpsr(struct kvm_vcpu *vcpu)
{
- return (u32 *)&vcpu->arch.regs.usr_regs.ARM_cpsr;
+ return &vcpu->arch.regs.usr_regs.ARM_cpsr;
}
static inline void vcpu_set_thumb(struct kvm_vcpu *vcpu)
return reg == 15;
}
+static inline u32 kvm_vcpu_get_hsr(struct kvm_vcpu *vcpu)
+{
+ return vcpu->arch.fault.hsr;
+}
+
+static inline unsigned long kvm_vcpu_get_hfar(struct kvm_vcpu *vcpu)
+{
+ return vcpu->arch.fault.hxfar;
+}
+
+static inline phys_addr_t kvm_vcpu_get_fault_ipa(struct kvm_vcpu *vcpu)
+{
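+	/* HPFAR bits [31:4] hold IPA[39:12]; shifting left by 8 rebuilds the IPA */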
+ return ((phys_addr_t)vcpu->arch.fault.hpfar & HPFAR_MASK) << 8;
+}
+
+static inline unsigned long kvm_vcpu_get_hyp_pc(struct kvm_vcpu *vcpu)
+{
+ return vcpu->arch.fault.hyp_pc;
+}
+
+static inline bool kvm_vcpu_dabt_isvalid(struct kvm_vcpu *vcpu)
+{
+ return kvm_vcpu_get_hsr(vcpu) & HSR_ISV;
+}
+
+static inline bool kvm_vcpu_dabt_iswrite(struct kvm_vcpu *vcpu)
+{
+ return kvm_vcpu_get_hsr(vcpu) & HSR_WNR;
+}
+
+static inline bool kvm_vcpu_dabt_issext(struct kvm_vcpu *vcpu)
+{
+ return kvm_vcpu_get_hsr(vcpu) & HSR_SSE;
+}
+
+static inline int kvm_vcpu_dabt_get_rd(struct kvm_vcpu *vcpu)
+{
+ return (kvm_vcpu_get_hsr(vcpu) & HSR_SRT_MASK) >> HSR_SRT_SHIFT;
+}
+
+static inline bool kvm_vcpu_dabt_isextabt(struct kvm_vcpu *vcpu)
+{
+ return kvm_vcpu_get_hsr(vcpu) & HSR_DABT_EA;
+}
+
+static inline bool kvm_vcpu_dabt_iss1tw(struct kvm_vcpu *vcpu)
+{
+ return kvm_vcpu_get_hsr(vcpu) & HSR_DABT_S1PTW;
+}
+
+/* Get Access Size from a data abort */
+static inline int kvm_vcpu_dabt_get_as(struct kvm_vcpu *vcpu)
+{
+ switch ((kvm_vcpu_get_hsr(vcpu) >> 22) & 0x3) {
+ case 0:
+ return 1;
+ case 1:
+ return 2;
+ case 2:
+ return 4;
+ default:
+ kvm_err("Hardware is weird: SAS 0b11 is reserved\n");
+ return -EFAULT;
+ }
+}
+
+/* This one is not specific to Data Abort */
+static inline bool kvm_vcpu_trap_il_is32bit(struct kvm_vcpu *vcpu)
+{
+ return kvm_vcpu_get_hsr(vcpu) & HSR_IL;
+}
+
+static inline u8 kvm_vcpu_trap_get_class(struct kvm_vcpu *vcpu)
+{
+ return kvm_vcpu_get_hsr(vcpu) >> HSR_EC_SHIFT;
+}
+
+static inline bool kvm_vcpu_trap_is_iabt(struct kvm_vcpu *vcpu)
+{
+ return kvm_vcpu_trap_get_class(vcpu) == HSR_EC_IABT;
+}
+
+static inline u8 kvm_vcpu_trap_get_fault(struct kvm_vcpu *vcpu)
+{
+ return kvm_vcpu_get_hsr(vcpu) & HSR_FSC_TYPE;
+}
+
+static inline u32 kvm_vcpu_hvc_get_imm(struct kvm_vcpu *vcpu)
+{
+ return kvm_vcpu_get_hsr(vcpu) & HSR_HVC_IMM_MASK;
+}
+
#endif /* __ARM_KVM_EMULATE_H__ */
void *objects[KVM_NR_MEM_OBJS];
};
+struct kvm_vcpu_fault_info {
+ u32 hsr; /* Hyp Syndrome Register */
+ u32 hxfar; /* Hyp Data/Inst. Fault Address Register */
+ u32 hpfar; /* Hyp IPA Fault Address Register */
+ u32 hyp_pc; /* PC when exception was taken from Hyp mode */
+};
+
+typedef struct vfp_hard_struct kvm_kernel_vfp_t;
+
struct kvm_vcpu_arch {
struct kvm_regs regs;
u32 midr;
/* Exception Information */
- u32 hsr; /* Hyp Syndrome Register */
- u32 hxfar; /* Hyp Data/Inst Fault Address Register */
- u32 hpfar; /* Hyp IPA Fault Address Register */
+ struct kvm_vcpu_fault_info fault;
/* Floating point registers (VFP and Advanced SIMD/NEON) */
- struct vfp_hard_struct vfp_guest;
- struct vfp_hard_struct *vfp_host;
+ kvm_kernel_vfp_t vfp_guest;
+ kvm_kernel_vfp_t *vfp_host;
/* VGIC state */
struct vgic_cpu vgic_cpu;
/* Interrupt related fields */
u32 irq_lines; /* IRQ and FIQ levels */
- /* Hyp exception information */
- u32 hyp_pc; /* PC when exception was taken from Hyp mode */
-
/* Cache some mmu pages needed inside spinlock regions */
struct kvm_mmu_memory_cache mmu_page_cache;
int kvm_arm_coproc_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *);
int kvm_arm_coproc_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *);
+int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
+ int exception_index);
+
+static inline void __cpu_init_hyp_mode(unsigned long long pgd_ptr,
+ unsigned long hyp_stack_ptr,
+ unsigned long vector_ptr)
+{
+ unsigned long pgd_low, pgd_high;
+
+ pgd_low = (pgd_ptr & ((1ULL << 32) - 1));
+ pgd_high = (pgd_ptr >> 32ULL);
+
+ /*
+ * Call initialization code, and switch to the full blown
+ * HYP code. The init code doesn't need to preserve these registers as
+ * r1-r3 and r12 are already callee save according to the AAPCS.
+	 * Note that we slightly misuse the prototype by casting pgd_low to
+ * a void *.
+ */
+ kvm_call_hyp((void *)pgd_low, pgd_high, hyp_stack_ptr, vector_ptr);
+}
+
#endif /* __ARM_KVM_HOST_H__ */
#ifndef __ARM_KVM_MMU_H__
#define __ARM_KVM_MMU_H__
+#include <asm/cacheflush.h>
+#include <asm/pgalloc.h>
+#include <asm/idmap.h>
+
+/*
+ * We directly use the kernel VA for the HYP, as we can directly share
+ * the mapping (HTTBR "covers" TTBR1).
+ */
+#define HYP_PAGE_OFFSET_MASK (~0UL)
+#define HYP_PAGE_OFFSET PAGE_OFFSET
+#define KERN_TO_HYP(kva) (kva)
+
int create_hyp_mappings(void *from, void *to);
int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
void free_hyp_pmds(void);
int kvm_mmu_init(void);
void kvm_clear_hyp_idmap(void);
+static inline void kvm_set_pte(pte_t *pte, pte_t new_pte)
+{
+ pte_val(*pte) = new_pte;
+ /*
+ * flush_pmd_entry just takes a void pointer and cleans the necessary
+ * cache entries, so we can reuse the function for ptes.
+ */
+ flush_pmd_entry(pte);
+}
+
static inline bool kvm_is_write_fault(unsigned long hsr)
{
unsigned long hsr_ec = hsr >> HSR_EC_SHIFT;
return true;
}
+static inline void kvm_clean_pgd(pgd_t *pgd)
+{
+ clean_dcache_area(pgd, PTRS_PER_S2_PGD * sizeof(pgd_t));
+}
+
+static inline void kvm_clean_pmd_entry(pmd_t *pmd)
+{
+ clean_pmd_entry(pmd);
+}
+
+static inline void kvm_clean_pte(pte_t *pte)
+{
+ clean_pte_table(pte);
+}
+
+static inline void kvm_set_s2pte_writable(pte_t *pte)
+{
+ pte_val(*pte) |= L_PTE_S2_RDWR;
+}
+
+struct kvm;
+
+static inline void coherent_icache_guest_page(struct kvm *kvm, gfn_t gfn)
+{
+ /*
+ * If we are going to insert an instruction page and the icache is
+ * either VIPT or PIPT, there is a potential problem where the host
+ * (or another VM) may have used the same page as this guest, and we
+ * read incorrect data from the icache. If we're using a PIPT cache,
+ * we can invalidate just that page, but if we are using a VIPT cache
+ * we need to invalidate the entire icache - damn shame - as written
+ * in the ARM ARM (DDI 0406C.b - Page B3-1393).
+ *
+	 * VIVT caches are tagged using both the ASID and the VMID and don't
+ * need any kind of flushing (DDI 0406C.b - Page B3-1392).
+ */
+ if (icache_is_pipt()) {
+ unsigned long hva = gfn_to_hva(kvm, gfn);
+ __cpuc_coherent_user_range(hva, hva + PAGE_SIZE);
+ } else if (!icache_is_vivt_asid_tagged()) {
+ /* any kind of VIPT cache */
+ __flush_icache_all();
+ }
+}
+
#endif /* __ARM_KVM_MMU_H__ */
#include <linux/kernel.h>
#include <linux/kvm.h>
-#include <linux/kvm_host.h>
#include <linux/irqreturn.h>
#include <linux/spinlock.h>
#include <linux/types.h>
void (*postinit)(void);
u8 (*swizzle)(struct pci_dev *dev, u8 *pin);
int (*map_irq)(const struct pci_dev *dev, u8 slot, u8 pin);
+ resource_size_t (*align_resource)(struct pci_dev *dev,
+ const struct resource *res,
+ resource_size_t start,
+ resource_size_t size,
+ resource_size_t align);
};
/*
u8 (*swizzle)(struct pci_dev *, u8 *);
/* IRQ mapping */
int (*map_irq)(const struct pci_dev *, u8, u8);
+	/* Resource alignment requirements */
+ resource_size_t (*align_resource)(struct pci_dev *dev,
+ const struct resource *res,
+ resource_size_t start,
+ resource_size_t size,
+ resource_size_t align);
void *private_data; /* platform controller private data */
};
--- /dev/null
+/*
+ * arch/arm/include/asm/mcpm.h
+ *
+ * Created by: Nicolas Pitre, April 2012
+ * Copyright: (C) 2012-2013 Linaro Limited
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef MCPM_H
+#define MCPM_H
+
+/*
+ * Maximum number of possible clusters / CPUs per cluster.
+ *
+ * This should be sufficient for quite a while, while keeping the
+ * (assembly) code simpler. When these limits start to grow we will have
+ * to consider dynamic allocation.
+ */
+#define MAX_CPUS_PER_CLUSTER 4
+#define MAX_NR_CLUSTERS 2
+
+#ifndef __ASSEMBLY__
+
+#include <linux/types.h>
+#include <asm/cacheflush.h>
+
+/*
+ * Platform specific code should use this symbol to set up secondary
+ * entry location for processors to use when released from reset.
+ */
+extern void mcpm_entry_point(void);
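+
+/*
+ * A platform would typically point its secondary start address register
+ * at this symbol (SPC_SECONDARY_START here is a hypothetical register):
+ *
+ *	writel(virt_to_phys(mcpm_entry_point), SPC_SECONDARY_START);
+ */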
+
+/*
+ * This is used to indicate, via ptr, where the given CPU of the given
+ * cluster should branch once it is ready to re-enter the kernel, or
+ * NULL if the CPU should be gated. A gated CPU is held in a WFE loop
+ * until its vector becomes non-NULL.
+ */
+void mcpm_set_entry_vector(unsigned cpu, unsigned cluster, void *ptr);
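+
+/*
+ * Usage sketch, following what mcpm_boot_secondary() does earlier in
+ * this series:
+ *
+ *	mcpm_set_entry_vector(cpu, cluster, NULL);	gate the CPU
+ *	mcpm_cpu_power_up(cpu, cluster);
+ *	mcpm_set_entry_vector(cpu, cluster, secondary_startup);
+ */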
+
+/*
+ * CPU/cluster power operations API for higher subsystems to use.
+ */
+
+/**
+ * mcpm_cpu_power_up - make the given CPU in the given cluster runnable
+ *
+ * @cpu: CPU number within given cluster
+ * @cluster: cluster number for the CPU
+ *
+ * The identified CPU is brought out of reset. If the cluster was powered
+ * down then it is brought up as well, taking care not to let the other CPUs
+ * in the cluster run, and ensuring appropriate cluster setup.
+ *
+ * Caller must ensure the appropriate entry vector is initialized with
+ * mcpm_set_entry_vector() prior to calling this.
+ *
+ * This must be called in a sleepable context. However, the implementation
+ * is strongly encouraged to return early and let the operation happen
+ * asynchronously, especially when significant delays are expected.
+ *
+ * If the operation cannot be performed then an error code is returned.
+ */
+int mcpm_cpu_power_up(unsigned int cpu, unsigned int cluster);
+
+/**
+ * mcpm_cpu_power_down - power the calling CPU down
+ *
+ * The calling CPU is powered down.
+ *
+ * If this CPU is found to be the "last man standing" in the cluster
+ * then the cluster is prepared for power-down too.
+ *
+ * This must be called with interrupts disabled.
+ *
+ * This does not return. Re-entry in the kernel is expected via
+ * mcpm_entry_point.
+ */
+void mcpm_cpu_power_down(void);
+
+/**
+ * mcpm_cpu_suspend - bring the calling CPU into a suspended state
+ *
+ * @expected_residency: duration in microseconds the CPU is expected
+ * to remain suspended, or 0 if unknown/infinity.
+ *
+ * The calling CPU is suspended. The expected residency argument is used
+ * as a hint by the platform specific backend to implement the appropriate
+ * sleep state level based on its knowledge of the wake-up latency
+ * for the given hardware.
+ *
+ * If this CPU is found to be the "last man standing" in the cluster
+ * then the cluster may be prepared for power-down too, if the expected
+ * residency makes it worthwhile.
+ *
+ * This must be called with interrupts disabled.
+ *
+ * This does not return. Re-entry in the kernel is expected via
+ * mcpm_entry_point.
+ */
+void mcpm_cpu_suspend(u64 expected_residency);
+
+/**
+ * mcpm_cpu_powered_up - housekeeping work after a CPU has been powered up
+ *
+ * This lets the platform specific backend code perform needed housekeeping
+ * work. This must be called by the newly activated CPU as soon as it is
+ * fully operational in kernel space, before it enables interrupts.
+ *
+ * If the operation cannot be performed then an error code is returned.
+ */
+int mcpm_cpu_powered_up(void);
+
+/*
+ * Platform specific methods used in the implementation of the above API.
+ */
+struct mcpm_platform_ops {
+ int (*power_up)(unsigned int cpu, unsigned int cluster);
+ void (*power_down)(void);
+ void (*suspend)(u64);
+ void (*powered_up)(void);
+};
+
+/**
+ * mcpm_platform_register - register platform specific power methods
+ *
+ * @ops: mcpm_platform_ops structure to register
+ *
+ * An error is returned if the registration has been done previously.
+ */
+int __init mcpm_platform_register(const struct mcpm_platform_ops *ops);
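+
+/*
+ * Registration sketch; the my_pm_* backend names are hypothetical:
+ *
+ *	static const struct mcpm_platform_ops my_pm_power_ops = {
+ *		.power_up	= my_pm_power_up,
+ *		.power_down	= my_pm_power_down,
+ *		.suspend	= my_pm_suspend,
+ *		.powered_up	= my_pm_powered_up,
+ *	};
+ *
+ *	ret = mcpm_platform_register(&my_pm_power_ops);
+ */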
+
+/* Synchronisation structures for coordinating safe cluster setup/teardown: */
+
+/*
+ * When modifying this structure, make sure you update the MCPM_SYNC_ defines
+ * to match.
+ */
+struct mcpm_sync_struct {
+ /* individual CPU states */
+ struct {
+ s8 cpu __aligned(__CACHE_WRITEBACK_GRANULE);
+ } cpus[MAX_CPUS_PER_CLUSTER];
+
+ /* cluster state */
+ s8 cluster __aligned(__CACHE_WRITEBACK_GRANULE);
+
+ /* inbound-side state */
+ s8 inbound __aligned(__CACHE_WRITEBACK_GRANULE);
+};
+
+struct sync_struct {
+ struct mcpm_sync_struct clusters[MAX_NR_CLUSTERS];
+};
+
+extern unsigned long sync_phys; /* physical address of *mcpm_sync */
+
+void __mcpm_cpu_going_down(unsigned int cpu, unsigned int cluster);
+void __mcpm_cpu_down(unsigned int cpu, unsigned int cluster);
+void __mcpm_outbound_leave_critical(unsigned int cluster, int state);
+bool __mcpm_outbound_enter_critical(unsigned int this_cpu, unsigned int cluster);
+int __mcpm_cluster_state(unsigned int cluster);
+
+int __init mcpm_sync_init(
+ void (*power_up_setup)(unsigned int affinity_level));
+
+void __init mcpm_smp_set_ops(void);
+
+#else
+
+/*
+ * asm-offsets.h causes trouble when included in .c files, and cacheflush.h
+ * cannot be included in asm files. Let's work around the conflict like this.
+ */
+#include <asm/asm-offsets.h>
+#define __CACHE_WRITEBACK_GRANULE CACHE_WRITEBACK_GRANULE
+
+#endif /* ! __ASSEMBLY__ */
+
+/* Definitions for mcpm_sync_struct */
+#define CPU_DOWN 0x11
+#define CPU_COMING_UP 0x12
+#define CPU_UP 0x13
+#define CPU_GOING_DOWN 0x14
+
+#define CLUSTER_DOWN 0x21
+#define CLUSTER_UP 0x22
+#define CLUSTER_GOING_DOWN 0x23
+
+#define INBOUND_NOT_COMING_UP 0x31
+#define INBOUND_COMING_UP 0x32
+
+/*
+ * Offsets for the mcpm_sync_struct members, for use in asm.
+ * We don't want to make them global to the kernel via asm-offsets.c.
+ */
+#define MCPM_SYNC_CLUSTER_CPUS 0
+#define MCPM_SYNC_CPU_SIZE __CACHE_WRITEBACK_GRANULE
+#define MCPM_SYNC_CLUSTER_CLUSTER \
+ (MCPM_SYNC_CLUSTER_CPUS + MCPM_SYNC_CPU_SIZE * MAX_CPUS_PER_CLUSTER)
+#define MCPM_SYNC_CLUSTER_INBOUND \
+ (MCPM_SYNC_CLUSTER_CLUSTER + __CACHE_WRITEBACK_GRANULE)
+#define MCPM_SYNC_CLUSTER_SIZE \
+ (MCPM_SYNC_CLUSTER_INBOUND + __CACHE_WRITEBACK_GRANULE)
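+
+/*
+ * Worked example with the values defined above (MAX_CPUS_PER_CLUSTER == 4,
+ * __CACHE_WRITEBACK_GRANULE == 64):
+ *	MCPM_SYNC_CPU_SIZE		=  64
+ *	MCPM_SYNC_CLUSTER_CLUSTER	= 256
+ *	MCPM_SYNC_CLUSTER_INBOUND	= 320
+ *	MCPM_SYNC_CLUSTER_SIZE		= 384
+ */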
+
+#endif
void check_and_switch_context(struct mm_struct *mm, struct task_struct *tsk);
#define init_new_context(tsk,mm) ({ atomic64_set(&mm->context.id, 0); 0; })
+DECLARE_PER_CPU(atomic64_t, active_asids);
+
#else /* !CONFIG_CPU_HAS_ASID */
#ifdef CONFIG_MMU
#define L_PTE_S2_MT_WRITETHROUGH (_AT(pteval_t, 0xa) << 2) /* MemAttr[3:0] */
#define L_PTE_S2_MT_WRITEBACK (_AT(pteval_t, 0xf) << 2) /* MemAttr[3:0] */
#define L_PTE_S2_RDONLY (_AT(pteval_t, 1) << 6) /* HAP[1] */
-#define L_PTE_S2_RDWR (_AT(pteval_t, 2) << 6) /* HAP[2:1] */
+#define L_PTE_S2_RDWR (_AT(pteval_t, 3) << 6) /* HAP[2:1] */
/*
* Hyp-mode PL2 PTE definitions for LPAE.
*/
#define FIRST_USER_ADDRESS PAGE_SIZE
+/*
+ * Use TASK_SIZE as the ceiling argument for free_pgtables() and
+ * free_pgd_range() to avoid freeing the modules pmd when LPAE is enabled (pmd
+ * page shared between user and kernel).
+ */
+#ifdef CONFIG_ARM_LPAE
+#define USER_PGTABLES_CEILING TASK_SIZE
+#endif
+
/*
* The pgprot_* and protection_map entries will be fixed up in runtime
* to include the cachable and bufferable bits based on memory policy,
#define TIF_SYSCALL_AUDIT 9
#define TIF_SYSCALL_TRACEPOINT 10
#define TIF_SECCOMP 11 /* seccomp syscall filtering active */
+#define TIF_NOHZ 12 /* in adaptive nohz mode */
#define TIF_USING_IWMMXT 17
#define TIF_MEMDIE 18 /* is terminating due to OOM killer */
#define TIF_RESTORE_SIGMASK 20
#include <asm/glue.h>
-#define TLB_V3_PAGE (1 << 0)
#define TLB_V4_U_PAGE (1 << 1)
#define TLB_V4_D_PAGE (1 << 2)
#define TLB_V4_I_PAGE (1 << 3)
#define TLB_V6_D_PAGE (1 << 5)
#define TLB_V6_I_PAGE (1 << 6)
-#define TLB_V3_FULL (1 << 8)
#define TLB_V4_U_FULL (1 << 9)
#define TLB_V4_D_FULL (1 << 10)
#define TLB_V4_I_FULL (1 << 11)
* =============
*
* We have the following to choose from:
- * v3 - ARMv3
* v4 - ARMv4 without write buffer
* v4wb - ARMv4 with write buffer without I TLB flush entry instruction
* v4wbi - ARMv4 with write buffer with I TLB flush entry instruction
# define v6wbi_always_flags (-1UL)
#endif
-#define v7wbi_tlb_flags_smp (TLB_WB | TLB_DCLEAN | TLB_BARRIER | \
+#define v7wbi_tlb_flags_smp (TLB_WB | TLB_BARRIER | \
TLB_V7_UIS_FULL | TLB_V7_UIS_PAGE | \
TLB_V7_UIS_ASID | TLB_V7_UIS_BP)
#define v7wbi_tlb_flags_up (TLB_WB | TLB_DCLEAN | TLB_BARRIER | \
if (tlb_flag(TLB_WB))
dsb();
- tlb_op(TLB_V3_FULL, "c6, c0, 0", zero);
tlb_op(TLB_V4_U_FULL | TLB_V6_U_FULL, "c8, c7, 0", zero);
tlb_op(TLB_V4_D_FULL | TLB_V6_D_FULL, "c8, c6, 0", zero);
tlb_op(TLB_V4_I_FULL | TLB_V6_I_FULL, "c8, c5, 0", zero);
if (tlb_flag(TLB_WB))
dsb();
- if (possible_tlb_flags & (TLB_V3_FULL|TLB_V4_U_FULL|TLB_V4_D_FULL|TLB_V4_I_FULL)) {
+ if (possible_tlb_flags & (TLB_V4_U_FULL|TLB_V4_D_FULL|TLB_V4_I_FULL)) {
if (cpumask_test_cpu(get_cpu(), mm_cpumask(mm))) {
- tlb_op(TLB_V3_FULL, "c6, c0, 0", zero);
tlb_op(TLB_V4_U_FULL, "c8, c7, 0", zero);
tlb_op(TLB_V4_D_FULL, "c8, c6, 0", zero);
tlb_op(TLB_V4_I_FULL, "c8, c5, 0", zero);
if (tlb_flag(TLB_WB))
dsb();
- if (possible_tlb_flags & (TLB_V3_PAGE|TLB_V4_U_PAGE|TLB_V4_D_PAGE|TLB_V4_I_PAGE|TLB_V4_I_FULL) &&
+ if (possible_tlb_flags & (TLB_V4_U_PAGE|TLB_V4_D_PAGE|TLB_V4_I_PAGE|TLB_V4_I_FULL) &&
cpumask_test_cpu(smp_processor_id(), mm_cpumask(vma->vm_mm))) {
- tlb_op(TLB_V3_PAGE, "c6, c0, 0", uaddr);
tlb_op(TLB_V4_U_PAGE, "c8, c7, 1", uaddr);
tlb_op(TLB_V4_D_PAGE, "c8, c6, 1", uaddr);
tlb_op(TLB_V4_I_PAGE, "c8, c5, 1", uaddr);
if (tlb_flag(TLB_WB))
dsb();
- tlb_op(TLB_V3_PAGE, "c6, c0, 0", kaddr);
tlb_op(TLB_V4_U_PAGE, "c8, c7, 1", kaddr);
tlb_op(TLB_V4_D_PAGE, "c8, c6, 1", kaddr);
tlb_op(TLB_V4_I_PAGE, "c8, c5, 1", kaddr);
isb();
}
+#ifdef CONFIG_ARM_ERRATA_798181
+static inline void dummy_flush_tlb_a15_erratum(void)
+{
+ /*
+ * Dummy TLBIMVAIS. Using the unmapped address 0 and ASID 0.
+ */
+ asm("mcr p15, 0, %0, c8, c3, 1" : : "r" (0));
+ dsb();
+}
+#else
+static inline void dummy_flush_tlb_a15_erratum(void)
+{
+}
+#endif
+
/*
* flush_pmd_entry
*
--- /dev/null
+#ifdef CONFIG_DEBUG_UNCOMPRESS
+extern void putc(int c);
+#else
+static inline void putc(int c) {}
+#endif
+static inline void flush(void) {}
+static inline void arch_decomp_setup(void) {}
#define KVM_ARM_FIQ_spsr fiq_regs[7]
struct kvm_regs {
- struct pt_regs usr_regs;/* R0_usr - R14_usr, PC, CPSR */
- __u32 svc_regs[3]; /* SP_svc, LR_svc, SPSR_svc */
- __u32 abt_regs[3]; /* SP_abt, LR_abt, SPSR_abt */
- __u32 und_regs[3]; /* SP_und, LR_und, SPSR_und */
- __u32 irq_regs[3]; /* SP_irq, LR_irq, SPSR_irq */
- __u32 fiq_regs[8]; /* R8_fiq - R14_fiq, SPSR_fiq */
+ struct pt_regs usr_regs; /* R0_usr - R14_usr, PC, CPSR */
+ unsigned long svc_regs[3]; /* SP_svc, LR_svc, SPSR_svc */
+ unsigned long abt_regs[3]; /* SP_abt, LR_abt, SPSR_abt */
+ unsigned long und_regs[3]; /* SP_und, LR_und, SPSR_und */
+ unsigned long irq_regs[3]; /* SP_irq, LR_irq, SPSR_irq */
+ unsigned long fiq_regs[8]; /* R8_fiq - R14_fiq, SPSR_fiq */
};
/* Supported Processor Types */
DEFINE(DMA_BIDIRECTIONAL, DMA_BIDIRECTIONAL);
DEFINE(DMA_TO_DEVICE, DMA_TO_DEVICE);
DEFINE(DMA_FROM_DEVICE, DMA_FROM_DEVICE);
+ BLANK();
+ DEFINE(CACHE_WRITEBACK_ORDER, __CACHE_WRITEBACK_ORDER);
+ DEFINE(CACHE_WRITEBACK_GRANULE, __CACHE_WRITEBACK_GRANULE);
+ BLANK();
#ifdef CONFIG_KVM_ARM_HOST
DEFINE(VCPU_KVM, offsetof(struct kvm_vcpu, kvm));
DEFINE(VCPU_MIDR, offsetof(struct kvm_vcpu, arch.midr));
DEFINE(VCPU_PC, offsetof(struct kvm_vcpu, arch.regs.usr_regs.ARM_pc));
DEFINE(VCPU_CPSR, offsetof(struct kvm_vcpu, arch.regs.usr_regs.ARM_cpsr));
DEFINE(VCPU_IRQ_LINES, offsetof(struct kvm_vcpu, arch.irq_lines));
- DEFINE(VCPU_HSR, offsetof(struct kvm_vcpu, arch.hsr));
- DEFINE(VCPU_HxFAR, offsetof(struct kvm_vcpu, arch.hxfar));
- DEFINE(VCPU_HPFAR, offsetof(struct kvm_vcpu, arch.hpfar));
- DEFINE(VCPU_HYP_PC, offsetof(struct kvm_vcpu, arch.hyp_pc));
+ DEFINE(VCPU_HSR, offsetof(struct kvm_vcpu, arch.fault.hsr));
+ DEFINE(VCPU_HxFAR, offsetof(struct kvm_vcpu, arch.fault.hxfar));
+ DEFINE(VCPU_HPFAR, offsetof(struct kvm_vcpu, arch.fault.hpfar));
+ DEFINE(VCPU_HYP_PC, offsetof(struct kvm_vcpu, arch.fault.hyp_pc));
#ifdef CONFIG_KVM_ARM_VGIC
DEFINE(VCPU_VGIC_CPU, offsetof(struct kvm_vcpu, arch.vgic_cpu));
DEFINE(VGIC_CPU_HCR, offsetof(struct vgic_cpu, vgic_hcr));
sys->busnr = busnr;
sys->swizzle = hw->swizzle;
sys->map_irq = hw->map_irq;
+ sys->align_resource = hw->align_resource;
INIT_LIST_HEAD(&sys->resources);
if (hw->private_data)
resource_size_t pcibios_align_resource(void *data, const struct resource *res,
resource_size_t size, resource_size_t align)
{
+ struct pci_dev *dev = data;
+ struct pci_sys_data *sys = dev->sysdata;
resource_size_t start = res->start;
if (res->flags & IORESOURCE_IO && start & 0x300)
start = (start + align - 1) & ~(align - 1);
+ if (sys->align_resource)
+ return sys->align_resource(dev, res, start, size, align);
+
return start;
}
svc_entry
mov r2, sp
dabt_helper
-
- @
- @ IRQs off again before pulling preserved data off the stack
- @
- disable_irq_notrace
-
-#ifdef CONFIG_TRACE_IRQFLAGS
- tst r5, #PSR_I_BIT
- bleq trace_hardirqs_on
- tst r5, #PSR_I_BIT
- blne trace_hardirqs_off
-#endif
svc_exit r5 @ return from exception
UNWIND(.fnend )
ENDPROC(__dabt_svc)
blne svc_preempt
#endif
-#ifdef CONFIG_TRACE_IRQFLAGS
- @ The parent context IRQs must have been enabled to get here in
- @ the first place, so there's no point checking the PSR I bit.
- bl trace_hardirqs_on
-#endif
- svc_exit r5 @ return from exception
+ svc_exit r5, irq = 1 @ return from exception
UNWIND(.fnend )
ENDPROC(__irq_svc)
mov r0, sp @ struct pt_regs *regs
bl __und_fault
- @
- @ IRQs off again before pulling preserved data off the stack
- @
__und_svc_finish:
- disable_irq_notrace
-
- @
- @ restore SPSR and restart the instruction
- @
ldr r5, [sp, #S_PSR] @ Get SVC cpsr
-#ifdef CONFIG_TRACE_IRQFLAGS
- tst r5, #PSR_I_BIT
- bleq trace_hardirqs_on
- tst r5, #PSR_I_BIT
- blne trace_hardirqs_off
-#endif
svc_exit r5 @ return from exception
UNWIND(.fnend )
ENDPROC(__und_svc)
svc_entry
mov r2, sp @ regs
pabt_helper
-
- @
- @ IRQs off again before pulling preserved data off the stack
- @
- disable_irq_notrace
-
-#ifdef CONFIG_TRACE_IRQFLAGS
- tst r5, #PSR_I_BIT
- bleq trace_hardirqs_on
- tst r5, #PSR_I_BIT
- blne trace_hardirqs_off
-#endif
svc_exit r5 @ return from exception
UNWIND(.fnend )
ENDPROC(__pabt_svc)
#ifdef CONFIG_IRQSOFF_TRACER
bl trace_hardirqs_off
#endif
+ ct_user_exit save = 0
.endm
.macro kuser_cmpxchg_check
ldr r1, [tsk, #TI_FLAGS]
tst r1, #_TIF_WORK_MASK
bne fast_work_pending
-#if defined(CONFIG_IRQSOFF_TRACER)
asm_trace_hardirqs_on
-#endif
/* perform architecture specific actions before user return */
arch_ret_to_user r1, lr
+ ct_user_enter
restore_user_regs fast = 1, offset = S_OFF
UNWIND(.fnend )
tst r1, #_TIF_WORK_MASK
bne work_pending
no_work_pending:
-#if defined(CONFIG_IRQSOFF_TRACER)
asm_trace_hardirqs_on
-#endif
+
/* perform architecture specific actions before user return */
arch_ret_to_user r1, lr
+ ct_user_enter save = 0
restore_user_regs fast = 0, offset = 0
ENDPROC(ret_to_user_from_irq)
*/
.macro mcount_enter
+/*
+ * This pad compensates for the push {lr} at the call site. Note that we are
+ * unable to unwind through a function which does not otherwise save its lr.
+ */
+ UNWIND(.pad #4)
stmdb sp!, {r0-r3, lr}
+ UNWIND(.save {r0-r3, lr})
.endm
.macro mcount_get_lr reg
.endm
ENTRY(__gnu_mcount_nc)
+UNWIND(.fnstart)
#ifdef CONFIG_DYNAMIC_FTRACE
mov ip, lr
ldmia sp!, {lr}
#else
__mcount
#endif
+UNWIND(.fnend)
ENDPROC(__gnu_mcount_nc)
#ifdef CONFIG_DYNAMIC_FTRACE
ENTRY(ftrace_caller)
+UNWIND(.fnstart)
__ftrace_caller
+UNWIND(.fnend)
ENDPROC(ftrace_caller)
#endif
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
ENTRY(ftrace_graph_caller)
+UNWIND(.fnstart)
__ftrace_graph_caller
+UNWIND(.fnend)
ENDPROC(ftrace_graph_caller)
#endif
mcr p15, 0, ip, c1, c0 @ update control register
#endif
enable_irq
+ ct_user_exit
get_thread_info tsk
adr tbl, sys_call_table @ load syscall table pointer
.endm
#ifndef CONFIG_THUMB2_KERNEL
- .macro svc_exit, rpsr
+ .macro svc_exit, rpsr, irq = 0
+ .if \irq != 0
+ @ IRQs already off
+#ifdef CONFIG_TRACE_IRQFLAGS
+ @ The parent context IRQs must have been enabled to get here in
+ @ the first place, so there's no point checking the PSR I bit.
+ bl trace_hardirqs_on
+#endif
+ .else
+ @ IRQs off again before pulling preserved data off the stack
+ disable_irq_notrace
+#ifdef CONFIG_TRACE_IRQFLAGS
+ tst \rpsr, #PSR_I_BIT
+ bleq trace_hardirqs_on
+ tst \rpsr, #PSR_I_BIT
+ blne trace_hardirqs_off
+#endif
+ .endif
msr spsr_cxsf, \rpsr
#if defined(CONFIG_CPU_V6)
ldr r0, [sp]
mov pc, \reg
.endm
#else /* CONFIG_THUMB2_KERNEL */
- .macro svc_exit, rpsr
+ .macro svc_exit, rpsr, irq = 0
+ .if \irq != 0
+ @ IRQs already off
+#ifdef CONFIG_TRACE_IRQFLAGS
+ @ The parent context IRQs must have been enabled to get here in
+ @ the first place, so there's no point checking the PSR I bit.
+ bl trace_hardirqs_on
+#endif
+ .else
+ @ IRQs off again before pulling preserved data off the stack
+ disable_irq_notrace
+#ifdef CONFIG_TRACE_IRQFLAGS
+ tst \rpsr, #PSR_I_BIT
+ bleq trace_hardirqs_on
+ tst \rpsr, #PSR_I_BIT
+ blne trace_hardirqs_off
+#endif
+ .endif
ldr lr, [sp, #S_SP] @ top of the stack
ldrd r0, r1, [sp, #S_LR] @ calling lr and pc
clrex @ clear the exclusive monitor
.endm
#endif /* !CONFIG_THUMB2_KERNEL */
+/*
+ * Context tracking subsystem. Used to instrument transitions
+ * between user and kernel mode.
+ */
+ .macro ct_user_exit, save = 1
+#ifdef CONFIG_CONTEXT_TRACKING
+ .if \save
+ stmdb sp!, {r0-r3, ip, lr}
+ bl user_exit
+ ldmia sp!, {r0-r3, ip, lr}
+ .else
+ bl user_exit
+ .endif
+#endif
+ .endm
+
+ .macro ct_user_enter, save = 1
+#ifdef CONFIG_CONTEXT_TRACKING
+ .if \save
+ stmdb sp!, {r0-r3, ip, lr}
+ bl user_enter
+ ldmia sp!, {r0-r3, ip, lr}
+ .else
+ bl user_enter
+ .endif
+#endif
+ .endm
+
/*
* These are the registers used in the syscall handler, and allow us to
* have in theory up to 7 arguments to a function - r0 to r6.
str r9, [r4] @ Save processor ID
str r1, [r5] @ Save machine type
str r2, [r6] @ Save atags pointer
- bic r4, r0, #CR_A @ Clear 'A' bit
- stmia r7, {r0, r4} @ Save control register values
+ cmp r7, #0
+ bicne r4, r0, #CR_A @ Clear 'A' bit
+ stmneia r7, {r0, r4} @ Save control register values
b start_kernel
ENDPROC(__mmap_switched)
.long processor_id @ r4
.long __machine_arch_type @ r5
.long __atags_pointer @ r6
+#ifdef CONFIG_CPU_CP15
.long cr_alignment @ r7
+#else
+ .long 0 @ r7
+#endif
.long init_thread_union + THREAD_START_SP @ sp
.size __mmap_switched_data, . - __mmap_switched_data
* numbers for r1.
*
*/
- .arm
__HEAD
+
+#ifdef CONFIG_CPU_THUMBONLY
+ .thumb
+ENTRY(stext)
+#else
+ .arm
ENTRY(stext)
THUMB( adr r9, BSYM(1f) ) @ Kernel is always entered in ARM.
THUMB( bx r9 ) @ If this is a Thumb-2 kernel,
THUMB( .thumb ) @ switch to Thumb now.
THUMB(1: )
+#endif
setmode PSR_F_BIT | PSR_I_BIT | SVC_MODE, r9 @ ensure svc mode
@ and irqs disabled
addne r6, r6, #1 << SECTION_SHIFT
strne r6, [r3]
-#if defined(CONFIG_LPAE) && defined(CONFIG_CPU_ENDIAN_BE8)
+#if defined(CONFIG_ARM_LPAE) && defined(CONFIG_CPU_ENDIAN_BE8)
sub r4, r4, #4 @ Fixup page table pointer
@ for 64-bit descriptors
#endif
}
if (err) {
- pr_warning("CPU %d debug is powered down!\n", cpu);
+ pr_warn_once("CPU %d debug is powered down!\n", cpu);
cpumask_or(&debug_err_mask, &debug_err_mask, cpumask_of(cpu));
return;
}
isb();
if (cpumask_intersects(&debug_err_mask, cpumask_of(cpu))) {
- pr_warning("CPU %d failed to disable vector catch\n", cpu);
+ pr_warn_once("CPU %d failed to disable vector catch\n", cpu);
return;
}
}
if (cpumask_intersects(&debug_err_mask, cpumask_of(cpu))) {
- pr_warning("CPU %d failed to clear debug register pairs\n", cpu);
+ pr_warn_once("CPU %d failed to clear debug register pairs\n", cpu);
return;
}
return NOTIFY_OK;
}
-static struct notifier_block __cpuinitdata dbg_cpu_pm_nb = {
+static struct notifier_block dbg_cpu_pm_nb = {
.notifier_call = dbg_cpu_pm_notify,
};
struct arm_pmu *armpmu = to_arm_pmu(event->pmu);
struct pmu *leader_pmu = event->group_leader->pmu;
- if (event->pmu != leader_pmu || event->state <= PERF_EVENT_STATE_OFF)
+ if (event->pmu != leader_pmu || event->state < PERF_EVENT_STATE_OFF)
+ return 1;
+
+ if (event->state == PERF_EVENT_STATE_OFF && !event->attr.enable_on_exec)
return 1;
return armpmu->get_event_idx(hw_events, event) >= 0;
struct return_address_data *data = d;
if (!data->level) {
- data->addr = (void *)frame->lr;
+ data->addr = (void *)frame->pc;
return 1;
} else {
struct stackframe frame;
register unsigned long current_sp asm ("sp");
- data.level = level + 1;
+ data.level = level + 2;
+ data.addr = NULL;
frame.fp = (unsigned long)__builtin_frame_address(0);
frame.sp = current_sp;
static u32 __read_mostly (*read_sched_clock)(void) = jiffy_sched_clock_read;
-static inline u64 cyc_to_ns(u64 cyc, u32 mult, u32 shift)
+static inline u64 notrace cyc_to_ns(u64 cyc, u32 mult, u32 shift)
{
return (cyc * mult) >> shift;
}
-static unsigned long long cyc_to_sched_clock(u32 cyc, u32 mask)
+static unsigned long long notrace cyc_to_sched_clock(u32 cyc, u32 mask)
{
u64 epoch_ns;
u32 epoch_cyc;
#include <asm/virt.h>
#include "atags.h"
-#include "tcm.h"
#if defined(CONFIG_FPE_NWFPE) || defined(CONFIG_FPE_FASTFPE)
static void __init cacheid_init(void)
{
- unsigned int cachetype = read_cpuid_cachetype();
unsigned int arch = cpu_architecture();
if (arch >= CPU_ARCH_ARMv6) {
+ unsigned int cachetype = read_cpuid_cachetype();
if ((cachetype & (7 << 29)) == 4 << 29) {
/* ARMv7 register format */
arch = CPU_ARCH_ARMv7;
printk("%s", buf);
}
+static void __init cpuid_init_hwcaps(void)
+{
+ unsigned int divide_instrs;
+
+ if (cpu_architecture() < CPU_ARCH_ARMv7)
+ return;
+
+ divide_instrs = (read_cpuid_ext(CPUID_EXT_ISAR0) & 0x0f000000) >> 24;
+
+ switch (divide_instrs) {
+ case 2:
+ elf_hwcap |= HWCAP_IDIVA;
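+		/* fall through: SDIV/UDIV in ARM state implies Thumb support too */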
+ case 1:
+ elf_hwcap |= HWCAP_IDIVT;
+ }
+}
+
static void __init feat_v6_fixup(void)
{
int id = read_cpuid_id();
*
* cpu_init sets up the per-CPU stacks.
*/
-void cpu_init(void)
+void notrace cpu_init(void)
{
unsigned int cpu = smp_processor_id();
struct stack *stk = &stacks[cpu];
snprintf(elf_platform, ELF_PLATFORM_SIZE, "%s%c",
list->elf_name, ENDIANNESS);
elf_hwcap = list->elf_hwcap;
+
+ cpuid_init_hwcaps();
+
#ifndef CONFIG_ARM_THUMB
- elf_hwcap &= ~HWCAP_THUMB;
+ elf_hwcap &= ~(HWCAP_THUMB | HWCAP_IDIVT);
#endif
feat_v6_fixup();
size -= start & ~PAGE_MASK;
bank->start = PAGE_ALIGN(start);
-#ifndef CONFIG_LPAE
+#ifndef CONFIG_ARM_LPAE
if (bank->start + size < bank->start) {
printk(KERN_CRIT "Truncating memory at 0x%08llx to fit in "
"32-bit physical address space\n", (long long)start);
reserve_crashkernel();
- tcm_init();
-
#ifdef CONFIG_MULTI_IRQ_HANDLER
handle_arch_irq = mdesc->handle_irq;
#endif
}
printk(KERN_NOTICE "CPU%u: shutdown\n", cpu);
+ /*
+ * platform_cpu_kill() is generally expected to do the powering off
+ * and/or cutting of clocks to the dying CPU. Optionally, this may
+ * be done by the CPU which is dying in preference to supporting
+ * this call, but that means there is _no_ synchronisation between
+ * the requesting CPU and the dying CPU actually losing power.
+ */
if (!platform_cpu_kill(cpu))
printk("CPU%u: unable to kill\n", cpu);
}
idle_task_exit();
local_irq_disable();
- mb();
- /* Tell __cpu_die() that this CPU is now safe to dispose of */
+ /*
+ * Flush the data out of the L1 cache for this CPU. This must be
+ * before the completion to ensure that data is safely written out
+ * before platform_cpu_kill() gets called - which may disable
+ * *this* CPU and power down its cache.
+ */
+ flush_cache_louis();
+
+ /*
+ * Tell __cpu_die() that this CPU is now safe to dispose of. Once
+ * this returns, power and/or clocks can be removed at any point
+ * from this CPU and its cache by platform_cpu_kill().
+ */
RCU_NONIDLE(complete(&cpu_died));
/*
- * actual CPU shutdown procedure is at least platform (if not
- * CPU) specific.
+ * Ensure that the cache lines associated with that completion are
+ * written out. This covers the case where _this_ CPU is doing the
+ * powering down, to ensure that the completion is visible to the
+ * CPU waiting for this one.
+ */
+ flush_cache_louis();
+
+ /*
+ * The actual CPU shutdown procedure is at least platform (if not
+ * CPU) specific. This may remove power, or it may simply spin.
+ *
+ * Platforms are generally expected *NOT* to return from this call,
+ * although there are some which do because they have no way to
+ * power down the CPU. These platforms are the _only_ reason we
+ * have a return path which uses the fragment of assembly below.
+ *
+ * The return path should not be used for platforms which can
+ * power off the CPU.
*/
if (smp_ops.cpu_die)
smp_ops.cpu_die(cpu);
if (freq->flags & CPUFREQ_CONST_LOOPS)
return NOTIFY_OK;
- if (arm_delay_ops.const_clock)
- return NOTIFY_OK;
-
if (!per_cpu(l_p_j_ref, cpu)) {
per_cpu(l_p_j_ref, cpu) =
per_cpu(cpu_data, cpu).loops_per_jiffy;
#ifdef CONFIG_ARM_ERRATA_764369
/* Cortex-A9 only */
- if ((read_cpuid(CPUID_ID) & 0xff0ffff0) == 0x410fc090) {
+ if ((read_cpuid_id() & 0xff0ffff0) == 0x410fc090) {
scu_ctrl = __raw_readl(scu_base + 0x30);
if (!(scu_ctrl & 1))
__raw_writel(scu_ctrl | 0x1, scu_base + 0x30);
#include <asm/smp_plat.h>
#include <asm/tlbflush.h>
+#include <asm/mmu_context.h>
/**********************************************************************/
local_flush_bp_all();
}
+#ifdef CONFIG_ARM_ERRATA_798181
+static int erratum_a15_798181(void)
+{
+ unsigned int midr = read_cpuid_id();
+
+ /* Cortex-A15 r0p0..r3p2 affected */
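+	/* MIDR[23:20] is the variant (rN) and MIDR[3:0] the revision (pN) */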
+ if ((midr & 0xff0ffff0) != 0x410fc0f0 || midr > 0x413fc0f2)
+ return 0;
+ return 1;
+}
+#else
+static int erratum_a15_798181(void)
+{
+ return 0;
+}
+#endif
+
+static void ipi_flush_tlb_a15_erratum(void *arg)
+{
+ dmb();
+}
+
+static void broadcast_tlb_a15_erratum(void)
+{
+ if (!erratum_a15_798181())
+ return;
+
+ dummy_flush_tlb_a15_erratum();
+ smp_call_function(ipi_flush_tlb_a15_erratum, NULL, 1);
+}
+
+static void broadcast_tlb_mm_a15_erratum(struct mm_struct *mm)
+{
+ int cpu, this_cpu;
+ cpumask_t mask = { CPU_BITS_NONE };
+
+ if (!erratum_a15_798181())
+ return;
+
+ dummy_flush_tlb_a15_erratum();
+ this_cpu = get_cpu();
+ for_each_online_cpu(cpu) {
+ if (cpu == this_cpu)
+ continue;
+ /*
+ * We only need to send an IPI if the other CPUs are running
+ * the same ASID as the one being invalidated. There is no
+ * need for locking around the active_asids check since the
+ * switch_mm() function has at least one dmb() (as required by
+ * this workaround) in case a context switch happens on
+ * another CPU after the condition below.
+ */
+ if (atomic64_read(&mm->context.id) ==
+ atomic64_read(&per_cpu(active_asids, cpu)))
+ cpumask_set_cpu(cpu, &mask);
+ }
+ smp_call_function_many(&mask, ipi_flush_tlb_a15_erratum, NULL, 1);
+ put_cpu();
+}
+
void flush_tlb_all(void)
{
if (tlb_ops_need_broadcast())
on_each_cpu(ipi_flush_tlb_all, NULL, 1);
else
local_flush_tlb_all();
+ broadcast_tlb_a15_erratum();
}
void flush_tlb_mm(struct mm_struct *mm)
on_each_cpu_mask(mm_cpumask(mm), ipi_flush_tlb_mm, mm, 1);
else
local_flush_tlb_mm(mm);
+ broadcast_tlb_mm_a15_erratum(mm);
}
void flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
&ta, 1);
} else
local_flush_tlb_page(vma, uaddr);
+ broadcast_tlb_mm_a15_erratum(vma->vm_mm);
}
void flush_tlb_kernel_page(unsigned long kaddr)
on_each_cpu(ipi_flush_tlb_kernel_page, &ta, 1);
} else
local_flush_tlb_kernel_page(kaddr);
+ broadcast_tlb_a15_erratum();
}
void flush_tlb_range(struct vm_area_struct *vma,
&ta, 1);
} else
local_flush_tlb_range(vma, start, end);
+ broadcast_tlb_mm_a15_erratum(vma->vm_mm);
}
void flush_tlb_kernel_range(unsigned long start, unsigned long end)
on_each_cpu(ipi_flush_tlb_kernel_range, &ta, 1);
} else
local_flush_tlb_kernel_range(start, end);
+ broadcast_tlb_a15_erratum();
}
void flush_bp_all(void)
#include <asm/mach/map.h>
#include <asm/memory.h>
#include <asm/system_info.h>
-#include "tcm.h"
static struct gen_pool *tcm_pool;
static bool dtcm_present;
+++ /dev/null
-/*
- * Copyright (C) 2008-2009 ST-Ericsson AB
- * License terms: GNU General Public License (GPL) version 2
- * TCM memory handling for ARM systems
- *
- * Author: Linus Walleij <linus.walleij@stericsson.com>
- * Author: Rickard Andersson <rickard.andersson@stericsson.com>
- */
-
-#ifdef CONFIG_HAVE_TCM
-void __init tcm_init(void);
-#else
-/* No TCM support, just blank inlines to be optimized out */
-inline void tcm_init(void)
-{
-}
-#endif
kvm-arm-y = $(addprefix ../../../virt/kvm/, kvm_main.o coalesced_mmio.o)
obj-y += kvm-arm.o init.o interrupts.o
-obj-y += arm.o guest.o mmu.o emulate.o reset.o
+obj-y += arm.o handle_exit.o guest.o mmu.o emulate.o reset.o
obj-y += coproc.o coproc_a15.o mmio.o psci.o
obj-$(CONFIG_KVM_ARM_VGIC) += vgic.o
obj-$(CONFIG_KVM_ARM_TIMER) += arch_timer.o
#define CREATE_TRACE_POINTS
#include "trace.h"
-#include <asm/unified.h>
#include <asm/uaccess.h>
#include <asm/ptrace.h>
#include <asm/mman.h>
-#include <asm/cputype.h>
#include <asm/tlbflush.h>
#include <asm/cacheflush.h>
#include <asm/virt.h>
#include <asm/kvm_emulate.h>
#include <asm/kvm_coproc.h>
#include <asm/kvm_psci.h>
-#include <asm/opcodes.h>
#ifdef REQUIRES_VIRT
__asm__(".arch_extension virt");
#endif
static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
-static struct vfp_hard_struct __percpu *kvm_host_vfp_state;
+static kvm_kernel_vfp_t __percpu *kvm_host_vfp_state;
static unsigned long hyp_default_vectors;
/* Per-CPU variable containing the currently running vcpu. */
break;
case KVM_CAP_ARM_SET_DEVICE_ADDR:
r = 1;
+ break;
case KVM_CAP_NR_VCPUS:
r = num_online_cpus();
break;
return 0;
}
-int __attribute_const__ kvm_target_cpu(void)
-{
- unsigned long implementor = read_cpuid_implementor();
- unsigned long part_number = read_cpuid_part_number();
-
- if (implementor != ARM_CPU_IMP_ARM)
- return -EINVAL;
-
- switch (part_number) {
- case ARM_CPU_PART_CORTEX_A15:
- return KVM_ARM_TARGET_CORTEX_A15;
- default:
- return -EINVAL;
- }
-}
-
int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
{
int ret;
spin_unlock(&kvm_vmid_lock);
}
-static int handle_svc_hyp(struct kvm_vcpu *vcpu, struct kvm_run *run)
-{
- /* SVC called from Hyp mode should never get here */
- kvm_debug("SVC called from Hyp mode shouldn't go here\n");
- BUG();
- return -EINVAL; /* Squash warning */
-}
-
-static int handle_hvc(struct kvm_vcpu *vcpu, struct kvm_run *run)
-{
- trace_kvm_hvc(*vcpu_pc(vcpu), *vcpu_reg(vcpu, 0),
- vcpu->arch.hsr & HSR_HVC_IMM_MASK);
-
- if (kvm_psci_call(vcpu))
- return 1;
-
- kvm_inject_undefined(vcpu);
- return 1;
-}
-
-static int handle_smc(struct kvm_vcpu *vcpu, struct kvm_run *run)
-{
- if (kvm_psci_call(vcpu))
- return 1;
-
- kvm_inject_undefined(vcpu);
- return 1;
-}
-
-static int handle_pabt_hyp(struct kvm_vcpu *vcpu, struct kvm_run *run)
-{
- /* The hypervisor should never cause aborts */
- kvm_err("Prefetch Abort taken from Hyp mode at %#08x (HSR: %#08x)\n",
- vcpu->arch.hxfar, vcpu->arch.hsr);
- return -EFAULT;
-}
-
-static int handle_dabt_hyp(struct kvm_vcpu *vcpu, struct kvm_run *run)
-{
- /* This is either an error in the ws. code or an external abort */
- kvm_err("Data Abort taken from Hyp mode at %#08x (HSR: %#08x)\n",
- vcpu->arch.hxfar, vcpu->arch.hsr);
- return -EFAULT;
-}
-
-typedef int (*exit_handle_fn)(struct kvm_vcpu *, struct kvm_run *);
-static exit_handle_fn arm_exit_handlers[] = {
- [HSR_EC_WFI] = kvm_handle_wfi,
- [HSR_EC_CP15_32] = kvm_handle_cp15_32,
- [HSR_EC_CP15_64] = kvm_handle_cp15_64,
- [HSR_EC_CP14_MR] = kvm_handle_cp14_access,
- [HSR_EC_CP14_LS] = kvm_handle_cp14_load_store,
- [HSR_EC_CP14_64] = kvm_handle_cp14_access,
- [HSR_EC_CP_0_13] = kvm_handle_cp_0_13_access,
- [HSR_EC_CP10_ID] = kvm_handle_cp10_id,
- [HSR_EC_SVC_HYP] = handle_svc_hyp,
- [HSR_EC_HVC] = handle_hvc,
- [HSR_EC_SMC] = handle_smc,
- [HSR_EC_IABT] = kvm_handle_guest_abort,
- [HSR_EC_IABT_HYP] = handle_pabt_hyp,
- [HSR_EC_DABT] = kvm_handle_guest_abort,
- [HSR_EC_DABT_HYP] = handle_dabt_hyp,
-};
-
-/*
- * A conditional instruction is allowed to trap, even though it
- * wouldn't be executed. So let's re-implement the hardware, in
- * software!
- */
-static bool kvm_condition_valid(struct kvm_vcpu *vcpu)
-{
- unsigned long cpsr, cond, insn;
-
- /*
- * Exception Code 0 can only happen if we set HCR.TGE to 1, to
- * catch undefined instructions, and then we won't get past
- * the arm_exit_handlers test anyway.
- */
- BUG_ON(((vcpu->arch.hsr & HSR_EC) >> HSR_EC_SHIFT) == 0);
-
- /* Top two bits non-zero? Unconditional. */
- if (vcpu->arch.hsr >> 30)
- return true;
-
- cpsr = *vcpu_cpsr(vcpu);
-
- /* Is condition field valid? */
- if ((vcpu->arch.hsr & HSR_CV) >> HSR_CV_SHIFT)
- cond = (vcpu->arch.hsr & HSR_COND) >> HSR_COND_SHIFT;
- else {
- /* This can happen in Thumb mode: examine IT state. */
- unsigned long it;
-
- it = ((cpsr >> 8) & 0xFC) | ((cpsr >> 25) & 0x3);
-
- /* it == 0 => unconditional. */
- if (it == 0)
- return true;
-
- /* The cond for this insn works out as the top 4 bits. */
- cond = (it >> 4);
- }
-
- /* Shift makes it look like an ARM-mode instruction */
- insn = cond << 28;
- return arm_check_condition(insn, cpsr) != ARM_OPCODE_CONDTEST_FAIL;
-}
-
-/*
- * Return > 0 to return to guest, < 0 on error, 0 (and set exit_reason) on
- * proper exit to QEMU.
- */
-static int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
- int exception_index)
-{
- unsigned long hsr_ec;
-
- switch (exception_index) {
- case ARM_EXCEPTION_IRQ:
- return 1;
- case ARM_EXCEPTION_UNDEFINED:
- kvm_err("Undefined exception in Hyp mode at: %#08x\n",
- vcpu->arch.hyp_pc);
- BUG();
- panic("KVM: Hypervisor undefined exception!\n");
- case ARM_EXCEPTION_DATA_ABORT:
- case ARM_EXCEPTION_PREF_ABORT:
- case ARM_EXCEPTION_HVC:
- hsr_ec = (vcpu->arch.hsr & HSR_EC) >> HSR_EC_SHIFT;
-
- if (hsr_ec >= ARRAY_SIZE(arm_exit_handlers)
- || !arm_exit_handlers[hsr_ec]) {
- kvm_err("Unkown exception class: %#08lx, "
- "hsr: %#08x\n", hsr_ec,
- (unsigned int)vcpu->arch.hsr);
- BUG();
- }
-
- /*
- * See ARM ARM B1.14.1: "Hyp traps on instructions
- * that fail their condition code check"
- */
- if (!kvm_condition_valid(vcpu)) {
- bool is_wide = vcpu->arch.hsr & HSR_IL;
- kvm_skip_instr(vcpu, is_wide);
- return 1;
- }
-
- return arm_exit_handlers[hsr_ec](vcpu, run);
- default:
- kvm_pr_unimpl("Unsupported exception type: %d",
- exception_index);
- run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
- return 0;
- }
-}
-
static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
{
if (likely(vcpu->arch.has_run_once))
static void cpu_init_hyp_mode(void *vector)
{
unsigned long long pgd_ptr;
- unsigned long pgd_low, pgd_high;
unsigned long hyp_stack_ptr;
unsigned long stack_page;
unsigned long vector_ptr;
__hyp_set_vectors((unsigned long)vector);
pgd_ptr = (unsigned long long)kvm_mmu_get_httbr();
- pgd_low = (pgd_ptr & ((1ULL << 32) - 1));
- pgd_high = (pgd_ptr >> 32ULL);
stack_page = __get_cpu_var(kvm_arm_hyp_stack_page);
hyp_stack_ptr = stack_page + PAGE_SIZE;
vector_ptr = (unsigned long)__kvm_hyp_vector;
- /*
- * Call initialization code, and switch to the full blown
- * HYP code. The init code doesn't need to preserve these registers as
- * r1-r3 and r12 are already callee save according to the AAPCS.
- * Note that we slightly misuse the prototype by casing the pgd_low to
- * a void *.
- */
- kvm_call_hyp((void *)pgd_low, pgd_high, hyp_stack_ptr, vector_ptr);
+ __cpu_init_hyp_mode(pgd_ptr, hyp_stack_ptr, vector_ptr);
}
/**
/*
* Map the host VFP structures
*/
- kvm_host_vfp_state = alloc_percpu(struct vfp_hard_struct);
+ kvm_host_vfp_state = alloc_percpu(kvm_kernel_vfp_t);
if (!kvm_host_vfp_state) {
err = -ENOMEM;
kvm_err("Cannot allocate host VFP state\n");
}
for_each_possible_cpu(cpu) {
- struct vfp_hard_struct *vfp;
+ kvm_kernel_vfp_t *vfp;
vfp = per_cpu_ptr(kvm_host_vfp_state, cpu);
err = create_hyp_mappings(vfp, vfp + 1);
const struct coproc_params *p,
const struct coproc_reg *r)
{
- u32 val;
+ unsigned long val;
int cpu;
- cpu = get_cpu();
-
if (!p->is_write)
return read_from_write_only(vcpu, p);
+ cpu = get_cpu();
+
cpumask_setall(&vcpu->arch.require_dcache_flush);
cpumask_clear_cpu(cpu, &vcpu->arch.require_dcache_flush);
if (likely(r->access(vcpu, params, r))) {
/* Skip instruction, since it was emulated */
- kvm_skip_instr(vcpu, (vcpu->arch.hsr >> 25) & 1);
+ kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
return 1;
}
/* If access function fails, it should complain. */
} else {
- kvm_err("Unsupported guest CP15 access at: %08x\n",
+ kvm_err("Unsupported guest CP15 access at: %08lx\n",
*vcpu_pc(vcpu));
print_cp_instr(params);
}
{
struct coproc_params params;
- params.CRm = (vcpu->arch.hsr >> 1) & 0xf;
- params.Rt1 = (vcpu->arch.hsr >> 5) & 0xf;
- params.is_write = ((vcpu->arch.hsr & 1) == 0);
+ params.CRm = (kvm_vcpu_get_hsr(vcpu) >> 1) & 0xf;
+ params.Rt1 = (kvm_vcpu_get_hsr(vcpu) >> 5) & 0xf;
+ params.is_write = ((kvm_vcpu_get_hsr(vcpu) & 1) == 0);
params.is_64bit = true;
- params.Op1 = (vcpu->arch.hsr >> 16) & 0xf;
+ params.Op1 = (kvm_vcpu_get_hsr(vcpu) >> 16) & 0xf;
params.Op2 = 0;
- params.Rt2 = (vcpu->arch.hsr >> 10) & 0xf;
+ params.Rt2 = (kvm_vcpu_get_hsr(vcpu) >> 10) & 0xf;
params.CRn = 0;
	return emulate_cp15(vcpu, &params);
{
struct coproc_params params;
- params.CRm = (vcpu->arch.hsr >> 1) & 0xf;
- params.Rt1 = (vcpu->arch.hsr >> 5) & 0xf;
- params.is_write = ((vcpu->arch.hsr & 1) == 0);
+ params.CRm = (kvm_vcpu_get_hsr(vcpu) >> 1) & 0xf;
+ params.Rt1 = (kvm_vcpu_get_hsr(vcpu) >> 5) & 0xf;
+ params.is_write = ((kvm_vcpu_get_hsr(vcpu) & 1) == 0);
params.is_64bit = false;
- params.CRn = (vcpu->arch.hsr >> 10) & 0xf;
- params.Op1 = (vcpu->arch.hsr >> 14) & 0x7;
- params.Op2 = (vcpu->arch.hsr >> 17) & 0x7;
+ params.CRn = (kvm_vcpu_get_hsr(vcpu) >> 10) & 0xf;
+ params.Op1 = (kvm_vcpu_get_hsr(vcpu) >> 14) & 0x7;
+ params.Op2 = (kvm_vcpu_get_hsr(vcpu) >> 17) & 0x7;
params.Rt2 = 0;
	return emulate_cp15(vcpu, &params);
static inline bool write_to_read_only(struct kvm_vcpu *vcpu,
const struct coproc_params *params)
{
- kvm_debug("CP15 write to read-only register at: %08x\n",
+ kvm_debug("CP15 write to read-only register at: %08lx\n",
*vcpu_pc(vcpu));
print_cp_instr(params);
return false;
static inline bool read_from_write_only(struct kvm_vcpu *vcpu,
const struct coproc_params *params)
{
- kvm_debug("CP15 read to write-only register at: %08x\n",
+ kvm_debug("CP15 read to write-only register at: %08lx\n",
*vcpu_pc(vcpu));
print_cp_instr(params);
return false;
#include <linux/kvm_host.h>
#include <asm/kvm_arm.h>
#include <asm/kvm_emulate.h>
+#include <asm/opcodes.h>
#include <trace/events/kvm.h>
#include "trace.h"
* Return a pointer to the register number valid in the current mode of
* the virtual CPU.
*/
-u32 *vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num)
+unsigned long *vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num)
{
- u32 *reg_array = (u32 *)&vcpu->arch.regs;
- u32 mode = *vcpu_cpsr(vcpu) & MODE_MASK;
+ unsigned long *reg_array = (unsigned long *)&vcpu->arch.regs;
+ unsigned long mode = *vcpu_cpsr(vcpu) & MODE_MASK;
switch (mode) {
case USR_MODE...SVC_MODE:
/*
* Return the SPSR for the current mode of the virtual CPU.
*/
-u32 *vcpu_spsr(struct kvm_vcpu *vcpu)
+unsigned long *vcpu_spsr(struct kvm_vcpu *vcpu)
{
- u32 mode = *vcpu_cpsr(vcpu) & MODE_MASK;
+ unsigned long mode = *vcpu_cpsr(vcpu) & MODE_MASK;
switch (mode) {
case SVC_MODE:
return &vcpu->arch.regs.KVM_ARM_SVC_spsr;
}
}
-/**
- * kvm_handle_wfi - handle a wait-for-interrupts instruction executed by a guest
- * @vcpu: the vcpu pointer
- * @run: the kvm_run structure pointer
- *
- * Simply sets the wait_for_interrupts flag on the vcpu structure, which will
- * halt execution of world-switches and schedule other host processes until
- * there is an incoming IRQ or FIQ to the VM.
+/*
+ * A conditional instruction is allowed to trap, even though it
+ * wouldn't be executed. So let's re-implement the hardware, in
+ * software!
*/
-int kvm_handle_wfi(struct kvm_vcpu *vcpu, struct kvm_run *run)
+bool kvm_condition_valid(struct kvm_vcpu *vcpu)
{
- trace_kvm_wfi(*vcpu_pc(vcpu));
- kvm_vcpu_block(vcpu);
- return 1;
+ unsigned long cpsr, cond, insn;
+
+ /*
+ * Exception Code 0 can only happen if we set HCR.TGE to 1, to
+ * catch undefined instructions, and then we won't get past
+ * the arm_exit_handlers test anyway.
+ */
+ BUG_ON(!kvm_vcpu_trap_get_class(vcpu));
+
+ /* Top two bits non-zero? Unconditional. */
+ if (kvm_vcpu_get_hsr(vcpu) >> 30)
+ return true;
+
+ cpsr = *vcpu_cpsr(vcpu);
+
+ /* Is condition field valid? */
+ if ((kvm_vcpu_get_hsr(vcpu) & HSR_CV) >> HSR_CV_SHIFT)
+ cond = (kvm_vcpu_get_hsr(vcpu) & HSR_COND) >> HSR_COND_SHIFT;
+ else {
+ /* This can happen in Thumb mode: examine IT state. */
+ unsigned long it;
+
+ it = ((cpsr >> 8) & 0xFC) | ((cpsr >> 25) & 0x3);
+
+ /* it == 0 => unconditional. */
+ if (it == 0)
+ return true;
+
+ /* The cond for this insn works out as the top 4 bits. */
+ cond = (it >> 4);
+ }
+
+ /* Shift makes it look like an ARM-mode instruction */
+ insn = cond << 28;
+ return arm_check_condition(insn, cpsr) != ARM_OPCODE_CONDTEST_FAIL;
}
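
The reconstructed condition check above boils down to reading the condition nibble (from the HSR, or from the IT state in Thumb mode) and testing it against the CPSR flags, which is what arm_check_condition() is asked to do. A minimal sketch of that test, assuming nothing about the kernel helper beyond its purpose; the names below are illustrative only:

#include <stdbool.h>
#include <stdint.h>

#define PSR_Z_BIT (1U << 30)	/* the Z flag lives in CPSR bit 30 */

static bool cond_passes(uint32_t cond, uint32_t cpsr)
{
	switch (cond) {
	case 0x0:			/* EQ: passes when Z is set */
		return cpsr & PSR_Z_BIT;
	case 0x1:			/* NE: passes when Z is clear */
		return !(cpsr & PSR_Z_BIT);
	/* ... the remaining condition encodings elided ... */
	default:			/* 0xe (AL): always executes */
		return true;
	}
}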
/**
*/
void kvm_inject_undefined(struct kvm_vcpu *vcpu)
{
- u32 new_lr_value;
- u32 new_spsr_value;
- u32 cpsr = *vcpu_cpsr(vcpu);
+ unsigned long new_lr_value;
+ unsigned long new_spsr_value;
+ unsigned long cpsr = *vcpu_cpsr(vcpu);
u32 sctlr = vcpu->arch.cp15[c1_SCTLR];
bool is_thumb = (cpsr & PSR_T_BIT);
u32 vect_offset = 4;
*/
static void inject_abt(struct kvm_vcpu *vcpu, bool is_pabt, unsigned long addr)
{
- u32 new_lr_value;
- u32 new_spsr_value;
- u32 cpsr = *vcpu_cpsr(vcpu);
+ unsigned long new_lr_value;
+ unsigned long new_spsr_value;
+ unsigned long cpsr = *vcpu_cpsr(vcpu);
u32 sctlr = vcpu->arch.cp15[c1_SCTLR];
bool is_thumb = (cpsr & PSR_T_BIT);
u32 vect_offset;
#include <linux/module.h>
#include <linux/vmalloc.h>
#include <linux/fs.h>
+#include <asm/cputype.h>
#include <asm/uaccess.h>
#include <asm/kvm.h>
#include <asm/kvm_asm.h>
return -EINVAL;
}
+int __attribute_const__ kvm_target_cpu(void)
+{
+ unsigned long implementor = read_cpuid_implementor();
+ unsigned long part_number = read_cpuid_part_number();
+
+ if (implementor != ARM_CPU_IMP_ARM)
+ return -EINVAL;
+
+ switch (part_number) {
+ case ARM_CPU_PART_CORTEX_A15:
+ return KVM_ARM_TARGET_CORTEX_A15;
+ default:
+ return -EINVAL;
+ }
+}
+
int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
const struct kvm_vcpu_init *init)
{
--- /dev/null
+/*
+ * Copyright (C) 2012 - Virtual Open Systems and Columbia University
+ * Author: Christoffer Dall <c.dall@virtualopensystems.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+#include <linux/kvm.h>
+#include <linux/kvm_host.h>
+#include <asm/kvm_emulate.h>
+#include <asm/kvm_coproc.h>
+#include <asm/kvm_mmu.h>
+#include <asm/kvm_psci.h>
+#include <trace/events/kvm.h>
+
+#include "trace.h"
+
+#include "trace.h"
+
+typedef int (*exit_handle_fn)(struct kvm_vcpu *, struct kvm_run *);
+
+static int handle_svc_hyp(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+ /* SVC called from Hyp mode should never get here */
+ kvm_debug("SVC called from Hyp mode shouldn't go here\n");
+ BUG();
+ return -EINVAL; /* Squash warning */
+}
+
+static int handle_hvc(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+ trace_kvm_hvc(*vcpu_pc(vcpu), *vcpu_reg(vcpu, 0),
+ kvm_vcpu_hvc_get_imm(vcpu));
+
+ if (kvm_psci_call(vcpu))
+ return 1;
+
+ kvm_inject_undefined(vcpu);
+ return 1;
+}
+
+static int handle_smc(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+ if (kvm_psci_call(vcpu))
+ return 1;
+
+ kvm_inject_undefined(vcpu);
+ return 1;
+}
+
+static int handle_pabt_hyp(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+ /* The hypervisor should never cause aborts */
+ kvm_err("Prefetch Abort taken from Hyp mode at %#08lx (HSR: %#08x)\n",
+ kvm_vcpu_get_hfar(vcpu), kvm_vcpu_get_hsr(vcpu));
+ return -EFAULT;
+}
+
+static int handle_dabt_hyp(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+	/* This is either an error in the world-switch code or an external abort */
+ kvm_err("Data Abort taken from Hyp mode at %#08lx (HSR: %#08x)\n",
+ kvm_vcpu_get_hfar(vcpu), kvm_vcpu_get_hsr(vcpu));
+ return -EFAULT;
+}
+
+/**
+ * kvm_handle_wfi - handle a wait-for-interrupts instruction executed by a guest
+ * @vcpu: the vcpu pointer
+ * @run: the kvm_run structure pointer
+ *
+ * Simply sets the wait_for_interrupts flag on the vcpu structure, which will
+ * halt execution of world-switches and schedule other host processes until
+ * there is an incoming IRQ or FIQ to the VM.
+ */
+static int kvm_handle_wfi(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+ trace_kvm_wfi(*vcpu_pc(vcpu));
+ kvm_vcpu_block(vcpu);
+ return 1;
+}
+
+static exit_handle_fn arm_exit_handlers[] = {
+ [HSR_EC_WFI] = kvm_handle_wfi,
+ [HSR_EC_CP15_32] = kvm_handle_cp15_32,
+ [HSR_EC_CP15_64] = kvm_handle_cp15_64,
+ [HSR_EC_CP14_MR] = kvm_handle_cp14_access,
+ [HSR_EC_CP14_LS] = kvm_handle_cp14_load_store,
+ [HSR_EC_CP14_64] = kvm_handle_cp14_access,
+ [HSR_EC_CP_0_13] = kvm_handle_cp_0_13_access,
+ [HSR_EC_CP10_ID] = kvm_handle_cp10_id,
+ [HSR_EC_SVC_HYP] = handle_svc_hyp,
+ [HSR_EC_HVC] = handle_hvc,
+ [HSR_EC_SMC] = handle_smc,
+ [HSR_EC_IABT] = kvm_handle_guest_abort,
+ [HSR_EC_IABT_HYP] = handle_pabt_hyp,
+ [HSR_EC_DABT] = kvm_handle_guest_abort,
+ [HSR_EC_DABT_HYP] = handle_dabt_hyp,
+};
+
+static exit_handle_fn kvm_get_exit_handler(struct kvm_vcpu *vcpu)
+{
+ u8 hsr_ec = kvm_vcpu_trap_get_class(vcpu);
+
+ if (hsr_ec >= ARRAY_SIZE(arm_exit_handlers) ||
+ !arm_exit_handlers[hsr_ec]) {
+		kvm_err("Unknown exception class: hsr: %#08x\n",
+ (unsigned int)kvm_vcpu_get_hsr(vcpu));
+ BUG();
+ }
+
+ return arm_exit_handlers[hsr_ec];
+}
+
+/*
+ * Return > 0 to return to guest, < 0 on error, 0 (and set exit_reason) on
+ * proper exit to userspace.
+ */
+int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
+ int exception_index)
+{
+ exit_handle_fn exit_handler;
+
+ switch (exception_index) {
+ case ARM_EXCEPTION_IRQ:
+ return 1;
+ case ARM_EXCEPTION_UNDEFINED:
+ kvm_err("Undefined exception in Hyp mode at: %#08lx\n",
+ kvm_vcpu_get_hyp_pc(vcpu));
+ BUG();
+ panic("KVM: Hypervisor undefined exception!\n");
+ case ARM_EXCEPTION_DATA_ABORT:
+ case ARM_EXCEPTION_PREF_ABORT:
+ case ARM_EXCEPTION_HVC:
+ /*
+ * See ARM ARM B1.14.1: "Hyp traps on instructions
+ * that fail their condition code check"
+ */
+ if (!kvm_condition_valid(vcpu)) {
+ kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
+ return 1;
+ }
+
+ exit_handler = kvm_get_exit_handler(vcpu);
+
+ return exit_handler(vcpu, run);
+ default:
+ kvm_pr_unimpl("Unsupported exception type: %d",
+ exception_index);
+ run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+ return 0;
+ }
+}
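
The tri-state return convention documented above is easiest to read from the caller's side. A hedged sketch of a run loop consuming handle_exit(); the wrapper function is hypothetical and the world switch is elided:

static int run_guest(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
	int ret = 1;

	while (ret > 0) {
		int exception_index = 0;	/* filled in by the world switch */

		/* ... enter the guest, return here on a trap ... */
		ret = handle_exit(vcpu, run, exception_index);
	}

	return ret;	/* 0: clean exit to userspace, < 0: error */
}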
/********************************************************************
* Flush per-VMID TLBs
*
- * void __kvm_tlb_flush_vmid(struct kvm *kvm);
+ * void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
*
* We rely on the hardware to broadcast the TLB invalidation to all CPUs
* inside the inner-shareable domain (which is the case for all v7
* implementations). If we come across a non-IS SMP implementation, we'll
* have to use an IPI based mechanism. Until then, we stick to the simple
* hardware assisted version.
+ *
+ * As v7 does not support flushing per IPA, just nuke the whole TLB
+ * instead, ignoring the ipa value.
*/
-ENTRY(__kvm_tlb_flush_vmid)
+ENTRY(__kvm_tlb_flush_vmid_ipa)
push {r2, r3}
add r0, r0, #KVM_VTTBR
pop {r2, r3}
bx lr
-ENDPROC(__kvm_tlb_flush_vmid)
+ENDPROC(__kvm_tlb_flush_vmid_ipa)
/********************************************************************
* Flush TLBs and instruction caches of all CPUs inside the inner-shareable
* instruction is issued since all traps are disabled when running the host
* kernel as per the Hyp-mode initialization at boot time.
*
- * HVC instructions cause a trap to the vector page + offset 0x18 (see hyp_hvc
+ * HVC instructions cause a trap to the vector page + offset 0x14 (see hyp_hvc
* below) when the HVC instruction is called from SVC mode (i.e. a guest or the
- * host kernel) and they cause a trap to the vector page + offset 0xc when HVC
+ * host kernel) and they cause a trap to the vector page + offset 0x8 when HVC
* instructions are called from within Hyp-mode.
*
* Hyp-ABI: Calling HYP-mode functions from host (in SVC mode):
*/
int kvm_handle_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
- __u32 *dest;
+ unsigned long *dest;
unsigned int len;
int mask;
if (!run->mmio.is_write) {
dest = vcpu_reg(vcpu, vcpu->arch.mmio_decode.rt);
- memset(dest, 0, sizeof(int));
+ *dest = 0;
len = run->mmio.len;
- if (len > 4)
+ if (len > sizeof(unsigned long))
return -EINVAL;
memcpy(dest, run->mmio.data, len);
trace_kvm_mmio(KVM_TRACE_MMIO_READ, len, run->mmio.phys_addr,
*((u64 *)run->mmio.data));
- if (vcpu->arch.mmio_decode.sign_extend && len < 4) {
+ if (vcpu->arch.mmio_decode.sign_extend &&
+ len < sizeof(unsigned long)) {
mask = 1U << ((len * 8) - 1);
*dest = (*dest ^ mask) - mask;
}
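
The (*dest ^ mask) - mask pair above is the standard branch-free sign extension: XORing with the sign bit and then subtracting it back propagates that bit through all upper bits. A self-contained worked example:

#include <stdio.h>

int main(void)
{
	unsigned long dest = 0x80;	/* a 1-byte MMIO read of -128 */
	unsigned int len = 1;
	unsigned long mask = 1UL << ((len * 8) - 1);	/* 0x80 */

	dest = (dest ^ mask) - mask;	/* upper bits now copy bit 7 */
	printf("%#lx\n", dest);		/* prints 0xff...f80 */
	return 0;
}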
unsigned long rt, len;
bool is_write, sign_extend;
- if ((vcpu->arch.hsr >> 8) & 1) {
+ if (kvm_vcpu_dabt_isextabt(vcpu)) {
/* cache operation on I/O addr, tell guest unsupported */
- kvm_inject_dabt(vcpu, vcpu->arch.hxfar);
+ kvm_inject_dabt(vcpu, kvm_vcpu_get_hfar(vcpu));
return 1;
}
- if ((vcpu->arch.hsr >> 7) & 1) {
+ if (kvm_vcpu_dabt_iss1tw(vcpu)) {
/* page table accesses IO mem: tell guest to fix its TTBR */
- kvm_inject_dabt(vcpu, vcpu->arch.hxfar);
+ kvm_inject_dabt(vcpu, kvm_vcpu_get_hfar(vcpu));
return 1;
}
- switch ((vcpu->arch.hsr >> 22) & 0x3) {
- case 0:
- len = 1;
- break;
- case 1:
- len = 2;
- break;
- case 2:
- len = 4;
- break;
- default:
- kvm_err("Hardware is weird: SAS 0b11 is reserved\n");
- return -EFAULT;
- }
+ len = kvm_vcpu_dabt_get_as(vcpu);
+ if (unlikely(len < 0))
+ return len;
- is_write = vcpu->arch.hsr & HSR_WNR;
- sign_extend = vcpu->arch.hsr & HSR_SSE;
- rt = (vcpu->arch.hsr & HSR_SRT_MASK) >> HSR_SRT_SHIFT;
+ is_write = kvm_vcpu_dabt_iswrite(vcpu);
+ sign_extend = kvm_vcpu_dabt_issext(vcpu);
+ rt = kvm_vcpu_dabt_get_rd(vcpu);
if (kvm_vcpu_reg_is_pc(vcpu, rt)) {
/* IO memory trying to read/write pc */
- kvm_inject_pabt(vcpu, vcpu->arch.hxfar);
+ kvm_inject_pabt(vcpu, kvm_vcpu_get_hfar(vcpu));
return 1;
}
* The MMIO instruction is emulated and should not be re-executed
* in the guest.
*/
- kvm_skip_instr(vcpu, (vcpu->arch.hsr >> 25) & 1);
+ kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
return 0;
}
* space do its magic.
*/
- if (vcpu->arch.hsr & HSR_ISV) {
+ if (kvm_vcpu_dabt_isvalid(vcpu)) {
ret = decode_hsr(vcpu, fault_ipa, &mmio);
if (ret)
return ret;
#include <linux/kvm_host.h>
#include <linux/io.h>
#include <trace/events/kvm.h>
-#include <asm/idmap.h>
#include <asm/pgalloc.h>
#include <asm/cacheflush.h>
#include <asm/kvm_arm.h>
#include <asm/kvm_mmio.h>
#include <asm/kvm_asm.h>
#include <asm/kvm_emulate.h>
-#include <asm/mach/map.h>
-#include <trace/events/kvm.h>
#include "trace.h"
static DEFINE_MUTEX(kvm_hyp_pgd_mutex);
-static void kvm_tlb_flush_vmid(struct kvm *kvm)
+static void kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
{
- kvm_call_hyp(__kvm_tlb_flush_vmid, kvm);
-}
-
-static void kvm_set_pte(pte_t *pte, pte_t new_pte)
-{
- pte_val(*pte) = new_pte;
- /*
- * flush_pmd_entry just takes a void pointer and cleans the necessary
- * cache entries, so we can reuse the function for ptes.
- */
- flush_pmd_entry(pte);
+ kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, kvm, ipa);
}
static int mmu_topup_memory_cache(struct kvm_mmu_memory_cache *cache,
}
}
+static void free_hyp_pgd_entry(unsigned long addr)
+{
+ pgd_t *pgd;
+ pud_t *pud;
+ pmd_t *pmd;
+ unsigned long hyp_addr = KERN_TO_HYP(addr);
+
+ pgd = hyp_pgd + pgd_index(hyp_addr);
+ pud = pud_offset(pgd, hyp_addr);
+
+ if (pud_none(*pud))
+ return;
+ BUG_ON(pud_bad(*pud));
+
+ pmd = pmd_offset(pud, hyp_addr);
+ free_ptes(pmd, addr);
+ pmd_free(NULL, pmd);
+ pud_clear(pud);
+}
+
/**
* free_hyp_pmds - free a Hyp-mode level-2 tables and child level-3 tables
*
* Assumes this is a page table used strictly in Hyp-mode and therefore contains
- * only mappings in the kernel memory area, which is above PAGE_OFFSET.
+ * either mappings in the kernel memory area (above PAGE_OFFSET), or
+ * device mappings in the vmalloc range (from VMALLOC_START to VMALLOC_END).
*/
void free_hyp_pmds(void)
{
- pgd_t *pgd;
- pud_t *pud;
- pmd_t *pmd;
unsigned long addr;
mutex_lock(&kvm_hyp_pgd_mutex);
- for (addr = PAGE_OFFSET; addr != 0; addr += PGDIR_SIZE) {
- pgd = hyp_pgd + pgd_index(addr);
- pud = pud_offset(pgd, addr);
-
- if (pud_none(*pud))
- continue;
- BUG_ON(pud_bad(*pud));
-
- pmd = pmd_offset(pud, addr);
- free_ptes(pmd, addr);
- pmd_free(NULL, pmd);
- pud_clear(pud);
- }
+ for (addr = PAGE_OFFSET; virt_addr_valid(addr); addr += PGDIR_SIZE)
+ free_hyp_pgd_entry(addr);
+	for (addr = VMALLOC_START; is_vmalloc_addr((void *)addr); addr += PGDIR_SIZE)
+ free_hyp_pgd_entry(addr);
mutex_unlock(&kvm_hyp_pgd_mutex);
}
struct page *page;
for (addr = start & PAGE_MASK; addr < end; addr += PAGE_SIZE) {
- pte = pte_offset_kernel(pmd, addr);
+ unsigned long hyp_addr = KERN_TO_HYP(addr);
+
+ pte = pte_offset_kernel(pmd, hyp_addr);
BUG_ON(!virt_addr_valid(addr));
page = virt_to_page(addr);
kvm_set_pte(pte, mk_pte(page, PAGE_HYP));
unsigned long addr;
for (addr = start & PAGE_MASK; addr < end; addr += PAGE_SIZE) {
- pte = pte_offset_kernel(pmd, addr);
+ unsigned long hyp_addr = KERN_TO_HYP(addr);
+
+ pte = pte_offset_kernel(pmd, hyp_addr);
BUG_ON(pfn_valid(*pfn_base));
kvm_set_pte(pte, pfn_pte(*pfn_base, PAGE_HYP_DEVICE));
(*pfn_base)++;
unsigned long addr, next;
for (addr = start; addr < end; addr = next) {
- pmd = pmd_offset(pud, addr);
+ unsigned long hyp_addr = KERN_TO_HYP(addr);
+ pmd = pmd_offset(pud, hyp_addr);
BUG_ON(pmd_sect(*pmd));
if (pmd_none(*pmd)) {
- pte = pte_alloc_one_kernel(NULL, addr);
+ pte = pte_alloc_one_kernel(NULL, hyp_addr);
if (!pte) {
kvm_err("Cannot allocate Hyp pte\n");
return -ENOMEM;
unsigned long addr, next;
int err = 0;
- BUG_ON(start > end);
- if (start < PAGE_OFFSET)
+ if (start >= end)
+ return -EINVAL;
+ /* Check for a valid kernel memory mapping */
+ if (!pfn_base && (!virt_addr_valid(from) || !virt_addr_valid(to - 1)))
+ return -EINVAL;
+ /* Check for a valid kernel IO mapping */
+ if (pfn_base && (!is_vmalloc_addr(from) || !is_vmalloc_addr(to - 1)))
return -EINVAL;
mutex_lock(&kvm_hyp_pgd_mutex);
for (addr = start; addr < end; addr = next) {
- pgd = hyp_pgd + pgd_index(addr);
- pud = pud_offset(pgd, addr);
+ unsigned long hyp_addr = KERN_TO_HYP(addr);
+ pgd = hyp_pgd + pgd_index(hyp_addr);
+ pud = pud_offset(pgd, hyp_addr);
if (pud_none_or_clear_bad(pud)) {
- pmd = pmd_alloc_one(NULL, addr);
+ pmd = pmd_alloc_one(NULL, hyp_addr);
if (!pmd) {
kvm_err("Cannot allocate Hyp pmd\n");
err = -ENOMEM;
}
/**
- * create_hyp_mappings - map a kernel virtual address range in Hyp mode
+ * create_hyp_mappings - duplicate a kernel virtual address range in Hyp mode
* @from: The virtual kernel start address of the range
* @to: The virtual kernel end address of the range (exclusive)
*
- * The same virtual address as the kernel virtual address is also used in
- * Hyp-mode mapping to the same underlying physical pages.
+ * The same virtual address as the kernel virtual address is also used
+ * in Hyp-mode mapping (modulo HYP_PAGE_OFFSET) to the same underlying
+ * physical pages.
*
* Note: Wrapping around zero in the "to" address is not supported.
*/
}
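
The "same VA modulo HYP_PAGE_OFFSET" rule is a plain offset translation between the two address spaces. A sketch with illustrative constants; the kernel's own KERN_TO_HYP and HYP_PAGE_OFFSET definitions are authoritative:

/* Assumed values for illustration: a 3G/1G split kernel whose Hyp
 * copy of the linear map starts at VA 0. */
#define PAGE_OFFSET		0xc0000000UL
#define HYP_PAGE_OFFSET		0x00000000UL

#define KERN_TO_HYP(kva) \
	((unsigned long)(kva) - PAGE_OFFSET + HYP_PAGE_OFFSET)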
/**
- * create_hyp_io_mappings - map a physical IO range in Hyp mode
- * @from: The virtual HYP start address of the range
- * @to: The virtual HYP end address of the range (exclusive)
+ * create_hyp_io_mappings - duplicate a kernel IO mapping into Hyp mode
+ * @from: The kernel start VA of the range
+ * @to: The kernel end VA of the range (exclusive)
* @addr: The physical start address which gets mapped
+ *
+ * The resulting HYP VA is the same as the kernel VA, modulo
+ * HYP_PAGE_OFFSET.
*/
int create_hyp_io_mappings(void *from, void *to, phys_addr_t addr)
{
VM_BUG_ON((unsigned long)pgd & (S2_PGD_SIZE - 1));
memset(pgd, 0, PTRS_PER_S2_PGD * sizeof(pgd_t));
- clean_dcache_area(pgd, PTRS_PER_S2_PGD * sizeof(pgd_t));
+ kvm_clean_pgd(pgd);
kvm->arch.pgd = pgd;
return 0;
return 0; /* ignore calls from kvm_set_spte_hva */
pmd = mmu_memory_cache_alloc(cache);
pud_populate(NULL, pud, pmd);
- pmd += pmd_index(addr);
get_page(virt_to_page(pud));
- } else
- pmd = pmd_offset(pud, addr);
+ }
+
+ pmd = pmd_offset(pud, addr);
/* Create 2nd stage page table mapping - Level 2 */
if (pmd_none(*pmd)) {
if (!cache)
return 0; /* ignore calls from kvm_set_spte_hva */
pte = mmu_memory_cache_alloc(cache);
- clean_pte_table(pte);
+ kvm_clean_pte(pte);
pmd_populate_kernel(NULL, pmd, pte);
- pte += pte_index(addr);
get_page(virt_to_page(pmd));
- } else
- pte = pte_offset_kernel(pmd, addr);
+ }
+
+ pte = pte_offset_kernel(pmd, addr);
if (iomap && pte_present(*pte))
return -EFAULT;
old_pte = *pte;
kvm_set_pte(pte, *new_pte);
if (pte_present(old_pte))
- kvm_tlb_flush_vmid(kvm);
+ kvm_tlb_flush_vmid_ipa(kvm, addr);
else
get_page(virt_to_page(pte));
pfn = __phys_to_pfn(pa);
for (addr = guest_ipa; addr < end; addr += PAGE_SIZE) {
- pte_t pte = pfn_pte(pfn, PAGE_S2_DEVICE | L_PTE_S2_RDWR);
+ pte_t pte = pfn_pte(pfn, PAGE_S2_DEVICE);
+ kvm_set_s2pte_writable(&pte);
ret = mmu_topup_memory_cache(&cache, 2, 2);
if (ret)
return ret;
}
-static void coherent_icache_guest_page(struct kvm *kvm, gfn_t gfn)
-{
- /*
- * If we are going to insert an instruction page and the icache is
- * either VIPT or PIPT, there is a potential problem where the host
- * (or another VM) may have used the same page as this guest, and we
- * read incorrect data from the icache. If we're using a PIPT cache,
- * we can invalidate just that page, but if we are using a VIPT cache
- * we need to invalidate the entire icache - damn shame - as written
- * in the ARM ARM (DDI 0406C.b - Page B3-1393).
- *
- * VIVT caches are tagged using both the ASID and the VMID and doesn't
- * need any kind of flushing (DDI 0406C.b - Page B3-1392).
- */
- if (icache_is_pipt()) {
- unsigned long hva = gfn_to_hva(kvm, gfn);
- __cpuc_coherent_user_range(hva, hva + PAGE_SIZE);
- } else if (!icache_is_vivt_asid_tagged()) {
- /* any kind of VIPT cache */
- __flush_icache_all();
- }
-}
-
static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
gfn_t gfn, struct kvm_memory_slot *memslot,
unsigned long fault_status)
unsigned long mmu_seq;
struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
- write_fault = kvm_is_write_fault(vcpu->arch.hsr);
+ write_fault = kvm_is_write_fault(kvm_vcpu_get_hsr(vcpu));
if (fault_status == FSC_PERM && !write_fault) {
kvm_err("Unexpected L2 read permission error\n");
return -EFAULT;
if (mmu_notifier_retry(vcpu->kvm, mmu_seq))
goto out_unlock;
if (writable) {
- pte_val(new_pte) |= L_PTE_S2_RDWR;
+ kvm_set_s2pte_writable(&new_pte);
kvm_set_pfn_dirty(pfn);
}
stage2_set_pte(vcpu->kvm, memcache, fault_ipa, &new_pte, false);
*/
int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
- unsigned long hsr_ec;
unsigned long fault_status;
phys_addr_t fault_ipa;
struct kvm_memory_slot *memslot;
gfn_t gfn;
int ret, idx;
- hsr_ec = vcpu->arch.hsr >> HSR_EC_SHIFT;
- is_iabt = (hsr_ec == HSR_EC_IABT);
- fault_ipa = ((phys_addr_t)vcpu->arch.hpfar & HPFAR_MASK) << 8;
+ is_iabt = kvm_vcpu_trap_is_iabt(vcpu);
+ fault_ipa = kvm_vcpu_get_fault_ipa(vcpu);
- trace_kvm_guest_fault(*vcpu_pc(vcpu), vcpu->arch.hsr,
- vcpu->arch.hxfar, fault_ipa);
+ trace_kvm_guest_fault(*vcpu_pc(vcpu), kvm_vcpu_get_hsr(vcpu),
+ kvm_vcpu_get_hfar(vcpu), fault_ipa);
/* Check the stage-2 fault is trans. fault or write fault */
- fault_status = (vcpu->arch.hsr & HSR_FSC_TYPE);
+ fault_status = kvm_vcpu_trap_get_fault(vcpu);
if (fault_status != FSC_FAULT && fault_status != FSC_PERM) {
- kvm_err("Unsupported fault status: EC=%#lx DFCS=%#lx\n",
- hsr_ec, fault_status);
+ kvm_err("Unsupported fault status: EC=%#x DFCS=%#lx\n",
+ kvm_vcpu_trap_get_class(vcpu), fault_status);
return -EFAULT;
}
if (!kvm_is_visible_gfn(vcpu->kvm, gfn)) {
if (is_iabt) {
/* Prefetch Abort on I/O address */
- kvm_inject_pabt(vcpu, vcpu->arch.hxfar);
+ kvm_inject_pabt(vcpu, kvm_vcpu_get_hfar(vcpu));
ret = 1;
goto out_unlock;
}
goto out_unlock;
}
- /* Adjust page offset */
- fault_ipa |= vcpu->arch.hxfar & ~PAGE_MASK;
+ /*
+ * The IPA is reported as [MAX:12], so we need to
+ * complement it with the bottom 12 bits from the
+ * faulting VA. This is always 12 bits, irrespective
+ * of the page size.
+ */
+ fault_ipa |= kvm_vcpu_get_hfar(vcpu) & ((1 << 12) - 1);
ret = io_mem_abort(vcpu, run, fault_ipa);
goto out_unlock;
}
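
A quick numeric check of the recombination above, for a hypothetical fault at guest physical address 0x80010abc:

/* fault_ipa from HPFAR:	0x80010000  (bits [MAX:12], page-aligned)
 * hfar & ((1 << 12) - 1):	0x00000abc  (low 12 bits of the faulting VA)
 * ORed together:		0x80010abc  (the full faulting IPA)
 */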
static void kvm_unmap_hva_handler(struct kvm *kvm, gpa_t gpa, void *data)
{
unmap_stage2_range(kvm, gpa, PAGE_SIZE);
- kvm_tlb_flush_vmid(kvm);
+ kvm_tlb_flush_vmid_ipa(kvm, gpa);
}
int kvm_unmap_hva(struct kvm *kvm, unsigned long hva)
pmd = pmd_offset(pud, addr);
pud_clear(pud);
- clean_pmd_entry(pmd);
+ kvm_clean_pmd_entry(pmd);
pmd_free(NULL, (pmd_t *)((unsigned long)pmd & PAGE_MASK));
} while (pgd++, addr = next, addr < end);
}
lr, irq, vgic_cpu->vgic_lr[lr]);
BUG_ON(!test_bit(lr, vgic_cpu->lr_used));
vgic_cpu->vgic_lr[lr] |= GICH_LR_PENDING_BIT;
-
- goto out;
+ return true;
}
/* Try to use another LR for this interrupt */
vgic_cpu->vgic_irq_lr_map[irq] = lr;
set_bit(lr, vgic_cpu->lr_used);
-out:
if (!vgic_irq_is_edge(vcpu, irq))
vgic_cpu->vgic_lr[lr] |= GICH_LR_EOI;
kvm_debug("MISR = %08x\n", vgic_cpu->vgic_misr);
- /*
- * We do not need to take the distributor lock here, since the only
- * action we perform is clearing the irq_active_bit for an EOIed
- * level interrupt. There is a potential race with
- * the queuing of an interrupt in __kvm_vgic_flush_hwstate(), where we
- * check if the interrupt is already active. Two possibilities:
- *
- * - The queuing is occurring on the same vcpu: cannot happen,
- * as we're already in the context of this vcpu, and
- * executing the handler
- * - The interrupt has been migrated to another vcpu, and we
- * ignore this interrupt for this run. Big deal. It is still
- * pending though, and will get considered when this vcpu
- * exits.
- */
if (vgic_cpu->vgic_misr & GICH_MISR_EOI) {
/*
* Some level interrupts have been EOIed. Clear their
} else {
vgic_cpu_irq_clear(vcpu, irq);
}
+
+ /*
+ * Despite being EOIed, the LR may not have
+ * been marked as empty.
+ */
+ set_bit(lr, (unsigned long *)vgic_cpu->vgic_elrsr);
+ vgic_cpu->vgic_lr[lr] &= ~GICH_LR_ACTIVE_BIT;
}
}
}
/*
- * Sync back the VGIC state after a guest run. We do not really touch
- * the distributor here (the irq_pending_on_cpu bit is safe to set),
- * so there is no need for taking its lock.
+ * Sync back the VGIC state after a guest run. The distributor lock is
+ * needed so we don't get preempted in the middle of the state processing.
*/
static void __kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
{
void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
{
+ struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
+
if (!irqchip_in_kernel(vcpu->kvm))
return;
+ spin_lock(&dist->lock);
__kvm_vgic_sync_hwstate(vcpu);
+ spin_unlock(&dist->lock);
}
int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu)
if (addr & ~KVM_PHYS_MASK)
return -E2BIG;
- if (addr & ~PAGE_MASK)
+ if (addr & (SZ_4K - 1))
return -EINVAL;
mutex_lock(&kvm->lock);
static void __timer_const_udelay(unsigned long xloops)
{
unsigned long long loops = xloops;
- loops *= loops_per_jiffy;
+ loops *= arm_delay_ops.ticks_per_jiffy;
__timer_delay(loops >> UDELAY_SHIFT);
}
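
With the multiply-and-shift above, xloops is microseconds pre-scaled into fixed point, so a single multiply by the private tick rate and a right shift yield timer ticks. A hedged arithmetic sketch with made-up clock numbers, assuming UDELAY_SHIFT is 30 as in the ARM delay code of this era:

#include <stdio.h>

#define UDELAY_SHIFT 30

int main(void)
{
	unsigned long long hz = 100;			/* scheduler tick */
	unsigned long long ticks_per_jiffy = 10000;	/* 1 MHz timer / HZ */
	/* usecs -> xloops multiplier: 2^30 * HZ / 10^6, precomputed */
	unsigned long long mult = ((1ULL << UDELAY_SHIFT) * hz) / 1000000;
	unsigned long long usecs = 500;

	unsigned long long loops =
		(usecs * mult * ticks_per_jiffy) >> UDELAY_SHIFT;

	printf("%llu ticks\n", loops);	/* ~500: 500us on a 1 MHz timer */
	return 0;
}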
pr_info("Switching to timer-based delay loop\n");
delay_timer = timer;
lpj_fine = timer->freq / HZ;
- loops_per_jiffy = lpj_fine;
+
+ /* cpufreq may scale loops_per_jiffy, so keep a private copy */
+ arm_delay_ops.ticks_per_jiffy = lpj_fine;
arm_delay_ops.delay = __timer_delay;
arm_delay_ops.const_udelay = __timer_const_udelay;
arm_delay_ops.udelay = __timer_udelay;
- arm_delay_ops.const_clock = true;
+
delay_calibrated = true;
} else {
pr_info("Ignoring duplicate/late registration of read_current_timer delay\n");
static struct map_desc cns3xxx_io_desc[] __initdata = {
{
- .virtual = CNS3XXX_TC11MP_TWD_BASE_VIRT,
- .pfn = __phys_to_pfn(CNS3XXX_TC11MP_TWD_BASE),
- .length = SZ_4K,
- .type = MT_DEVICE,
- }, {
- .virtual = CNS3XXX_TC11MP_GIC_CPU_BASE_VIRT,
- .pfn = __phys_to_pfn(CNS3XXX_TC11MP_GIC_CPU_BASE),
- .length = SZ_4K,
- .type = MT_DEVICE,
- }, {
- .virtual = CNS3XXX_TC11MP_GIC_DIST_BASE_VIRT,
- .pfn = __phys_to_pfn(CNS3XXX_TC11MP_GIC_DIST_BASE),
- .length = SZ_4K,
+ .virtual = CNS3XXX_TC11MP_SCU_BASE_VIRT,
+ .pfn = __phys_to_pfn(CNS3XXX_TC11MP_SCU_BASE),
+ .length = SZ_8K,
.type = MT_DEVICE,
}, {
.virtual = CNS3XXX_TIMER1_2_3_BASE_VIRT,
#define RTC_INTR_STS_OFFSET 0x34
#define CNS3XXX_MISC_BASE 0x76000000 /* Misc Control */
-#define CNS3XXX_MISC_BASE_VIRT 0xFFF07000 /* Misc Control */
+#define CNS3XXX_MISC_BASE_VIRT 0xFB000000 /* Misc Control */
#define CNS3XXX_PM_BASE 0x77000000 /* Power Management Control */
-#define CNS3XXX_PM_BASE_VIRT 0xFFF08000
+#define CNS3XXX_PM_BASE_VIRT 0xFB001000
#define PM_CLK_GATE_OFFSET 0x00
#define PM_SOFT_RST_OFFSET 0x04
#define PM_PLL_HM_PD_OFFSET 0x1C
#define CNS3XXX_UART0_BASE 0x78000000 /* UART 0 */
-#define CNS3XXX_UART0_BASE_VIRT 0xFFF09000
+#define CNS3XXX_UART0_BASE_VIRT 0xFB002000
#define CNS3XXX_UART1_BASE 0x78400000 /* UART 1 */
#define CNS3XXX_UART1_BASE_VIRT 0xFFF0A000
#define CNS3XXX_I2S_BASE_VIRT 0xFFF10000
#define CNS3XXX_TIMER1_2_3_BASE 0x7C800000 /* Timer */
-#define CNS3XXX_TIMER1_2_3_BASE_VIRT 0xFFF10800
+#define CNS3XXX_TIMER1_2_3_BASE_VIRT 0xFB003000
#define TIMER1_COUNTER_OFFSET 0x00
#define TIMER1_AUTO_RELOAD_OFFSET 0x04
* Testchip peripheral and fpga gic regions
*/
#define CNS3XXX_TC11MP_SCU_BASE 0x90000000 /* IRQ, Test chip */
-#define CNS3XXX_TC11MP_SCU_BASE_VIRT 0xFF000000
+#define CNS3XXX_TC11MP_SCU_BASE_VIRT 0xFB004000
#define CNS3XXX_TC11MP_GIC_CPU_BASE 0x90000100 /* Test chip interrupt controller CPU interface */
-#define CNS3XXX_TC11MP_GIC_CPU_BASE_VIRT 0xFF000100
+#define CNS3XXX_TC11MP_GIC_CPU_BASE_VIRT (CNS3XXX_TC11MP_SCU_BASE_VIRT + 0x100)
#define CNS3XXX_TC11MP_TWD_BASE 0x90000600
-#define CNS3XXX_TC11MP_TWD_BASE_VIRT 0xFF000600
+#define CNS3XXX_TC11MP_TWD_BASE_VIRT (CNS3XXX_TC11MP_SCU_BASE_VIRT + 0x600)
#define CNS3XXX_TC11MP_GIC_DIST_BASE 0x90001000 /* Test chip interrupt controller distributor */
-#define CNS3XXX_TC11MP_GIC_DIST_BASE_VIRT 0xFF001000
+#define CNS3XXX_TC11MP_GIC_DIST_BASE_VIRT (CNS3XXX_TC11MP_SCU_BASE_VIRT + 0x1000)
#define CNS3XXX_TC11MP_L220_BASE 0x92002000 /* L220 registers */
#define CNS3XXX_TC11MP_L220_BASE_VIRT 0xFF002000
static inline void putc(int c)
{
- /* Transmit fifo not full? */
- while (__raw_readb(PHYS_UART_FLAG) & UART_FLAG_TXFF)
- ;
+ int i;
+
+ for (i = 0; i < 10000; i++) {
+ /* Transmit fifo not full? */
+ if (!(__raw_readb(PHYS_UART_FLAG) & UART_FLAG_TXFF))
+ break;
+ }
__raw_writeb(c, PHYS_UART_DATA);
}
{
unsigned int v;
- flush_cache_all();
asm volatile(
" mcr p15, 0, %1, c7, c5, 0\n"
" mcr p15, 0, %1, c7, c10, 4\n"
* this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <linux/kernel.h>
-
#include <asm/cacheflush.h>
#include "core.h"
*/
void __ref highbank_cpu_die(unsigned int cpu)
{
- flush_cache_all();
-
highbank_set_cpu_jump(cpu, phys_to_virt(0));
- highbank_set_core_pwr();
- cpu_do_idle();
+ flush_cache_louis();
+ highbank_set_core_pwr();
- /* We should never return from idle */
- panic("highbank: cpu %d unexpectedly exit from shutdown\n", cpu);
+ while (1)
+ cpu_do_idle();
}
clk_register_clkdev(clk[wdog_gate], NULL, "imx2-wdt.0");
clk_register_clkdev(clk[nfc_div], NULL, "imx25-nand.0");
clk_register_clkdev(clk[csi_gate], NULL, "mx3-camera.0");
+ clk_register_clkdev(clk[admux_gate], "audmux", NULL);
clk_prepare_enable(clk[spba_gate]);
clk_prepare_enable(clk[gpio1_gate]);
clk_prepare_enable(clk[iim_gate]);
clk_prepare_enable(clk[emi_gate]);
clk_prepare_enable(clk[max_gate]);
+ clk_prepare_enable(clk[iomuxc_gate]);
/*
* SCC is needed to boot via mmc after a watchdog reset. The clock code
static const char *gpu3d_core_sels[] = { "mmdc_ch0_axi", "pll3_usb_otg", "pll2_pfd1_594m", "pll2_pfd2_396m", };
static const char *gpu3d_shader_sels[] = { "mmdc_ch0_axi", "pll3_usb_otg", "pll2_pfd1_594m", "pll2_pfd9_720m", };
static const char *ipu_sels[] = { "mmdc_ch0_axi", "pll2_pfd2_396m", "pll3_120m", "pll3_pfd1_540m", };
-static const char *ldb_di_sels[] = { "pll5_video", "pll2_pfd0_352m", "pll2_pfd2_396m", "mmdc_ch1_axi", "pll3_pfd1_540m", };
+static const char *ldb_di_sels[] = { "pll5_video", "pll2_pfd0_352m", "pll2_pfd2_396m", "mmdc_ch1_axi", "pll3_usb_otg", };
static const char *ipu_di_pre_sels[] = { "mmdc_ch0_axi", "pll3_usb_otg", "pll5_video", "pll2_pfd0_352m", "pll2_pfd2_396m", "pll3_pfd1_540m", };
static const char *ipu1_di0_sels[] = { "ipu1_di0_pre", "dummy", "dummy", "ldb_di0", "ldb_di1", };
static const char *ipu1_di1_sels[] = { "ipu1_di1_pre", "dummy", "dummy", "ldb_di0", "ldb_di1", };
clk_register_clkdev(clk[gpt_ipg], "ipg", "imx-gpt.0");
clk_register_clkdev(clk[gpt_ipg_per], "per", "imx-gpt.0");
- clk_register_clkdev(clk[twd], NULL, "smp_twd");
clk_register_clkdev(clk[cko1_sel], "cko1_sel", NULL);
clk_register_clkdev(clk[ahb], "ahb", NULL);
clk_register_clkdev(clk[cko1], "cko1", NULL);
extern void imx_enable_cpu(int cpu, bool enable);
extern void imx_set_cpu_jump(int cpu, void *jump_addr);
+extern u32 imx_get_cpu_arg(int cpu);
+extern void imx_set_cpu_arg(int cpu, u32 arg);
extern void v7_cpu_resume(void);
extern u32 *pl310_get_save_ptr(void);
#ifdef CONFIG_SMP
*/
#include <linux/errno.h>
-#include <asm/cacheflush.h>
#include <asm/cp15.h>
#include "common.h"
{
unsigned int v;
- flush_cache_all();
asm volatile(
"mcr p15, 0, %1, c7, c5, 0\n"
" mcr p15, 0, %1, c7, c10, 4\n"
void imx_cpu_die(unsigned int cpu)
{
cpu_enter_lowpower();
+ /*
+ * We use the cpu jumping argument register to sync with
+	 * imx_cpu_kill(), which runs on cpu0 and waits for this register
+	 * to become non-zero before it cuts the cpu's power.
+ */
+ imx_set_cpu_arg(cpu, ~0);
cpu_do_idle();
}
int imx_cpu_kill(unsigned int cpu)
{
+ unsigned long timeout = jiffies + msecs_to_jiffies(50);
+
+ while (imx_get_cpu_arg(cpu) == 0)
+ if (time_after(jiffies, timeout))
+ return 0;
imx_enable_cpu(cpu, false);
+ imx_set_cpu_arg(cpu, 0);
return 1;
}
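
The two hunks above form a small handshake: the dying CPU publishes ~0 in its jump-argument register, imx_cpu_kill() spins (bounded to 50ms) until it sees a non-zero value, cuts power, then clears the register for the next hotplug cycle. The same protocol with an ordinary atomic flag, as a hedged sketch:

#include <stdatomic.h>
#include <stdbool.h>

static atomic_uint cpu_arg;	/* stands in for the SRC_GPR1 word */

static void dying_cpu(void)
{
	atomic_store(&cpu_arg, ~0u);	/* "ready to be killed" */
	for (;;)
		;			/* cpu_do_idle() in the real code */
}

static bool killer_cpu(unsigned int spins)
{
	while (atomic_load(&cpu_arg) == 0)
		if (spins-- == 0)
			return false;	/* timed out, like the jiffies check */

	/* cut power here, as imx_enable_cpu(cpu, false) does */
	atomic_store(&cpu_arg, 0);	/* rearm for the next hotplug */
	return true;
}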
src_base + SRC_GPR1 + cpu * 8);
}
+u32 imx_get_cpu_arg(int cpu)
+{
+ cpu = cpu_logical_map(cpu);
+ return readl_relaxed(src_base + SRC_GPR1 + cpu * 8 + 4);
+}
+
+void imx_set_cpu_arg(int cpu, u32 arg)
+{
+ cpu = cpu_logical_map(cpu);
+ writel_relaxed(arg, src_base + SRC_GPR1 + cpu * 8 + 4);
+}
+
void imx_src_prepare_restart(void)
{
u32 val;
.duplex = DUPLEX_FULL,
};
+static struct mv643xx_eth_platform_data iomega_ix2_200_ge01_data = {
+ .phy_addr = MV643XX_ETH_PHY_ADDR(11),
+};
+
void __init iomega_ix2_200_init(void)
{
/*
* Basic setup. Needs to be called early.
*/
- kirkwood_ge01_init(&iomega_ix2_200_ge00_data);
+ kirkwood_ge00_init(&iomega_ix2_200_ge00_data);
+ kirkwood_ge01_init(&iomega_ix2_200_ge01_data);
}
static struct mvsdio_platform_data guruplug_mvsdio_data = {
/* unfortunately the CD signal has not been connected */
+ .gpio_card_detect = -1,
+ .gpio_write_protect = -1,
};
static struct gpio_led guruplug_led_pins[] = {
static struct mvsdio_platform_data openrd_mvsdio_data = {
.gpio_card_detect = 29, /* MPP29 used as SD card detect */
+ .gpio_write_protect = -1,
};
static unsigned int openrd_mpp_config[] __initdata = {
static struct mvsdio_platform_data rd88f6281_mvsdio_data = {
.gpio_card_detect = 28,
+ .gpio_write_protect = -1,
};
static unsigned int rd88f6281_mpp_config[] __initdata = {
#include <linux/errno.h>
#include <linux/smp.h>
-#include <asm/cacheflush.h>
#include <asm/smp_plat.h>
#include "common.h"
static inline void cpu_enter_lowpower(void)
{
- /* Just flush the cache. Changing the coherency is not yet
- * available on msm. */
- flush_cache_all();
}
static inline void cpu_leave_lowpower(void)
{
u32 ctrl = readl_relaxed(event_base + TIMER_ENABLE);
- writel_relaxed(0, event_base + TIMER_CLEAR);
+ ctrl &= ~TIMER_ENABLE_EN;
+ writel_relaxed(ctrl, event_base + TIMER_ENABLE);
+
+ writel_relaxed(ctrl, event_base + TIMER_CLEAR);
writel_relaxed(cycles, event_base + TIMER_MATCH_VAL);
writel_relaxed(ctrl | TIMER_ENABLE_EN, event_base + TIMER_ENABLE);
return 0;
#define ARMADA_370_XP_MAX_PER_CPU_IRQS (28)
+#define ARMADA_370_XP_TIMER0_PER_CPU_IRQ (5)
+
#define ACTIVE_DOORBELLS (8)
static DEFINE_RAW_SPINLOCK(irq_controller_lock);
*/
static void armada_370_xp_irq_mask(struct irq_data *d)
{
-#ifdef CONFIG_SMP
irq_hw_number_t hwirq = irqd_to_hwirq(d);
- if (hwirq > ARMADA_370_XP_MAX_PER_CPU_IRQS)
+ if (hwirq != ARMADA_370_XP_TIMER0_PER_CPU_IRQ)
writel(hwirq, main_int_base +
ARMADA_370_XP_INT_CLEAR_ENABLE_OFFS);
else
writel(hwirq, per_cpu_int_base +
ARMADA_370_XP_INT_SET_MASK_OFFS);
-#else
- writel(irqd_to_hwirq(d),
- per_cpu_int_base + ARMADA_370_XP_INT_SET_MASK_OFFS);
-#endif
}
static void armada_370_xp_irq_unmask(struct irq_data *d)
{
-#ifdef CONFIG_SMP
irq_hw_number_t hwirq = irqd_to_hwirq(d);
- if (hwirq > ARMADA_370_XP_MAX_PER_CPU_IRQS)
+ if (hwirq != ARMADA_370_XP_TIMER0_PER_CPU_IRQ)
writel(hwirq, main_int_base +
ARMADA_370_XP_INT_SET_ENABLE_OFFS);
else
writel(hwirq, per_cpu_int_base +
ARMADA_370_XP_INT_CLEAR_MASK_OFFS);
-#else
- writel(irqd_to_hwirq(d),
- per_cpu_int_base + ARMADA_370_XP_INT_CLEAR_MASK_OFFS);
-#endif
}
#ifdef CONFIG_SMP
unsigned int virq, irq_hw_number_t hw)
{
armada_370_xp_irq_mask(irq_get_irq_data(virq));
- writel(hw, main_int_base + ARMADA_370_XP_INT_SET_ENABLE_OFFS);
+ if (hw != ARMADA_370_XP_TIMER0_PER_CPU_IRQ)
+ writel(hw, per_cpu_int_base +
+ ARMADA_370_XP_INT_CLEAR_MASK_OFFS);
+ else
+ writel(hw, main_int_base + ARMADA_370_XP_INT_SET_ENABLE_OFFS);
irq_set_status_flags(virq, IRQ_LEVEL);
- if (hw < ARMADA_370_XP_MAX_PER_CPU_IRQS) {
+ if (hw == ARMADA_370_XP_TIMER0_PER_CPU_IRQ) {
irq_set_percpu_devid(virq);
irq_set_chip_and_handler(virq, &armada_370_xp_irq_chip,
handle_percpu_devid_irq);
.lower_margin = 4,
.hsync_len = 1,
.vsync_len = 1,
- .sync = FB_SYNC_DATA_ENABLE_HIGH_ACT |
- FB_SYNC_DOTCLK_FAILING_ACT,
},
};
.lower_margin = 10,
.hsync_len = 10,
.vsync_len = 10,
- .sync = FB_SYNC_DATA_ENABLE_HIGH_ACT |
- FB_SYNC_DOTCLK_FAILING_ACT,
},
};
.lower_margin = 45,
.hsync_len = 1,
.vsync_len = 1,
- .sync = FB_SYNC_DATA_ENABLE_HIGH_ACT,
},
};
.lower_margin = 13,
.hsync_len = 48,
.vsync_len = 3,
- .sync = FB_SYNC_HOR_HIGH_ACT | FB_SYNC_VERT_HIGH_ACT |
- FB_SYNC_DATA_ENABLE_HIGH_ACT |
- FB_SYNC_DOTCLK_FAILING_ACT,
+ .sync = FB_SYNC_HOR_HIGH_ACT | FB_SYNC_VERT_HIGH_ACT,
},
};
.lower_margin = 0x15,
.hsync_len = 64,
.vsync_len = 4,
- .sync = FB_SYNC_HOR_HIGH_ACT | FB_SYNC_VERT_HIGH_ACT |
- FB_SYNC_DATA_ENABLE_HIGH_ACT |
- FB_SYNC_DOTCLK_FAILING_ACT,
+ .sync = FB_SYNC_HOR_HIGH_ACT | FB_SYNC_VERT_HIGH_ACT,
},
};
.lower_margin = 2,
.hsync_len = 15,
.vsync_len = 15,
- .sync = FB_SYNC_DATA_ENABLE_HIGH_ACT
},
};
mxsfb_pdata.mode_count = ARRAY_SIZE(mx23evk_video_modes);
mxsfb_pdata.default_bpp = 32;
mxsfb_pdata.ld_intf_width = STMLCDIF_24BIT;
+ mxsfb_pdata.sync = MXSFB_SYNC_DATA_ENABLE_HIGH_ACT |
+ MXSFB_SYNC_DOTCLK_FAILING_ACT;
}
static inline void enable_clk_enet_out(void)
mxsfb_pdata.mode_count = ARRAY_SIZE(mx28evk_video_modes);
mxsfb_pdata.default_bpp = 32;
mxsfb_pdata.ld_intf_width = STMLCDIF_24BIT;
+ mxsfb_pdata.sync = MXSFB_SYNC_DATA_ENABLE_HIGH_ACT |
+ MXSFB_SYNC_DOTCLK_FAILING_ACT;
mxs_saif_clkmux_select(MXS_DIGCTL_SAIF_CLKMUX_EXTMSTR0);
}
mxsfb_pdata.mode_count = ARRAY_SIZE(m28evk_video_modes);
mxsfb_pdata.default_bpp = 16;
mxsfb_pdata.ld_intf_width = STMLCDIF_18BIT;
+ mxsfb_pdata.sync = MXSFB_SYNC_DATA_ENABLE_HIGH_ACT;
}
static void __init sc_sps1_init(void)
mxsfb_pdata.mode_count = ARRAY_SIZE(apx4devkit_video_modes);
mxsfb_pdata.default_bpp = 32;
mxsfb_pdata.ld_intf_width = STMLCDIF_24BIT;
+ mxsfb_pdata.sync = MXSFB_SYNC_DATA_ENABLE_HIGH_ACT |
+ MXSFB_SYNC_DOTCLK_FAILING_ACT;
}
#define ENET0_MDC__GPIO_4_0 MXS_GPIO_NR(4, 0)
mxsfb_pdata.mode_count = ARRAY_SIZE(cfa10049_video_modes);
mxsfb_pdata.default_bpp = 32;
mxsfb_pdata.ld_intf_width = STMLCDIF_18BIT;
+ mxsfb_pdata.sync = MXSFB_SYNC_DATA_ENABLE_HIGH_ACT;
}
static void __init cfa10037_init(void)
mxsfb_pdata.mode_count = ARRAY_SIZE(apf28dev_video_modes);
mxsfb_pdata.default_bpp = 16;
mxsfb_pdata.ld_intf_width = STMLCDIF_16BIT;
+ mxsfb_pdata.sync = MXSFB_SYNC_DATA_ENABLE_HIGH_ACT |
+ MXSFB_SYNC_DOTCLK_FAILING_ACT;
}
static void __init mxs_machine_init(void)
};
static struct clk usb_dc_ck = {
- .name = "usb_dc_ck",
- .ops = &clkops_generic,
- /* Direct from ULPD, no parent */
- .rate = 48000000,
- .enable_reg = OMAP1_IO_ADDRESS(SOFT_REQ_REG),
- .enable_bit = USB_REQ_EN_SHIFT,
-};
-
-static struct clk usb_dc_ck7xx = {
.name = "usb_dc_ck",
.ops = &clkops_generic,
/* Direct from ULPD, no parent */
CLK(NULL, "usb_clko", &usb_clko, CK_16XX | CK_1510 | CK_310),
CLK(NULL, "usb_hhc_ck", &usb_hhc_ck1510, CK_1510 | CK_310),
CLK(NULL, "usb_hhc_ck", &usb_hhc_ck16xx, CK_16XX),
- CLK(NULL, "usb_dc_ck", &usb_dc_ck, CK_16XX),
- CLK(NULL, "usb_dc_ck", &usb_dc_ck7xx, CK_7XX),
+ CLK(NULL, "usb_dc_ck", &usb_dc_ck, CK_16XX | CK_7XX),
CLK(NULL, "mclk", &mclk_1510, CK_1510 | CK_310),
CLK(NULL, "mclk", &mclk_16xx, CK_16XX),
CLK(NULL, "bclk", &bclk_1510, CK_1510 | CK_310),
*/
#define OMAP4_DPLL_ABE_DEFFREQ 98304000
+/*
+ * OMAP4 USB DPLL default frequency. In OMAP4430 TRM version V, section
+ * "3.6.3.9.5 DPLL_USB Preferred Settings" shows that the preferred
+ * locked frequency for the USB DPLL is 960MHz.
+ */
+#define OMAP4_DPLL_USB_DEFFREQ 960000000
+
/* Root clocks */
DEFINE_CLK_FIXED_RATE(extalt_clkin_ck, CLK_IS_ROOT, 59000000, 0x0);
OMAP4430_CM_L3INIT_MMC2_CLKCTRL, OMAP4430_CLKSEL_MASK,
hsmmc1_fclk_parents, func_dmic_abe_gfclk_ops);
+DEFINE_CLK_GATE(ocp2scp_usb_phy_phy_48m, "func_48m_fclk", &func_48m_fclk, 0x0,
+ OMAP4430_CM_L3INIT_USBPHYOCP2SCP_CLKCTRL,
+ OMAP4430_OPTFCLKEN_PHY_48M_SHIFT, 0x0, NULL);
+
DEFINE_CLK_GATE(sha2md5_fck, "l3_div_ck", &l3_div_ck, 0x0,
OMAP4430_CM_L4SEC_SHA2MD51_CLKCTRL,
OMAP4430_MODULEMODE_SWCTRL_SHIFT, 0x0, NULL);
CLK(NULL, "per_mcbsp4_gfclk", &per_mcbsp4_gfclk, CK_443X),
CLK(NULL, "hsmmc1_fclk", &hsmmc1_fclk, CK_443X),
CLK(NULL, "hsmmc2_fclk", &hsmmc2_fclk, CK_443X),
+ CLK(NULL, "ocp2scp_usb_phy_phy_48m", &ocp2scp_usb_phy_phy_48m, CK_443X),
CLK(NULL, "sha2md5_fck", &sha2md5_fck, CK_443X),
CLK(NULL, "slimbus1_fclk_1", &slimbus1_fclk_1, CK_443X),
CLK(NULL, "slimbus1_fclk_0", &slimbus1_fclk_0, CK_443X),
if (rc)
pr_err("%s: failed to configure ABE DPLL!\n", __func__);
+ /*
+ * Lock USB DPLL on OMAP4 devices so that the L3INIT power
+ * domain can transition to retention state when not in use.
+ */
+ rc = clk_set_rate(&dpll_usb_ck, OMAP4_DPLL_USB_DEFFREQ);
+ if (rc)
+ pr_err("%s: failed to configure USB DPLL!\n", __func__);
+
return 0;
}
struct omap_hwmod;
extern int omap_dss_reset(struct omap_hwmod *);
+/* SoC specific clock initializer */
+extern int (*omap_clk_init)(void);
+
#endif /* __ASSEMBLER__ */
#endif /* __ARCH_ARM_MACH_OMAP2PLUS_COMMON_H */
* If the processor type is Cortex-A8 and the revision is 0x0
* it means its Cortex r0p0 which is 3430 ES1.0.
*/
- cpuid = read_cpuid(CPUID_ID);
+ cpuid = read_cpuid_id();
if ((((cpuid >> 4) & 0xfff) == 0xc08) && ((cpuid & 0xf) == 0x0)) {
omap_revision = OMAP3430_REV_ES1_0;
cpu_rev = "1.0";
* Use ARM register to detect the correct ES version
*/
if (!rev && (hawkeye != 0xb94e) && (hawkeye != 0xb975)) {
- idcode = read_cpuid(CPUID_ID);
+ idcode = read_cpuid_id();
rev = (idcode & 0xf) - 1;
}
#include "prm3xxx.h"
#include "prm44xx.h"
+/*
+ * omap_clk_init: points to a function that does the SoC-specific
+ * clock initializations
+ */
+int (*omap_clk_init)(void);
+
/*
* The machine specific code may provide the extra mapping besides the
* default mapping provided here.
omap242x_clockdomains_init();
omap2420_hwmod_init();
omap_hwmod_init_postsetup();
- omap2420_clk_init();
+ omap_clk_init = omap2420_clk_init;
}
void __init omap2420_init_late(void)
omap243x_clockdomains_init();
omap2430_hwmod_init();
omap_hwmod_init_postsetup();
- omap2430_clk_init();
+ omap_clk_init = omap2430_clk_init;
}
void __init omap2430_init_late(void)
omap3xxx_clockdomains_init();
omap3xxx_hwmod_init();
omap_hwmod_init_postsetup();
- omap3xxx_clk_init();
+ omap_clk_init = omap3xxx_clk_init;
}
void __init omap3430_init_early(void)
omap3xxx_clockdomains_init();
omap3xxx_hwmod_init();
omap_hwmod_init_postsetup();
- omap3xxx_clk_init();
+ omap_clk_init = omap3xxx_clk_init;
}
void __init omap3_init_late(void)
am33xx_clockdomains_init();
am33xx_hwmod_init();
omap_hwmod_init_postsetup();
- am33xx_clk_init();
+ omap_clk_init = am33xx_clk_init;
}
#endif
omap44xx_clockdomains_init();
omap44xx_hwmod_init();
omap_hwmod_init_postsetup();
- omap4xxx_clk_init();
+ omap_clk_init = omap4xxx_clk_init;
}
void __init omap4430_init_late(void)
unsigned int boot_cpu = 0;
void __iomem *base = omap_get_wakeupgen_base();
- flush_cache_all();
- dsb();
-
/*
* we're ready for shutdown now, so do it
*/
unsigned int i = 0, ncores = 1, cpu_id;
/* Use ARM cpuid check here, as SoC detection will not work so early */
- cpu_id = read_cpuid(CPUID_ID) & CPU_MASK;
+ cpu_id = read_cpuid_id() & CPU_MASK;
if (cpu_id == CPU_CORTEX_A9) {
/*
* Currently we can't call ioremap here because
}
if (sf & SYSC_HAS_MIDLEMODE) {
- if (oh->flags & HWMOD_SWSUP_MSTANDBY) {
+ if (oh->flags & HWMOD_FORCE_MSTANDBY) {
+ idlemode = HWMOD_IDLEMODE_FORCE;
+ } else if (oh->flags & HWMOD_SWSUP_MSTANDBY) {
idlemode = HWMOD_IDLEMODE_NO;
} else {
if (sf & SYSC_HAS_ENAWAKEUP)
}
if (sf & SYSC_HAS_MIDLEMODE) {
- if (oh->flags & HWMOD_SWSUP_MSTANDBY) {
+ if ((oh->flags & HWMOD_SWSUP_MSTANDBY) ||
+ (oh->flags & HWMOD_FORCE_MSTANDBY)) {
idlemode = HWMOD_IDLEMODE_FORCE;
} else {
if (sf & SYSC_HAS_ENAWAKEUP)
*
* HWMOD_SWSUP_SIDLE: omap_hwmod code should manually bring module in and out
* of idle, rather than relying on module smart-idle
- * HWMOD_SWSUP_MSTDBY: omap_hwmod code should manually bring module in and out
- * of standby, rather than relying on module smart-standby
+ * HWMOD_SWSUP_MSTANDBY: omap_hwmod code should manually bring module in and
+ * out of standby, rather than relying on module smart-standby
* HWMOD_INIT_NO_RESET: don't reset this module at boot - important for
* SDRAM controller, etc. XXX probably belongs outside the main hwmod file
* XXX Should be HWMOD_SETUP_NO_RESET
* correctly, or this is being abused to deal with some PM latency
* issues -- but we're currently suffering from a shortage of
* folks who are able to track these issues down properly.
+ * HWMOD_FORCE_MSTANDBY: Always keep MIDLEMODE bits cleared so that device
+ * is kept in force-standby mode. Failing to do so causes PM problems
+ * with musb on OMAP3630 at least. Note that musb has a dedicated register
+ * to control MSTANDBY signal when MIDLEMODE is set to force-standby.
*/
#define HWMOD_SWSUP_SIDLE (1 << 0)
#define HWMOD_SWSUP_MSTANDBY (1 << 1)
#define HWMOD_16BIT_REG (1 << 8)
#define HWMOD_EXT_OPT_MAIN_CLK (1 << 9)
#define HWMOD_BLOCK_WFI (1 << 10)
+#define HWMOD_FORCE_MSTANDBY (1 << 11)
/*
* omap_hwmod._int_flags definitions
* Erratum ID: i479 idle_req / idle_ack mechanism potentially
* broken when autoidle is enabled
* workaround is to disable the autoidle bit at module level.
+ *
+ * Enabling the device in any other MIDLEMODE setting but force-idle
+ * causes core_pwrdm not enter idle states at least on OMAP3630.
+ * Note that musb has OTG_FORCESTDBY register that controls MSTANDBY
+ * signal when MIDLEMODE is set to force-idle.
*/
.flags = HWMOD_NO_OCP_AUTOIDLE | HWMOD_SWSUP_SIDLE
- | HWMOD_SWSUP_MSTANDBY,
+ | HWMOD_FORCE_MSTANDBY,
};
/* usb_otg_hs */
{ }
};
+static struct omap_hwmod_opt_clk ocp2scp_usb_phy_opt_clks[] = {
+ { .role = "48mhz", .clk = "ocp2scp_usb_phy_phy_48m" },
+};
+
/* ocp2scp_usb_phy */
static struct omap_hwmod omap44xx_ocp2scp_usb_phy_hwmod = {
.name = "ocp2scp_usb_phy",
},
},
.dev_attr = ocp2scp_dev_attr,
+ .opt_clks = ocp2scp_usb_phy_opt_clks,
+ .opt_clks_cnt = ARRAY_SIZE(ocp2scp_usb_phy_opt_clks),
};
/*
clksrc_nr, clksrc_src) \
void __init omap##name##_gptimer_timer_init(void) \
{ \
+ if (omap_clk_init) \
+ omap_clk_init(); \
omap_dmtimer_init(); \
omap2_gp_clockevent_init((clkev_nr), clkev_src, clkev_prop); \
omap2_gptimer_clocksource_init((clksrc_nr), clksrc_src); \
clksrc_nr, clksrc_src) \
void __init omap##name##_sync32k_timer_init(void) \
{ \
+ if (omap_clk_init) \
+ omap_clk_init(); \
omap_dmtimer_init(); \
omap2_gp_clockevent_init((clkev_nr), clkev_src, clkev_prop); \
/* Enable the use of clocksource="gp_timer" kernel parameter */ \
#include <linux/errno.h>
#include <linux/smp.h>
-#include <asm/cacheflush.h>
#include <asm/smp_plat.h>
static inline void platform_do_lowpower(unsigned int cpu)
{
- flush_cache_all();
-
/* we put the platform to just WFI */
for (;;) {
__asm__ __volatile__("dsb\n\t" "wfi\n\t"
#include <linux/errno.h>
#include <linux/smp.h>
-#include <asm/cacheflush.h>
#include <asm/cp15.h>
#include <asm/smp_plat.h>
{
unsigned int v;
- flush_cache_all();
asm volatile(
" mcr p15, 0, %1, c7, c5, 0\n"
" mcr p15, 0, %1, c7, c10, 4\n"
#if defined(CONFIG_CPU_S3C2416)
#define NR_IRQS (IRQ_S3C2416_I2S1 + 1)
-#elif defined(CONFIG_CPU_S3C2443)
-#define NR_IRQS (IRQ_S3C2443_AC97+1)
#else
-#define NR_IRQS (IRQ_S3C2440_AC97+1)
+#define NR_IRQS (IRQ_S3C2443_AC97 + 1)
#endif
/* compatibility define. */
base = (void *)0xfd000000;
intc->reg_mask = base + 0xa4;
- intc->reg_pending = base + 0x08;
+ intc->reg_pending = base + 0xa8;
irq_num = 20;
irq_start = S3C2410_IRQ(32);
irq_offset = 4;
static void sh73a0_cpu_die(unsigned int cpu)
{
- /*
- * The ARM MPcore does not issue a cache coherency request for the L1
- * cache when powering off single CPUs. We must take care of this and
- * further caches.
- */
- dsb();
- flush_cache_all();
-
/* Set power off mode. This takes the CPU out of the MP cluster */
scu_power_mode(scu_base_addr(), SCU_PM_POWEROFF);
#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/smp.h>
-#include <asm/cacheflush.h>
#include <asm/cp15.h>
#include <asm/smp_plat.h>
{
unsigned int v;
- flush_cache_all();
asm volatile(
" mcr p15, 0, %1, c7, c5, 0\n"
" dsb\n"
extern int tegra_cpu_kill(unsigned int cpu);
extern void tegra_cpu_die(unsigned int cpu);
-extern int tegra_cpu_disable(unsigned int cpu);
#include <linux/smp.h>
#include <linux/clk/tegra.h>
-#include <asm/cacheflush.h>
#include <asm/smp_plat.h>
#include "sleep.h"
BUG();
}
-int tegra_cpu_disable(unsigned int cpu)
-{
- /*
- * we don't allow CPU 0 to be shutdown (it is still too special
- * e.g. clock tick interrupts)
- */
- return cpu == 0 ? -EPERM : 0;
-}
-
#ifdef CONFIG_ARCH_TEGRA_2x_SOC
extern void tegra20_hotplug_shutdown(void);
void __init tegra20_hotplug_init(void)
#ifdef CONFIG_HOTPLUG_CPU
.cpu_kill = tegra_cpu_kill,
.cpu_die = tegra_cpu_die,
- .cpu_disable = tegra_cpu_disable,
#endif
};
#endif
struct mmci_platform_data mop500_sdi0_data = {
- .ios_handler = mop500_sdi0_ios_handler,
.ocr_mask = MMC_VDD_29_30,
.f_max = 50000000,
.capabilities = MMC_CAP_4_BIT_DATA |
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/platform_device.h>
+#include <linux/clk.h>
#include <linux/io.h>
#include <linux/i2c.h>
#include <linux/platform_data/i2c-nomadik.h>
regulator_put(prox_regulator);
}
+void mop500_snowball_ethernet_clock_enable(void)
+{
+ struct clk *clk;
+
+ clk = clk_get_sys("fsmc", NULL);
+ if (!IS_ERR(clk))
+ clk_prepare_enable(clk);
+}
+
static struct cryp_platform_data u8500_cryp1_platform_data = {
.mem_to_engine = {
.dir = STEDMA40_MEM_TO_PERIPH,
mop500_audio_init(parent);
mop500_uart_init(parent);
+ mop500_snowball_ethernet_clock_enable();
+
/* This board has full regulator constraints */
regulator_has_full_constraints();
}
void __init snowball_pinmaps_init(void);
void __init hrefv60_pinmaps_init(void);
void mop500_audio_init(struct device *parent);
+void mop500_snowball_ethernet_clock_enable(void);
int __init mop500_uib_init(void);
void mop500_uib_i2c_add(int busnum, struct i2c_board_info *info,
/* Pinmaps must be in place before devices register */
if (of_machine_is_compatible("st-ericsson,mop500"))
mop500_pinmaps_init();
- else if (of_machine_is_compatible("calaosystems,snowball-a9500"))
+ else if (of_machine_is_compatible("calaosystems,snowball-a9500")) {
snowball_pinmaps_init();
- else if (of_machine_is_compatible("st-ericsson,hrefv60+"))
+ mop500_snowball_ethernet_clock_enable();
+ } else if (of_machine_is_compatible("st-ericsson,hrefv60+"))
hrefv60_pinmaps_init();
else if (of_machine_is_compatible("st-ericsson,ccu9540")) {}
/* TODO: Add pinmaps for ccu9540 board. */
#include <linux/errno.h>
#include <linux/smp.h>
-#include <asm/cacheflush.h>
#include <asm/smp_plat.h>
#include <mach/setup.h>
*/
void __ref ux500_cpu_die(unsigned int cpu)
{
- flush_cache_all();
-
/* directly enter low power state, skipping secure registers */
for (;;) {
__asm__ __volatile__("dsb\n\t" "wfi\n\t"
#include <linux/errno.h>
#include <linux/smp.h>
-#include <asm/cacheflush.h>
#include <asm/smp_plat.h>
#include <asm/cp15.h>
{
unsigned int v;
- flush_cache_all();
asm volatile(
"mcr p15, 0, %1, c7, c5, 0\n"
" mcr p15, 0, %1, c7, c10, 4\n"
depends on !MMU
select CPU_32v4T
select CPU_ABRT_LV4T
- select CPU_CACHE_V3 # although the core is v4t
+ select CPU_CACHE_V4
select CPU_CP15_MPU
select CPU_PABRT_LEGACY
help
select CPU_PABRT_V7
select CPU_TLB_V7 if MMU
+config CPU_THUMBONLY
+ bool
+ # There are no CPUs available with MMU that don't implement an ARM ISA:
+ depends on !MMU
+ help
+	  Select this if your CPU doesn't support the 32-bit ARM instruction set.
+
# Figure out what processor architecture version we should be using.
# This defines the compiler instruction set which depends on the machine type.
config CPU_32v3
bool
# The cache model
-config CPU_CACHE_V3
- bool
-
config CPU_CACHE_V4
bool
bool
config ARM_THUMB
- bool "Support Thumb user binaries"
+ bool "Support Thumb user binaries" if !CPU_THUMBONLY
depends on CPU_ARM720T || CPU_ARM740T || CPU_ARM920T || CPU_ARM922T || CPU_ARM925T || CPU_ARM926T || CPU_ARM940T || CPU_ARM946E || CPU_ARM1020 || CPU_ARM1020E || CPU_ARM1022 || CPU_ARM1026 || CPU_XSCALE || CPU_XSC3 || CPU_MOHAWK || CPU_V6 || CPU_V6K || CPU_V7 || CPU_FEROCEON
default y
help
obj-$(CONFIG_CPU_PABRT_V6) += pabort-v6.o
obj-$(CONFIG_CPU_PABRT_V7) += pabort-v7.o
-obj-$(CONFIG_CPU_CACHE_V3) += cache-v3.o
obj-$(CONFIG_CPU_CACHE_V4) += cache-v4.o
obj-$(CONFIG_CPU_CACHE_V4WT) += cache-v4wt.o
obj-$(CONFIG_CPU_CACHE_V4WB) += cache-v4wb.o
return -ENOMEM;
#endif
+#ifdef CONFIG_CPU_CP15
if (cpu_is_v6_unaligned()) {
cr_alignment &= ~CR_A;
cr_no_alignment &= ~CR_A;
set_cr(cr_alignment);
ai_usermode = safe_usermode(ai_usermode, false);
}
+#endif
hook_fault_code(FAULT_CODE_ALIGNMENT, do_alignment, SIGBUS, BUS_ADRALN,
"alignment exception");
outer_cache.inv_range = feroceon_l2_inv_range;
outer_cache.clean_range = feroceon_l2_clean_range;
outer_cache.flush_range = feroceon_l2_flush_range;
+ outer_cache.inv_all = l2_inv_all;
enable_l2();
int lockregs;
int i;
- switch (cache_id) {
+ switch (cache_id & L2X0_CACHE_ID_PART_MASK) {
case L2X0_CACHE_ID_PART_L310:
lockregs = 8;
break;
if (cache_id_part_number_from_dt)
cache_id = cache_id_part_number_from_dt;
else
- cache_id = readl_relaxed(l2x0_base + L2X0_CACHE_ID)
- & L2X0_CACHE_ID_PART_MASK;
+ cache_id = readl_relaxed(l2x0_base + L2X0_CACHE_ID);
aux = readl_relaxed(l2x0_base + L2X0_AUX_CTRL);
aux &= aux_mask;
aux |= aux_val;
/* Determine the number of ways */
- switch (cache_id) {
+ switch (cache_id & L2X0_CACHE_ID_PART_MASK) {
case L2X0_CACHE_ID_PART_L310:
if (aux & (1 << 16))
ways = 16;
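Keeping the unmasked ID around is what allows masking per use site. A minimal sketch (assuming the standard L2X0_CACHE_ID_RTL_MASK definition from the same header) of what the full value now preserves:

	u32 id = readl_relaxed(l2x0_base + L2X0_CACHE_ID);
	u32 part = id & L2X0_CACHE_ID_PART_MASK;	/* part number, matched above */
	u32 rtl = id & L2X0_CACHE_ID_RTL_MASK;		/* RTL revision, no longer discarded */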
.flush_all = l2x0_flush_all,
.inv_all = l2x0_inv_all,
.disable = l2x0_disable,
- .set_debug = pl310_set_debug,
},
};
data->save();
of_init = true;
- l2x0_init(l2x0_base, aux_val, aux_mask);
-
memcpy(&outer_cache, &data->outer_cache, sizeof(outer_cache));
+ l2x0_init(l2x0_base, aux_val, aux_mask);
return 0;
}
+++ /dev/null
-/*
- * linux/arch/arm/mm/cache-v3.S
- *
- * Copyright (C) 1997-2002 Russell king
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-#include <linux/linkage.h>
-#include <linux/init.h>
-#include <asm/page.h>
-#include "proc-macros.S"
-
-/*
- * flush_icache_all()
- *
- * Unconditionally clean and invalidate the entire icache.
- */
-ENTRY(v3_flush_icache_all)
- mov pc, lr
-ENDPROC(v3_flush_icache_all)
-
-/*
- * flush_user_cache_all()
- *
- * Invalidate all cache entries in a particular address
- * space.
- *
- * - mm - mm_struct describing address space
- */
-ENTRY(v3_flush_user_cache_all)
- /* FALLTHROUGH */
-/*
- * flush_kern_cache_all()
- *
- * Clean and invalidate the entire cache.
- */
-ENTRY(v3_flush_kern_cache_all)
- /* FALLTHROUGH */
-
-/*
- * flush_user_cache_range(start, end, flags)
- *
- * Invalidate a range of cache entries in the specified
- * address space.
- *
- * - start - start address (may not be aligned)
- * - end - end address (exclusive, may not be aligned)
- * - flags - vma_area_struct flags describing address space
- */
-ENTRY(v3_flush_user_cache_range)
- mov ip, #0
- mcreq p15, 0, ip, c7, c0, 0 @ flush ID cache
- mov pc, lr
-
-/*
- * coherent_kern_range(start, end)
- *
- * Ensure coherency between the Icache and the Dcache in the
- * region described by start. If you have non-snooping
- * Harvard caches, you need to implement this function.
- *
- * - start - virtual start address
- * - end - virtual end address
- */
-ENTRY(v3_coherent_kern_range)
- /* FALLTHROUGH */
-
-/*
- * coherent_user_range(start, end)
- *
- * Ensure coherency between the Icache and the Dcache in the
- * region described by start. If you have non-snooping
- * Harvard caches, you need to implement this function.
- *
- * - start - virtual start address
- * - end - virtual end address
- */
-ENTRY(v3_coherent_user_range)
- mov r0, #0
- mov pc, lr
-
-/*
- * flush_kern_dcache_area(void *page, size_t size)
- *
- * Ensure no D cache aliasing occurs, either with itself or
- * the I cache
- *
- * - addr - kernel address
- * - size - region size
- */
-ENTRY(v3_flush_kern_dcache_area)
- /* FALLTHROUGH */
-
-/*
- * dma_flush_range(start, end)
- *
- * Clean and invalidate the specified virtual address range.
- *
- * - start - virtual start address
- * - end - virtual end address
- */
-ENTRY(v3_dma_flush_range)
- mov r0, #0
- mcr p15, 0, r0, c7, c0, 0 @ flush ID cache
- mov pc, lr
-
-/*
- * dma_unmap_area(start, size, dir)
- * - start - kernel virtual start address
- * - size - size of region
- * - dir - DMA direction
- */
-ENTRY(v3_dma_unmap_area)
- teq r2, #DMA_TO_DEVICE
- bne v3_dma_flush_range
- /* FALLTHROUGH */
-
-/*
- * dma_map_area(start, size, dir)
- * - start - kernel virtual start address
- * - size - size of region
- * - dir - DMA direction
- */
-ENTRY(v3_dma_map_area)
- mov pc, lr
-ENDPROC(v3_dma_unmap_area)
-ENDPROC(v3_dma_map_area)
-
- .globl v3_flush_kern_cache_louis
- .equ v3_flush_kern_cache_louis, v3_flush_kern_cache_all
-
- __INITDATA
-
- @ define struct cpu_cache_fns (see <asm/cacheflush.h> and proc-macros.S)
- define_cache_functions v3
ENTRY(v4_flush_user_cache_range)
#ifdef CONFIG_CPU_CP15
mov ip, #0
- mcreq p15, 0, ip, c7, c7, 0 @ flush ID cache
+ mcr p15, 0, ip, c7, c7, 0 @ flush ID cache
mov pc, lr
#else
/* FALLTHROUGH */
static atomic64_t asid_generation = ATOMIC64_INIT(ASID_FIRST_VERSION);
static DECLARE_BITMAP(asid_map, NUM_USER_ASIDS);
-static DEFINE_PER_CPU(atomic64_t, active_asids);
+DEFINE_PER_CPU(atomic64_t, active_asids);
static DEFINE_PER_CPU(u64, reserved_asids);
static cpumask_t tlb_flush_pending;
if (cpumask_test_and_clear_cpu(cpu, &tlb_flush_pending)) {
local_flush_bp_all();
local_flush_tlb_all();
+ dummy_flush_tlb_a15_erratum();
}
atomic64_set(&per_cpu(active_asids, cpu), asid);
if (PageHighMem(page)) {
if (len + offset > PAGE_SIZE)
len = PAGE_SIZE - offset;
- vaddr = kmap_high_get(page);
- if (vaddr) {
- vaddr += offset;
- op(vaddr, len, dir);
- kunmap_high(page);
- } else if (cache_is_vipt()) {
- /* unmapped pages might still be cached */
+
+ if (cache_is_vipt_nonaliasing()) {
vaddr = kmap_atomic(page);
op(vaddr + offset, len, dir);
kunmap_atomic(vaddr);
+ } else {
+ vaddr = kmap_high_get(page);
+ if (vaddr) {
+ op(vaddr + offset, len, dir);
+ kunmap_high(page);
+ }
}
} else {
vaddr = page_address(page) + offset;
if (!PageHighMem(page)) {
__cpuc_flush_dcache_area(page_address(page), PAGE_SIZE);
} else {
- void *addr = kmap_high_get(page);
- if (addr) {
- __cpuc_flush_dcache_area(addr, PAGE_SIZE);
- kunmap_high(page);
- } else if (cache_is_vipt()) {
- /* unmapped pages might still be cached */
+ void *addr;
+
+ if (cache_is_vipt_nonaliasing()) {
addr = kmap_atomic(page);
__cpuc_flush_dcache_area(addr, PAGE_SIZE);
kunmap_atomic(addr);
+ } else {
+ addr = kmap_high_get(page);
+ if (addr) {
+ __cpuc_flush_dcache_area(addr, PAGE_SIZE);
+ kunmap_high(page);
+ }
}
}
#include <asm/mach/pci.h>
#include "mm.h"
+#include "tcm.h"
/*
* empty_zero_page is a special page that is used for
}
};
+#ifdef CONFIG_CPU_CP15
/*
* These are useful for identifying cache coherency
* problems by allowing the cache or the cache and
}
#endif
+#else /* ifdef CONFIG_CPU_CP15 */
+
+static int __init early_cachepolicy(char *p)
+{
+	pr_warning("cachepolicy kernel parameter not supported without cp15\n");
+	return 0;
+}
+early_param("cachepolicy", early_cachepolicy);
+
+static int __init noalign_setup(char *__unused)
+{
+	pr_warning("noalign kernel parameter not supported without cp15\n");
+	return 1;
+}
+__setup("noalign", noalign_setup);
+
+#endif /* ifdef CONFIG_CPU_CP15 / else */
+
#define PROT_PTE_DEVICE L_PTE_PRESENT|L_PTE_YOUNG|L_PTE_DIRTY|L_PTE_XN
#define PROT_SECT_DEVICE PMD_TYPE_SECT|PMD_SECT_AP_WRITE
} while (pte++, addr += PAGE_SIZE, addr != end);
}
-static void __init alloc_init_section(pud_t *pud, unsigned long addr,
- unsigned long end, phys_addr_t phys,
- const struct mem_type *type)
+static void __init map_init_section(pmd_t *pmd, unsigned long addr,
+ unsigned long end, phys_addr_t phys,
+ const struct mem_type *type)
{
- pmd_t *pmd = pmd_offset(pud, addr);
-
+#ifndef CONFIG_ARM_LPAE
/*
- * Try a section mapping - end, addr and phys must all be aligned
- * to a section boundary. Note that PMDs refer to the individual
- * L1 entries, whereas PGDs refer to a group of L1 entries making
- * up one logical pointer to an L2 table.
+	 * In classic MMU format, puds and pmds are folded into
+	 * the pgds. pmd_offset gives the PGD entry. PGDs refer to a
+	 * group of L1 entries making up one logical pointer to
+	 * an L2 table (2MB), whereas PMDs refer to the individual
+	 * L1 entries (1MB). Hence increment to get the correct
+	 * offset for odd 1MB sections.
+	 * (See arch/arm/include/asm/pgtable-2level.h)
*/
- if (type->prot_sect && ((addr | end | phys) & ~SECTION_MASK) == 0) {
- pmd_t *p = pmd;
-
-#ifndef CONFIG_ARM_LPAE
- if (addr & SECTION_SIZE)
- pmd++;
+ if (addr & SECTION_SIZE)
+ pmd++;
#endif
+ do {
+ *pmd = __pmd(phys | type->prot_sect);
+ phys += SECTION_SIZE;
+ } while (pmd++, addr += SECTION_SIZE, addr != end);
- do {
- *pmd = __pmd(phys | type->prot_sect);
- phys += SECTION_SIZE;
- } while (pmd++, addr += SECTION_SIZE, addr != end);
+ flush_pmd_entry(pmd);
+}
- flush_pmd_entry(p);
- } else {
+static void __init alloc_init_pmd(pud_t *pud, unsigned long addr,
+ unsigned long end, phys_addr_t phys,
+ const struct mem_type *type)
+{
+ pmd_t *pmd = pmd_offset(pud, addr);
+ unsigned long next;
+
+ do {
/*
- * No need to loop; pte's aren't interested in the
- * individual L1 entries.
+ * With LPAE, we must loop over to map
+ * all the pmds for the given range.
*/
- alloc_init_pte(pmd, addr, end, __phys_to_pfn(phys), type);
- }
+ next = pmd_addr_end(addr, end);
+
+ /*
+ * Try a section mapping - addr, next and phys must all be
+ * aligned to a section boundary.
+ */
+ if (type->prot_sect &&
+ ((addr | next | phys) & ~SECTION_MASK) == 0) {
+ map_init_section(pmd, addr, next, phys, type);
+ } else {
+ alloc_init_pte(pmd, addr, next,
+ __phys_to_pfn(phys), type);
+ }
+
+ phys += next - addr;
+
+ } while (pmd++, addr = next, addr != end);
}
static void __init alloc_init_pud(pgd_t *pgd, unsigned long addr,
do {
next = pud_addr_end(addr, end);
- alloc_init_section(pud, addr, next, phys, type);
+ alloc_init_pmd(pud, addr, next, phys, type);
phys += next - addr;
} while (pud++, addr = next, addr != end);
}
dma_contiguous_remap();
devicemaps_init(mdesc);
kmap_init();
+ tcm_init();
top_pmd = pmd_off_k(0xffff0000);
mcr p15, 0, r0, c6, c0 @ set area 0, default
ldr r0, =(CONFIG_DRAM_BASE & 0xFFFFF000) @ base[31:12] of RAM
- ldr r1, =(CONFIG_DRAM_SIZE >> 12) @ size of RAM (must be >= 4KB)
- mov r2, #10 @ 11 is the minimum (4KB)
-1: add r2, r2, #1 @ area size *= 2
- mov r1, r1, lsr #1
+ ldr r3, =(CONFIG_DRAM_SIZE >> 12) @ size of RAM (must be >= 4KB)
+ mov r4, #10 @ 11 is the minimum (4KB)
+1: add r4, r4, #1 @ area size *= 2
+ movs r3, r3, lsr #1
bne 1b @ count not zero r-shift
- orr r0, r0, r2, lsl #1 @ the area register value
+ orr r0, r0, r4, lsl #1 @ the area register value
orr r0, r0, #1 @ set enable bit
mcr p15, 0, r0, c6, c1 @ set area 1, RAM
ldr r0, =(CONFIG_FLASH_MEM_BASE & 0xFFFFF000) @ base[31:12] of FLASH
- ldr r1, =(CONFIG_FLASH_SIZE >> 12) @ size of FLASH (must be >= 4KB)
- mov r2, #10 @ 11 is the minimum (4KB)
-1: add r2, r2, #1 @ area size *= 2
- mov r1, r1, lsr #1
+ ldr r3, =(CONFIG_FLASH_SIZE >> 12) @ size of FLASH (must be >= 4KB)
+ cmp r3, #0
+ moveq r0, #0
+ beq 2f
+ mov r4, #10 @ 11 is the minimum (4KB)
+1: add r4, r4, #1 @ area size *= 2
+ movs r3, r3, lsr #1
bne 1b @ count not zero r-shift
- orr r0, r0, r2, lsl #1 @ the area register value
+ orr r0, r0, r4, lsl #1 @ the area register value
orr r0, r0, #1 @ set enable bit
- mcr p15, 0, r0, c6, c2 @ set area 2, ROM/FLASH
+2: mcr p15, 0, r0, c6, c2 @ set area 2, ROM/FLASH
mov r0, #0x06
mcr p15, 0, r0, c2, c0 @ Region 1&2 cacheable
.long 0x41807400
.long 0xfffffff0
.long 0
+ .long 0
b __arm740_setup
.long cpu_arch_name
.long cpu_elf_name
- .long HWCAP_SWP | HWCAP_HALF | HWCAP_26BIT
+ .long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB | HWCAP_26BIT
.long cpu_arm740_name
.long arm740_processor_functions
.long 0
.long 0
- .long v3_cache_fns @ cache model
+ .long v4_cache_fns @ cache model
.size __arm740_proc_info, . - __arm740_proc_info
/* Suspend/resume support: taken from arch/arm/plat-s3c24xx/sleep.S */
.globl cpu_arm920_suspend_size
.equ cpu_arm920_suspend_size, 4 * 3
-#ifdef CONFIG_PM_SLEEP
+#ifdef CONFIG_ARM_CPU_SUSPEND
ENTRY(cpu_arm920_do_suspend)
stmfd sp!, {r4 - r6, lr}
mrc p15, 0, r4, c13, c0, 0 @ PID
/* Suspend/resume support: taken from arch/arm/plat-s3c24xx/sleep.S */
.globl cpu_arm926_suspend_size
.equ cpu_arm926_suspend_size, 4 * 3
-#ifdef CONFIG_PM_SLEEP
+#ifdef CONFIG_ARM_CPU_SUSPEND
ENTRY(cpu_arm926_do_suspend)
stmfd sp!, {r4 - r6, lr}
mrc p15, 0, r4, c13, c0, 0 @ PID
.globl cpu_mohawk_suspend_size
.equ cpu_mohawk_suspend_size, 4 * 6
-#ifdef CONFIG_PM_SLEEP
+#ifdef CONFIG_ARM_CPU_SUSPEND
ENTRY(cpu_mohawk_do_suspend)
stmfd sp!, {r4 - r9, lr}
mrc p14, 0, r4, c6, c0, 0 @ clock configuration, for turbo mode
.globl cpu_sa1100_suspend_size
.equ cpu_sa1100_suspend_size, 4 * 3
-#ifdef CONFIG_PM_SLEEP
+#ifdef CONFIG_ARM_CPU_SUSPEND
ENTRY(cpu_sa1100_do_suspend)
stmfd sp!, {r4 - r6, lr}
mrc p15, 0, r4, c3, c0, 0 @ domain ID
#ifndef MULTI_CPU
EXPORT_SYMBOL(cpu_dcache_clean_area);
+#ifdef CONFIG_MMU
EXPORT_SYMBOL(cpu_set_pte_ext);
+#endif
#else
EXPORT_SYMBOL(processor);
#endif
mov pc, lr
ENTRY(cpu_v6_dcache_clean_area)
-#ifndef TLB_CAN_READ_FROM_L1_CACHE
1: mcr p15, 0, r0, c7, c10, 1 @ clean D entry
add r0, r0, #D_CACHE_LINE_SIZE
subs r1, r1, #D_CACHE_LINE_SIZE
bhi 1b
-#endif
mov pc, lr
/*
/* Suspend/resume support: taken from arch/arm/mach-s3c64xx/sleep.S */
.globl cpu_v6_suspend_size
.equ cpu_v6_suspend_size, 4 * 6
-#ifdef CONFIG_PM_SLEEP
+#ifdef CONFIG_ARM_CPU_SUSPEND
ENTRY(cpu_v6_do_suspend)
stmfd sp!, {r4 - r9, lr}
mrc p15, 0, r4, c13, c0, 0 @ FCSE/PID
ARM( str r3, [r0, #2048]! )
THUMB( add r0, r0, #2048 )
THUMB( str r3, [r0] )
- mcr p15, 0, r0, c7, c10, 1 @ flush_pte
+ ALT_SMP(mov pc,lr)
+ ALT_UP (mcr p15, 0, r0, c7, c10, 1) @ flush_pte
#endif
mov pc, lr
ENDPROC(cpu_v7_set_pte_ext)
tst r3, #1 << (55 - 32) @ L_PTE_DIRTY
orreq r2, #L_PTE_RDONLY
1: strd r2, r3, [r0]
- mcr p15, 0, r0, c7, c10, 1 @ flush_pte
+ ALT_SMP(mov pc, lr)
+ ALT_UP (mcr p15, 0, r0, c7, c10, 1) @ flush_pte
#endif
mov pc, lr
ENDPROC(cpu_v7_set_pte_ext)
ENDPROC(cpu_v7_do_idle)
ENTRY(cpu_v7_dcache_clean_area)
-#ifndef TLB_CAN_READ_FROM_L1_CACHE
+ ALT_SMP(mov pc, lr) @ MP extensions imply L1 PTW
+ ALT_UP(W(nop))
dcache_line_size r2, r3
1: mcr p15, 0, r0, c7, c10, 1 @ clean D entry
add r0, r0, r2
subs r1, r1, r2
bhi 1b
dsb
-#endif
mov pc, lr
ENDPROC(cpu_v7_dcache_clean_area)
__v7_proc __v7_ca9mp_setup
.size __v7_ca9mp_proc_info, . - __v7_ca9mp_proc_info
+#endif /* CONFIG_ARM_LPAE */
+
/*
* Marvell PJ4B processor.
*/
.long 0xfffffff0
__v7_proc __v7_pj4b_setup
.size __v7_pj4b_proc_info, . - __v7_pj4b_proc_info
-#endif /* CONFIG_ARM_LPAE */
/*
* ARM Ltd. Cortex A7 processor.
__v7_ca7mp_proc_info:
.long 0x410fc070
.long 0xff0ffff0
- __v7_proc __v7_ca7mp_setup, hwcaps = HWCAP_IDIV
+ __v7_proc __v7_ca7mp_setup
.size __v7_ca7mp_proc_info, . - __v7_ca7mp_proc_info
/*
__v7_ca15mp_proc_info:
.long 0x410fc0f0
.long 0xff0ffff0
- __v7_proc __v7_ca15mp_setup, hwcaps = HWCAP_IDIV
+ __v7_proc __v7_ca15mp_setup
.size __v7_ca15mp_proc_info, . - __v7_ca15mp_proc_info
+ /*
+ * Qualcomm Inc. Krait processors.
+ */
+ .type __krait_proc_info, #object
+__krait_proc_info:
+ .long 0x510f0400 @ Required ID value
+ .long 0xff0ffc00 @ Mask for ID
+ /*
+ * Some Krait processors don't indicate support for SDIV and UDIV
+ * instructions in the ARM instruction set, even though they actually
+ * do support them.
+ */
+ __v7_proc __v7_setup, hwcaps = HWCAP_IDIV
+ .size __krait_proc_info, . - __krait_proc_info
+
/*
* Match any ARMv7 processor core.
*/
.globl cpu_xsc3_suspend_size
.equ cpu_xsc3_suspend_size, 4 * 6
-#ifdef CONFIG_PM_SLEEP
+#ifdef CONFIG_ARM_CPU_SUSPEND
ENTRY(cpu_xsc3_do_suspend)
stmfd sp!, {r4 - r9, lr}
mrc p14, 0, r4, c6, c0, 0 @ clock configuration, for turbo mode
.globl cpu_xscale_suspend_size
.equ cpu_xscale_suspend_size, 4 * 6
-#ifdef CONFIG_PM_SLEEP
+#ifdef CONFIG_ARM_CPU_SUSPEND
ENTRY(cpu_xscale_do_suspend)
stmfd sp!, {r4 - r9, lr}
mrc p14, 0, r4, c6, c0, 0 @ clock configuration, for turbo mode
--- /dev/null
+/*
+ * Copyright (C) 2008-2009 ST-Ericsson AB
+ * License terms: GNU General Public License (GPL) version 2
+ * TCM memory handling for ARM systems
+ *
+ * Author: Linus Walleij <linus.walleij@stericsson.com>
+ * Author: Rickard Andersson <rickard.andersson@stericsson.com>
+ */
+
+#ifdef CONFIG_HAVE_TCM
+void __init tcm_init(void);
+#else
+/* No TCM support, just blank inlines to be optimized out */
+static inline void tcm_init(void)
+{
+}
+#endif
void __iomem * __init early_io_map(phys_addr_t phys, unsigned long virt)
{
unsigned long size, mask;
- bool page64k = IS_ENABLED(ARM64_64K_PAGES);
+ bool page64k = IS_ENABLED(CONFIG_ARM64_64K_PAGES);
pgd_t *pgd;
pud_t *pud;
pmd_t *pmd;
#define readw_be __raw_readw
#define readl_be __raw_readl
+#define writeb_relaxed writeb
+#define writew_relaxed writew
+#define writel_relaxed writel
+
#define writeb_be __raw_writeb
#define writew_be __raw_writew
#define writel_be __raw_writel
/* set interrupt enabled status */
static inline void arch_local_irq_restore(unsigned long flags)
{
- asm volatile (" mvc .s2 %0,CSR\n" : : "b"(flags));
+ asm volatile (" mvc .s2 %0,CSR\n" : : "b"(flags) : "memory");
}
/* unconditionally enable interrupts */
#define NR_PALINFO_ENTRIES (int) ARRAY_SIZE(palinfo_entries)
-/*
- * this array is used to keep track of the proc entries we create. This is
- * required in the module mode when we need to remove all entries. The procfs code
- * does not do recursion of deletion
- *
- * Notes:
- * - +1 accounts for the cpuN directory entry in /proc/pal
- */
-#define NR_PALINFO_PROC_ENTRIES (NR_CPUS*(NR_PALINFO_ENTRIES+1))
-
-static struct proc_dir_entry *palinfo_proc_entries[NR_PALINFO_PROC_ENTRIES];
static struct proc_dir_entry *palinfo_dir;
/*
static void __cpuinit
create_palinfo_proc_entries(unsigned int cpu)
{
-# define CPUSTR "cpu%d"
-
pal_func_cpu_u_t f;
- struct proc_dir_entry **pdir;
struct proc_dir_entry *cpu_dir;
int j;
- char cpustr[sizeof(CPUSTR)];
-
-
- /*
- * we keep track of created entries in a depth-first order for
- * cleanup purposes. Each entry is stored into palinfo_proc_entries
- */
- sprintf(cpustr,CPUSTR, cpu);
+ char cpustr[3+4+1]; /* cpu numbers are up to 4095 on itanic */
+ sprintf(cpustr, "cpu%d", cpu);
cpu_dir = proc_mkdir(cpustr, palinfo_dir);
+ if (!cpu_dir)
+ return;
f.req_cpu = cpu;
- /*
- * Compute the location to store per cpu entries
- * We dont store the top level entry in this list, but
- * remove it finally after removing all cpu entries.
- */
- pdir = &palinfo_proc_entries[cpu*(NR_PALINFO_ENTRIES+1)];
- *pdir++ = cpu_dir;
for (j=0; j < NR_PALINFO_ENTRIES; j++) {
f.func_id = j;
- *pdir = create_proc_read_entry(
- palinfo_entries[j].name, 0, cpu_dir,
- palinfo_read_entry, (void *)f.value);
- pdir++;
+ create_proc_read_entry(
+ palinfo_entries[j].name, 0, cpu_dir,
+ palinfo_read_entry, (void *)f.value);
}
}
static void
remove_palinfo_proc_entries(unsigned int hcpu)
{
- int j;
- struct proc_dir_entry *cpu_dir, **pdir;
-
- pdir = &palinfo_proc_entries[hcpu*(NR_PALINFO_ENTRIES+1)];
- cpu_dir = *pdir;
- *pdir++=NULL;
- for (j=0; j < (NR_PALINFO_ENTRIES); j++) {
- if ((*pdir)) {
- remove_proc_entry ((*pdir)->name, cpu_dir);
- *pdir ++= NULL;
- }
- }
-
- if (cpu_dir) {
- remove_proc_entry(cpu_dir->name, palinfo_dir);
- }
+ char cpustr[3+4+1]; /* cpu numbers are up to 4095 on itanic */
+ sprintf(cpustr, "cpu%d", hcpu);
+ remove_proc_subtree(cpustr, palinfo_dir);
}
static int __cpuinit palinfo_cpu_callback(struct notifier_block *nfb,
printk(KERN_INFO "PAL Information Facility v%s\n", PALINFO_VERSION);
palinfo_dir = proc_mkdir("pal", NULL);
+ if (!palinfo_dir)
+ return -ENOMEM;
/* Create palinfo dirs in /proc for all online cpus */
for_each_online_cpu(i) {
static void __exit
palinfo_exit(void)
{
- int i = 0;
-
- /* remove all nodes: depth first pass. Could optimize this */
- for_each_online_cpu(i) {
- remove_palinfo_proc_entries(i);
- }
-
- /*
- * Remove the top level entry finally
- */
- remove_proc_entry(palinfo_dir->name, NULL);
-
- /*
- * Unregister from cpu notifier callbacks
- */
unregister_hotcpu_notifier(&palinfo_cpu_notifier);
+ remove_proc_subtree("pal", NULL);
}
module_init(palinfo_init);
}
if (!need_resched()) {
- void (*idle)(void);
#ifdef CONFIG_SMP
min_xtp();
#endif
if (mark_idle)
(*mark_idle)(1);
- if (!idle)
- idle = default_idle;
- (*idle)();
+ default_idle();
if (mark_idle)
(*mark_idle)(0);
#ifdef CONFIG_SMP
return gpio < MCFGPIO_PIN_MAX ? 0 : __gpio_cansleep(gpio);
}
+static inline int gpio_request_one(unsigned gpio, unsigned long flags, const char *label)
+{
+ int err;
+
+ err = gpio_request(gpio, label);
+ if (err)
+ return err;
+
+ if (flags & GPIOF_DIR_IN)
+ err = gpio_direction_input(gpio);
+ else
+ err = gpio_direction_output(gpio,
+ (flags & GPIOF_INIT_HIGH) ? 1 : 0);
+
+ if (err)
+ gpio_free(gpio);
+
+ return err;
+}
+
#endif
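A hedged usage sketch of the helper above (the GPIO number and label here are illustrative, not taken from the patch): because the helper frees the GPIO itself when setting the direction fails, the caller needs no unwind path:

	err = gpio_request_one(42, GPIOF_DIR_IN, "card-detect");
	if (err)
		return err;	/* GPIO already released by gpio_request_one() */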
select HAVE_KRETPROBES
select HAVE_DEBUG_KMEMLEAK
select ARCH_BINFMT_ELF_RANDOMIZE_PIE
- select HAVE_ARCH_TRANSPARENT_HUGEPAGE
+ select HAVE_ARCH_TRANSPARENT_HUGEPAGE if CPU_SUPPORTS_HUGEPAGES && 64BIT
select RTC_LIB if !MACH_LOONGSON
select GENERIC_ATOMIC64 if !64BIT
select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
bool "SNI RM200/300/400"
select FW_ARC if CPU_LITTLE_ENDIAN
select FW_ARC32 if CPU_LITTLE_ENDIAN
- select SNIPROM if CPU_BIG_ENDIAN
+ select FW_SNIPROM if CPU_BIG_ENDIAN
select ARCH_MAY_HAVE_PC_FDC
select BOOT_ELF32
select CEVT_R4K
config FW_ARC32
bool
-config SNIPROM
+config FW_SNIPROM
bool
config BOOT_ELF32
select CPU_SUPPORTS_32BIT_KERNEL
select CPU_SUPPORTS_64BIT_KERNEL
select CPU_SUPPORTS_HIGHMEM
- select CPU_HAS_LLSC
select WEAK_ORDERING
select WEAK_REORDERING_BEYOND_LLSC
select CPU_HAS_PREFETCH
strcpy(cfe_version, "unknown");
printk(KERN_INFO PFX "CFE version: %s\n", cfe_version);
- if (bcm63xx_nvram_init(boot_addr + BCM963XX_NVRAM_OFFSET)) {
- printk(KERN_ERR PFX "invalid nvram checksum\n");
- return;
- }
+ bcm63xx_nvram_init(boot_addr + BCM963XX_NVRAM_OFFSET);
board_name = bcm63xx_nvram_get_name();
/* find board by name */
static struct bcm963xx_nvram nvram;
static int mac_addr_used;
-int __init bcm63xx_nvram_init(void *addr)
+void __init bcm63xx_nvram_init(void *addr)
{
unsigned int check_len;
u32 crc, expected_crc;
crc = crc32_le(~0, (u8 *)&nvram, check_len);
if (crc != expected_crc)
- return -EINVAL;
-
- return 0;
+ pr_warn("nvram checksum failed, contents may be invalid (expected %08x, got %08x)\n",
+ expected_crc, crc);
}
u8 *bcm63xx_nvram_get_name(void)
return board_register_devices();
}
-device_initcall(bcm63xx_register_devices);
+arch_initcall(bcm63xx_register_devices);
static void octeon_generic_shutdown(void)
{
- int cpu, i;
+ int i;
+#ifdef CONFIG_SMP
+ int cpu;
+#endif
struct cvmx_bootmem_desc *bootmem_desc;
void *named_block_array_ptr;
*
 * Initializes the local nvram copy from the target address and checks
* its checksum.
- *
- * Returns 0 on success.
*/
-int __init bcm63xx_nvram_init(void *nvram);
+void bcm63xx_nvram_init(void *nvram);
/**
* bcm63xx_nvram_get_name() - returns the board name according to nvram
/* #define cpu_has_prefetch ? */
#define cpu_has_mcheck 1
/* #define cpu_has_ejtag ? */
-#ifdef CONFIG_CPU_HAS_LLSC
#define cpu_has_llsc 1
-#else
-#define cpu_has_llsc 0
-#endif
/* #define cpu_has_vtag_icache ? */
/* #define cpu_has_dc_aliases ? */
/* #define cpu_has_ic_fills_f_dc ? */
unsigned int __dspctl; \
\
__asm__ __volatile__( \
+ " .set push \n" \
+ " .set dsp \n" \
" rddsp %0, %x1 \n" \
+ " .set pop \n" \
: "=r" (__dspctl) \
: "i" (mask)); \
__dspctl; \
#define wrdsp(val, mask) \
do { \
__asm__ __volatile__( \
+ " .set push \n" \
+ " .set dsp \n" \
" wrdsp %0, %x1 \n" \
+ " .set pop \n" \
: \
: "r" (val), "i" (mask)); \
} while (0)
-#define mflo0() ({ long mflo0; __asm__("mflo %0, $ac0" : "=r" (mflo0)); mflo0;})
-#define mflo1() ({ long mflo1; __asm__("mflo %0, $ac1" : "=r" (mflo1)); mflo1;})
-#define mflo2() ({ long mflo2; __asm__("mflo %0, $ac2" : "=r" (mflo2)); mflo2;})
-#define mflo3() ({ long mflo3; __asm__("mflo %0, $ac3" : "=r" (mflo3)); mflo3;})
-
-#define mfhi0() ({ long mfhi0; __asm__("mfhi %0, $ac0" : "=r" (mfhi0)); mfhi0;})
-#define mfhi1() ({ long mfhi1; __asm__("mfhi %0, $ac1" : "=r" (mfhi1)); mfhi1;})
-#define mfhi2() ({ long mfhi2; __asm__("mfhi %0, $ac2" : "=r" (mfhi2)); mfhi2;})
-#define mfhi3() ({ long mfhi3; __asm__("mfhi %0, $ac3" : "=r" (mfhi3)); mfhi3;})
-
-#define mtlo0(x) __asm__("mtlo %0, $ac0" ::"r" (x))
-#define mtlo1(x) __asm__("mtlo %0, $ac1" ::"r" (x))
-#define mtlo2(x) __asm__("mtlo %0, $ac2" ::"r" (x))
-#define mtlo3(x) __asm__("mtlo %0, $ac3" ::"r" (x))
-
-#define mthi0(x) __asm__("mthi %0, $ac0" ::"r" (x))
-#define mthi1(x) __asm__("mthi %0, $ac1" ::"r" (x))
-#define mthi2(x) __asm__("mthi %0, $ac2" ::"r" (x))
-#define mthi3(x) __asm__("mthi %0, $ac3" ::"r" (x))
+#define mflo0() \
+({ \
+ long mflo0; \
+ __asm__( \
+ " .set push \n" \
+ " .set dsp \n" \
+ " mflo %0, $ac0 \n" \
+ " .set pop \n" \
+ : "=r" (mflo0)); \
+ mflo0; \
+})
+
+#define mflo1() \
+({ \
+ long mflo1; \
+ __asm__( \
+ " .set push \n" \
+ " .set dsp \n" \
+ " mflo %0, $ac1 \n" \
+ " .set pop \n" \
+ : "=r" (mflo1)); \
+ mflo1; \
+})
+
+#define mflo2() \
+({ \
+ long mflo2; \
+ __asm__( \
+ " .set push \n" \
+ " .set dsp \n" \
+ " mflo %0, $ac2 \n" \
+ " .set pop \n" \
+ : "=r" (mflo2)); \
+ mflo2; \
+})
+
+#define mflo3() \
+({ \
+ long mflo3; \
+ __asm__( \
+ " .set push \n" \
+ " .set dsp \n" \
+ " mflo %0, $ac3 \n" \
+ " .set pop \n" \
+ : "=r" (mflo3)); \
+ mflo3; \
+})
+
+#define mfhi0() \
+({ \
+ long mfhi0; \
+ __asm__( \
+ " .set push \n" \
+ " .set dsp \n" \
+ " mfhi %0, $ac0 \n" \
+ " .set pop \n" \
+ : "=r" (mfhi0)); \
+ mfhi0; \
+})
+
+#define mfhi1() \
+({ \
+ long mfhi1; \
+ __asm__( \
+ " .set push \n" \
+ " .set dsp \n" \
+ " mfhi %0, $ac1 \n" \
+ " .set pop \n" \
+ : "=r" (mfhi1)); \
+ mfhi1; \
+})
+
+#define mfhi2() \
+({ \
+ long mfhi2; \
+ __asm__( \
+ " .set push \n" \
+ " .set dsp \n" \
+ " mfhi %0, $ac2 \n" \
+ " .set pop \n" \
+ : "=r" (mfhi2)); \
+ mfhi2; \
+})
+
+#define mfhi3() \
+({ \
+ long mfhi3; \
+ __asm__( \
+ " .set push \n" \
+ " .set dsp \n" \
+ " mfhi %0, $ac3 \n" \
+ " .set pop \n" \
+ : "=r" (mfhi3)); \
+ mfhi3; \
+})
+
+
+#define mtlo0(x) \
+({ \
+ __asm__( \
+ " .set push \n" \
+ " .set dsp \n" \
+ " mtlo %0, $ac0 \n" \
+ " .set pop \n" \
+ : \
+ : "r" (x)); \
+})
+
+#define mtlo1(x) \
+({ \
+ __asm__( \
+ " .set push \n" \
+ " .set dsp \n" \
+ " mtlo %0, $ac1 \n" \
+ " .set pop \n" \
+ : \
+ : "r" (x)); \
+})
+
+#define mtlo2(x) \
+({ \
+ __asm__( \
+ " .set push \n" \
+ " .set dsp \n" \
+ " mtlo %0, $ac2 \n" \
+ " .set pop \n" \
+ : \
+ : "r" (x)); \
+})
+
+#define mtlo3(x) \
+({ \
+ __asm__( \
+ " .set push \n" \
+ " .set dsp \n" \
+ " mtlo %0, $ac3 \n" \
+ " .set pop \n" \
+ : \
+ : "r" (x)); \
+})
+
+#define mthi0(x) \
+({ \
+ __asm__( \
+ " .set push \n" \
+ " .set dsp \n" \
+ " mthi %0, $ac0 \n" \
+ " .set pop \n" \
+ : \
+ : "r" (x)); \
+})
+
+#define mthi1(x) \
+({ \
+ __asm__( \
+ " .set push \n" \
+ " .set dsp \n" \
+ " mthi %0, $ac1 \n" \
+ " .set pop \n" \
+ : \
+ : "r" (x)); \
+})
+
+#define mthi2(x) \
+({ \
+ __asm__( \
+ " .set push \n" \
+ " .set dsp \n" \
+ " mthi %0, $ac2 \n" \
+ " .set pop \n" \
+ : \
+ : "r" (x)); \
+})
+
+#define mthi3(x) \
+({ \
+ __asm__( \
+ " .set push \n" \
+ " .set dsp \n" \
+ " mthi %0, $ac3 \n" \
+ " .set pop \n" \
+ : \
+ : "r" (x)); \
+})
#else
#include <asm/sigcontext.h>
#include <asm/siginfo.h>
-#define __ARCH_HAS_ODD_SIGACTION
+#define __ARCH_HAS_IRIX_SIGACTION
#endif /* _ASM_SIGNAL_H */
*
* SA_ONESHOT and SA_NOMASK are the historical Linux names for the Single
* Unix names RESETHAND and NODEFER respectively.
+ *
+ * SA_RESTORER used to be defined as 0x04000000 but only the O32 ABI ever
+ * supported its use and no libc was using it, so the entire sa-restorer
+ * functionality was removed with lmo commit 39bffc12c3580ab for 2.5.48,
+ * retaining only the SA_RESTORER definition as a reminder to avoid
+ * accidental reuse of the mask bit.
*/
#define SA_ONSTACK 0x08000000
#define SA_RESETHAND 0x80000000
#define SA_NOMASK SA_NODEFER
#define SA_ONESHOT SA_RESETHAND
-#define SA_RESTORER 0x04000000 /* Only for o32 */
-
#define MINSIGSTKSZ 2048
#define SIGSTKSZ 8192
obj-$(CONFIG_JUMP_LABEL) += jump_label.o
#
-# DSP ASE supported for MIPS32 or MIPS64 Release 2 cores only. It is safe
-# to enable DSP assembler support here even if the MIPS Release 2 CPU we
-# are targetting does not support DSP because all code-paths making use of
-# it properly check that the running CPU *actually does* support these
-# instructions.
+# DSP ASE supported for MIPS32 or MIPS64 Release 2 cores only. It is not
+# safe to unconditionally use the assembler -mdsp / -mdspr2 switches
+# here because the compiler may use DSP ASE instructions (such as lwx) in
+# code paths where we cannot check that the CPU we are running on supports them.
+# Proper abstraction using HAVE_AS_DSP and macros is done in
+# arch/mips/include/asm/mipsregs.h.
#
ifeq ($(CONFIG_CPU_MIPSR2), y)
CFLAGS_DSP = -DHAVE_AS_DSP
-#
-# Check if assembler supports DSP ASE
-#
-ifeq ($(call cc-option-yn,-mdsp), y)
-CFLAGS_DSP += -mdsp
-endif
-
-#
-# Check if assembler supports DSP ASE Rev2
-#
-ifeq ($(call cc-option-yn,-mdspr2), y)
-CFLAGS_DSP += -mdspr2
-endif
-
CFLAGS_signal.o = $(CFLAGS_DSP)
CFLAGS_signal32.o = $(CFLAGS_DSP)
CFLAGS_process.o = $(CFLAGS_DSP)
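A minimal sketch of the run-time guard the comment above relies on (the 0x3f mask literal is illustrative): the .set push / .set dsp pair only pacifies the assembler, while reaching the instruction at all stays conditional on the running CPU:

	unsigned int dspcontrol = 0;

	if (cpu_has_dsp)
		dspcontrol = rddsp(0x3f);	/* reached only on DSP ASE capable CPUs */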
c->tlbsize = 48;
break;
case PRID_IMP_VR41XX:
+ set_isa(c, MIPS_CPU_ISA_III);
+ c->options = R4K_OPTS;
+ c->tlbsize = 32;
switch (c->processor_id & 0xf0) {
case PRID_REV_VR4111:
c->cputype = CPU_VR4111;
__cpu_name[cpu] = "NEC VR4131";
} else {
c->cputype = CPU_VR4133;
+ c->options |= MIPS_CPU_LLSC;
__cpu_name[cpu] = "NEC VR4133";
}
break;
__cpu_name[cpu] = "NEC Vr41xx";
break;
}
- set_isa(c, MIPS_CPU_ISA_III);
- c->options = R4K_OPTS;
- c->tlbsize = 32;
break;
case PRID_IMP_R4300:
c->cputype = CPU_R4300;
if (c->options & MIPS_CPU_FPU) {
c->fpu_id = cpu_get_fpu_id();
- if (c->isa_level == MIPS_CPU_ISA_M32R1 ||
- c->isa_level == MIPS_CPU_ISA_M32R2 ||
- c->isa_level == MIPS_CPU_ISA_M64R1 ||
- c->isa_level == MIPS_CPU_ISA_M64R2) {
+ if (c->isa_level & (MIPS_CPU_ISA_M32R1 | MIPS_CPU_ISA_M32R2 |
+ MIPS_CPU_ISA_M64R1 | MIPS_CPU_ISA_M64R2)) {
if (c->fpu_id & MIPS_FPIR_3D)
c->ases |= MIPS_ASE_MIPS3D;
}
err = compat_sys_shmctl(first, second, compat_ptr(ptr));
break;
default:
- err = -EINVAL;
+ err = -ENOSYS;
break;
}
PTR_L a5, PT_R9(sp)
PTR_L a6, PT_R10(sp)
PTR_L a7, PT_R11(sp)
-#else
- PTR_ADDIU sp, PT_SIZE
#endif
-.endm
+ PTR_ADDIU sp, PT_SIZE
+ .endm
.macro RETURN_BACK
jr ra
.globl _mcount
_mcount:
b ftrace_stub
- addiu sp,sp,8
+#ifdef CONFIG_32BIT
+ addiu sp,sp,8
+#else
+ nop
+#endif
/* When tracing is activated, it calls ftrace_caller+8 (aka here) */
lw t1, function_trace_stop
if (cpu_has_mips_r) {
seq_printf(m, "isa\t\t\t:");
if (cpu_has_mips_1)
- seq_printf(m, "%s", "mips1");
+ seq_printf(m, "%s", " mips1");
if (cpu_has_mips_2)
seq_printf(m, "%s", " mips2");
if (cpu_has_mips_3)
#ifdef CONFIG_64BIT
status_set |= ST0_FR|ST0_KX|ST0_SX|ST0_UX;
#endif
- if (current_cpu_data.isa_level == MIPS_CPU_ISA_IV)
+ if (current_cpu_data.isa_level & MIPS_CPU_ISA_IV)
status_set |= ST0_XX;
if (cpu_has_dsp)
status_set |= ST0_MX;
unsigned bit = nr & SZLONG_MASK;
unsigned long mask;
unsigned long flags;
- unsigned long res;
+ int res;
a += nr >> SZLONG_LOG;
mask = 1UL << bit;
raw_local_irq_save(flags);
- res = (mask & *a);
+ res = (mask & *a) != 0;
*a |= mask;
raw_local_irq_restore(flags);
return res;
unsigned bit = nr & SZLONG_MASK;
unsigned long mask;
unsigned long flags;
- unsigned long res;
+ int res;
a += nr >> SZLONG_LOG;
mask = 1UL << bit;
raw_local_irq_save(flags);
- res = (mask & *a);
+ res = (mask & *a) != 0;
*a |= mask;
raw_local_irq_restore(flags);
return res;
unsigned bit = nr & SZLONG_MASK;
unsigned long mask;
unsigned long flags;
- unsigned long res;
+ int res;
a += nr >> SZLONG_LOG;
mask = 1UL << bit;
raw_local_irq_save(flags);
- res = (mask & *a);
+ res = (mask & *a) != 0;
*a &= ~mask;
raw_local_irq_restore(flags);
return res;
unsigned bit = nr & SZLONG_MASK;
unsigned long mask;
unsigned long flags;
- unsigned long res;
+ int res;
a += nr >> SZLONG_LOG;
mask = 1UL << bit;
raw_local_irq_save(flags);
- res = (mask & *a);
+ res = (mask & *a) != 0;
*a ^= mask;
raw_local_irq_restore(flags);
return res;
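The "!= 0" normalisation above is not cosmetic. A worked sketch (values illustrative) of the 64-bit failure mode the old code had when truncating the unsigned long intermediate to the int return type:

	unsigned long word = 1UL << 40;		/* bit 40 already set */
	int was = (int)(word & (1UL << 40));	/* truncated to 0: false negative */
	int now = (word & (1UL << 40)) != 0;	/* 1: bit correctly reported */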
#endif
/* odd buffer alignment? */
-#ifdef CPU_MIPSR2
+#ifdef CONFIG_CPU_MIPSR2
wsbh v1, sum
movn sum, v1, t7
#else
addu sum, v1
#endif
-#ifdef CPU_MIPSR2
+#ifdef CONFIG_CPU_MIPSR2
wsbh v1, sum
movn sum, v1, odd
#else
return;
default:
- if (c->isa_level == MIPS_CPU_ISA_M32R1 ||
- c->isa_level == MIPS_CPU_ISA_M32R2 ||
- c->isa_level == MIPS_CPU_ISA_M64R1 ||
- c->isa_level == MIPS_CPU_ISA_M64R2) {
+ if (c->isa_level & (MIPS_CPU_ISA_M32R1 | MIPS_CPU_ISA_M32R2 |
+ MIPS_CPU_ISA_M64R1 | MIPS_CPU_ISA_M64R2)) {
#ifdef CONFIG_MIPS_CPU_SCACHE
if (mips_sc_init ()) {
scache_size = c->scache.ways * c->scache.sets * c->scache.linesz;
c->scache.flags |= MIPS_CACHE_NOT_PRESENT;
/* Ignore anything but MIPSxx processors */
- if (c->isa_level != MIPS_CPU_ISA_M32R1 &&
- c->isa_level != MIPS_CPU_ISA_M32R2 &&
- c->isa_level != MIPS_CPU_ISA_M64R1 &&
- c->isa_level != MIPS_CPU_ISA_M64R2)
+ if (!(c->isa_level & (MIPS_CPU_ISA_M32R1 | MIPS_CPU_ISA_M32R2 |
+ MIPS_CPU_ISA_M64R1 | MIPS_CPU_ISA_M64R2)))
return 0;
/* Does this MIPS32/MIPS64 CPU have a config2 register? */
#include <asm/mach-au1x00/au1000.h>
#include <asm/tlbmisc.h>
-#ifdef CONFIG_DEBUG_PCI
+#ifdef CONFIG_PCI_DEBUG
#define DBG(x...) printk(KERN_DEBUG x)
#else
#define DBG(x...) do {} while (0)
if (status & (1 << 29)) {
*data = 0xffffffff;
error = -1;
- DBG("alchemy-pci: master abort on cfg access %d bus %d dev %d",
+ DBG("alchemy-pci: master abort on cfg access %d bus %d dev %d\n",
access_type, bus->number, device);
} else if ((status >> 28) & 0xf) {
DBG("alchemy-pci: PCI ERR detected: dev %d, status %lx\n",
subi r12,r12,TI_FLAGS
4: /* Anything else left to do? */
- SET_DEFAULT_THREAD_PPR(r3, r9) /* Set thread.ppr = 3 */
+ SET_DEFAULT_THREAD_PPR(r3, r10) /* Set thread.ppr = 3 */
andi. r0,r9,(_TIF_SYSCALL_T_OR_A|_TIF_SINGLESTEP)
beq .ret_from_except_lite
/* Clear _TIF_EMULATE_STACK_STORE flag */
lis r11,_TIF_EMULATE_STACK_STORE@h
addi r5,r9,TI_FLAGS
- ldarx r4,0,r5
+0: ldarx r4,0,r5
andc r4,r4,r11
stdcx. r4,0,r5
bne- 0b
#include <asm/code-patching.h>
#include <asm/machdep.h>
+#if !defined(CONFIG_64BIT) || defined(CONFIG_PPC_BOOK3E_64)
extern void epapr_ev_idle(void);
extern u32 epapr_ev_idle_start[];
+#endif
bool epapr_paravirt_enabled;
for (i = 0; i < (len / 4); i++) {
patch_instruction(epapr_hypercall_start + i, insts[i]);
+#if !defined(CONFIG_64BIT) || defined(CONFIG_PPC_BOOK3E_64)
patch_instruction(epapr_ev_idle_start + i, insts[i]);
+#endif
}
+#if !defined(CONFIG_64BIT) || defined(CONFIG_PPC_BOOK3E_64)
if (of_get_property(hyper_node, "has-idle", NULL))
ppc_md.power_save = epapr_ev_idle;
+#endif
epapr_paravirt_enabled = true;
#endif /* __DISABLED__ */
-/*
- * r13 points to the PACA, r9 contains the saved CR,
- * r12 contain the saved SRR1, SRR0 is still ready for return
- * r3 has the faulting address
- * r9 - r13 are saved in paca->exslb.
- * r3 is saved in paca->slb_r3
- * We assume we aren't going to take any exceptions during this procedure.
- */
-_GLOBAL(slb_miss_realmode)
- mflr r10
-#ifdef CONFIG_RELOCATABLE
- mtctr r11
-#endif
-
- stw r9,PACA_EXSLB+EX_CCR(r13) /* save CR in exc. frame */
- std r10,PACA_EXSLB+EX_LR(r13) /* save LR */
-
- bl .slb_allocate_realmode
-
- /* All done -- return from exception. */
-
- ld r10,PACA_EXSLB+EX_LR(r13)
- ld r3,PACA_EXSLB+EX_R3(r13)
- lwz r9,PACA_EXSLB+EX_CCR(r13) /* get saved CR */
-
- mtlr r10
-
- andi. r10,r12,MSR_RI /* check for unrecoverable exception */
- beq- 2f
-
-.machine push
-.machine "power4"
- mtcrf 0x80,r9
- mtcrf 0x01,r9 /* slb_allocate uses cr0 and cr7 */
-.machine pop
-
- RESTORE_PPR_PACA(PACA_EXSLB, r9)
- ld r9,PACA_EXSLB+EX_R9(r13)
- ld r10,PACA_EXSLB+EX_R10(r13)
- ld r11,PACA_EXSLB+EX_R11(r13)
- ld r12,PACA_EXSLB+EX_R12(r13)
- ld r13,PACA_EXSLB+EX_R13(r13)
- rfid
- b . /* prevent speculative execution */
-
-2: mfspr r11,SPRN_SRR0
- ld r10,PACAKBASE(r13)
- LOAD_HANDLER(r10,unrecov_slb)
- mtspr SPRN_SRR0,r10
- ld r10,PACAKMSR(r13)
- mtspr SPRN_SRR1,r10
- rfid
- b .
-
-unrecov_slb:
- EXCEPTION_PROLOG_COMMON(0x4100, PACA_EXSLB)
- DISABLE_INTS
- bl .save_nvgprs
-1: addi r3,r1,STACK_FRAME_OVERHEAD
- bl .unrecoverable_exception
- b 1b
-
-
-#ifdef CONFIG_PPC_970_NAP
-power4_fixup_nap:
- andc r9,r9,r10
- std r9,TI_LOCAL_FLAGS(r11)
- ld r10,_LINK(r1) /* make idle task do the */
- std r10,_NIP(r1) /* equivalent of a blr */
- blr
-#endif
-
.align 7
.globl alignment_common
alignment_common:
#endif /* CONFIG_PPC_POWERNV */
+/*
+ * r13 points to the PACA, r9 contains the saved CR,
+ * r12 contain the saved SRR1, SRR0 is still ready for return
+ * r3 has the faulting address
+ * r9 - r13 are saved in paca->exslb.
+ * r3 is saved in paca->slb_r3
+ * We assume we aren't going to take any exceptions during this procedure.
+ */
+_GLOBAL(slb_miss_realmode)
+ mflr r10
+#ifdef CONFIG_RELOCATABLE
+ mtctr r11
+#endif
+
+ stw r9,PACA_EXSLB+EX_CCR(r13) /* save CR in exc. frame */
+ std r10,PACA_EXSLB+EX_LR(r13) /* save LR */
+
+ bl .slb_allocate_realmode
+
+ /* All done -- return from exception. */
+
+ ld r10,PACA_EXSLB+EX_LR(r13)
+ ld r3,PACA_EXSLB+EX_R3(r13)
+ lwz r9,PACA_EXSLB+EX_CCR(r13) /* get saved CR */
+
+ mtlr r10
+
+ andi. r10,r12,MSR_RI /* check for unrecoverable exception */
+ beq- 2f
+
+.machine push
+.machine "power4"
+ mtcrf 0x80,r9
+ mtcrf 0x01,r9 /* slb_allocate uses cr0 and cr7 */
+.machine pop
+
+ RESTORE_PPR_PACA(PACA_EXSLB, r9)
+ ld r9,PACA_EXSLB+EX_R9(r13)
+ ld r10,PACA_EXSLB+EX_R10(r13)
+ ld r11,PACA_EXSLB+EX_R11(r13)
+ ld r12,PACA_EXSLB+EX_R12(r13)
+ ld r13,PACA_EXSLB+EX_R13(r13)
+ rfid
+ b . /* prevent speculative execution */
+
+2: mfspr r11,SPRN_SRR0
+ ld r10,PACAKBASE(r13)
+ LOAD_HANDLER(r10,unrecov_slb)
+ mtspr SPRN_SRR0,r10
+ ld r10,PACAKMSR(r13)
+ mtspr SPRN_SRR1,r10
+ rfid
+ b .
+
+unrecov_slb:
+ EXCEPTION_PROLOG_COMMON(0x4100, PACA_EXSLB)
+ DISABLE_INTS
+ bl .save_nvgprs
+1: addi r3,r1,STACK_FRAME_OVERHEAD
+ bl .unrecoverable_exception
+ b 1b
+
+
+#ifdef CONFIG_PPC_970_NAP
+power4_fixup_nap:
+ andc r9,r9,r10
+ std r9,TI_LOCAL_FLAGS(r11)
+ ld r10,_LINK(r1) /* make idle task do the */
+ std r10,_NIP(r1) /* equivalent of a blr */
+ blr
+#endif
+
/*
* Hash table stuff
*/
new->thread.regs->msr |=
(MSR_FP | new->thread.fpexc_mode);
}
+#ifdef CONFIG_ALTIVEC
if (msr & MSR_VEC) {
do_load_up_transact_altivec(&new->thread);
new->thread.regs->msr |= MSR_VEC;
}
+#endif
/* We may as well turn on VSX too since all the state is restored now */
if (msr & MSR_VSX)
new->thread.regs->msr |= MSR_VSX;
do_load_up_transact_fpu(¤t->thread);
regs->msr |= (MSR_FP | current->thread.fpexc_mode);
}
+#ifdef CONFIG_ALTIVEC
if (msr & MSR_VEC) {
do_load_up_transact_altivec(¤t->thread);
regs->msr |= MSR_VEC;
}
+#endif
return 0;
}
do_load_up_transact_fpu(¤t->thread);
regs->msr |= (MSR_FP | current->thread.fpexc_mode);
}
+#ifdef CONFIG_ALTIVEC
if (msr & MSR_VEC) {
do_load_up_transact_altivec(¤t->thread);
regs->msr |= MSR_VEC;
}
+#endif
return err;
}
or r5, r6, r5 /* Set MSR.FP+.VSX/.VEC */
mtmsr r5
+#ifdef CONFIG_ALTIVEC
/* FP and VEC registers: These are recheckpointed from thread.fpr[]
* and thread.vr[] respectively. The thread.transact_fpr[] version
* is more modern, and will be loaded subsequently by any FPUnavailable
REST_32VRS(0, r5, r3) /* r5 scratch, r3 THREAD ptr */
ld r5, THREAD_VRSAVE(r3)
mtspr SPRN_VRSAVE, r5
+#endif
dont_restore_vec:
andi. r0, r4, MSR_FP
#define E500_PID_NUM 3
#define E500_TLB_NUM 2
-#define E500_TLB_VALID 1
-#define E500_TLB_BITMAP 2
+/* entry is mapped somewhere in host TLB */
+#define E500_TLB_VALID (1 << 0)
+/* TLB1 entry is mapped by host TLB1, tracked by bitmaps */
+#define E500_TLB_BITMAP (1 << 1)
+/* TLB1 entry is mapped by host TLB0 */
#define E500_TLB_TLB0 (1 << 2)
struct tlbe_ref {
- pfn_t pfn;
- unsigned int flags; /* E500_TLB_* */
+ pfn_t pfn; /* valid only for TLB0, except briefly */
+ unsigned int flags; /* E500_TLB_* */
};
struct tlbe_priv {
- struct tlbe_ref ref; /* TLB0 only -- TLB1 uses tlb_refs */
+ struct tlbe_ref ref;
};
#ifdef CONFIG_KVM_E500V2
unsigned int gtlb_nv[E500_TLB_NUM];
- /*
- * information associated with each host TLB entry --
- * TLB1 only for now. If/when guest TLB1 entries can be
- * mapped with host TLB0, this will be used for that too.
- *
- * We don't want to use this for guest TLB0 because then we'd
- * have the overhead of doing the translation again even if
- * the entry is still in the guest TLB (e.g. we swapped out
- * and back, and our host TLB entries got evicted).
- */
- struct tlbe_ref *tlb_refs[E500_TLB_NUM];
unsigned int host_tlb1_nv;
u32 svr;
struct tlbe_ref *ref = &vcpu_e500->gtlb_priv[tlbsel][esel].ref;
/* Don't bother with unmapped entries */
- if (!(ref->flags & E500_TLB_VALID))
- return;
+ if (!(ref->flags & E500_TLB_VALID)) {
+ WARN(ref->flags & (E500_TLB_BITMAP | E500_TLB_TLB0),
+ "%s: flags %x\n", __func__, ref->flags);
+ WARN_ON(tlbsel == 1 && vcpu_e500->g2h_tlb1_map[esel]);
+ }
if (tlbsel == 1 && ref->flags & E500_TLB_BITMAP) {
u64 tmp = vcpu_e500->g2h_tlb1_map[esel];
pfn_t pfn)
{
ref->pfn = pfn;
- ref->flags = E500_TLB_VALID;
+ ref->flags |= E500_TLB_VALID;
if (tlbe_is_writable(gtlbe))
kvm_set_pfn_dirty(pfn);
static inline void kvmppc_e500_ref_release(struct tlbe_ref *ref)
{
if (ref->flags & E500_TLB_VALID) {
+ /* FIXME: don't log bogus pfn for TLB1 */
trace_kvm_booke206_ref_release(ref->pfn, ref->flags);
ref->flags = 0;
}
static void clear_tlb_privs(struct kvmppc_vcpu_e500 *vcpu_e500)
{
- int tlbsel = 0;
- int i;
-
- for (i = 0; i < vcpu_e500->gtlb_params[tlbsel].entries; i++) {
- struct tlbe_ref *ref =
- &vcpu_e500->gtlb_priv[tlbsel][i].ref;
- kvmppc_e500_ref_release(ref);
- }
-}
-
-static void clear_tlb_refs(struct kvmppc_vcpu_e500 *vcpu_e500)
-{
- int stlbsel = 1;
+ int tlbsel;
int i;
- kvmppc_e500_tlbil_all(vcpu_e500);
-
- for (i = 0; i < host_tlb_params[stlbsel].entries; i++) {
- struct tlbe_ref *ref =
- &vcpu_e500->tlb_refs[stlbsel][i];
- kvmppc_e500_ref_release(ref);
+ for (tlbsel = 0; tlbsel <= 1; tlbsel++) {
+ for (i = 0; i < vcpu_e500->gtlb_params[tlbsel].entries; i++) {
+ struct tlbe_ref *ref =
+ &vcpu_e500->gtlb_priv[tlbsel][i].ref;
+ kvmppc_e500_ref_release(ref);
+ }
}
-
- clear_tlb_privs(vcpu_e500);
}
void kvmppc_core_flush_tlb(struct kvm_vcpu *vcpu)
{
struct kvmppc_vcpu_e500 *vcpu_e500 = to_e500(vcpu);
- clear_tlb_refs(vcpu_e500);
+ kvmppc_e500_tlbil_all(vcpu_e500);
+ clear_tlb_privs(vcpu_e500);
clear_tlb1_bitmap(vcpu_e500);
}
gvaddr &= ~((tsize_pages << PAGE_SHIFT) - 1);
}
- /* Drop old ref and setup new one. */
- kvmppc_e500_ref_release(ref);
kvmppc_e500_ref_setup(ref, gtlbe, pfn);
kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize,
if (unlikely(vcpu_e500->host_tlb1_nv >= tlb1_max_shadow_size()))
vcpu_e500->host_tlb1_nv = 0;
- vcpu_e500->tlb_refs[1][sesel] = *ref;
- vcpu_e500->g2h_tlb1_map[esel] |= (u64)1 << sesel;
- vcpu_e500->gtlb_priv[1][esel].ref.flags |= E500_TLB_BITMAP;
if (vcpu_e500->h2g_tlb1_rmap[sesel]) {
- unsigned int idx = vcpu_e500->h2g_tlb1_rmap[sesel];
+ unsigned int idx = vcpu_e500->h2g_tlb1_rmap[sesel] - 1;
vcpu_e500->g2h_tlb1_map[idx] &= ~(1ULL << sesel);
}
- vcpu_e500->h2g_tlb1_rmap[sesel] = esel;
+
+ vcpu_e500->gtlb_priv[1][esel].ref.flags |= E500_TLB_BITMAP;
+ vcpu_e500->g2h_tlb1_map[esel] |= (u64)1 << sesel;
+ vcpu_e500->h2g_tlb1_rmap[sesel] = esel + 1;
+ WARN_ON(!(ref->flags & E500_TLB_VALID));
return sesel;
}
u64 gvaddr, gfn_t gfn, struct kvm_book3e_206_tlb_entry *gtlbe,
struct kvm_book3e_206_tlb_entry *stlbe, int esel)
{
- struct tlbe_ref ref;
+ struct tlbe_ref *ref = &vcpu_e500->gtlb_priv[1][esel].ref;
int sesel;
int r;
- ref.flags = 0;
r = kvmppc_e500_shadow_map(vcpu_e500, gvaddr, gfn, gtlbe, 1, stlbe,
- &ref);
+ ref);
if (r)
return r;
}
/* Otherwise map into TLB1 */
- sesel = kvmppc_e500_tlb1_map_tlb1(vcpu_e500, &ref, esel);
+ sesel = kvmppc_e500_tlb1_map_tlb1(vcpu_e500, ref, esel);
write_stlbe(vcpu_e500, gtlbe, stlbe, 1, sesel);
return 0;
case 0:
priv = &vcpu_e500->gtlb_priv[tlbsel][esel];
- /* Triggers after clear_tlb_refs or on initial mapping */
+ /* Triggers after clear_tlb_privs or on initial mapping */
if (!(priv->ref.flags & E500_TLB_VALID)) {
kvmppc_e500_tlb0_map(vcpu_e500, esel, &stlbe);
} else {
host_tlb_params[0].entries / host_tlb_params[0].ways;
host_tlb_params[1].sets = 1;
- vcpu_e500->tlb_refs[0] =
- kzalloc(sizeof(struct tlbe_ref) * host_tlb_params[0].entries,
- GFP_KERNEL);
- if (!vcpu_e500->tlb_refs[0])
- goto err;
-
- vcpu_e500->tlb_refs[1] =
- kzalloc(sizeof(struct tlbe_ref) * host_tlb_params[1].entries,
- GFP_KERNEL);
- if (!vcpu_e500->tlb_refs[1])
- goto err;
-
vcpu_e500->h2g_tlb1_rmap = kzalloc(sizeof(unsigned int) *
host_tlb_params[1].entries,
GFP_KERNEL);
if (!vcpu_e500->h2g_tlb1_rmap)
- goto err;
+ return -EINVAL;
return 0;
-
-err:
- kfree(vcpu_e500->tlb_refs[0]);
- kfree(vcpu_e500->tlb_refs[1]);
- return -EINVAL;
}
void e500_mmu_host_uninit(struct kvmppc_vcpu_e500 *vcpu_e500)
{
kfree(vcpu_e500->h2g_tlb1_rmap);
- kfree(vcpu_e500->tlb_refs[0]);
- kfree(vcpu_e500->tlb_refs[1]);
}
{
}
+static DEFINE_PER_CPU(struct kvm_vcpu *, last_vcpu_on_cpu);
+
void kvmppc_core_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
{
struct kvmppc_vcpu_e500 *vcpu_e500 = to_e500(vcpu);
mtspr(SPRN_GDEAR, vcpu->arch.shared->dar);
mtspr(SPRN_GESR, vcpu->arch.shared->esr);
- if (vcpu->arch.oldpir != mfspr(SPRN_PIR))
+ if (vcpu->arch.oldpir != mfspr(SPRN_PIR) ||
+ __get_cpu_var(last_vcpu_on_cpu) != vcpu) {
kvmppc_e500_tlbil_all(vcpu_e500);
+ __get_cpu_var(last_vcpu_on_cpu) = vcpu;
+ }
kvmppc_load_guest_fp(vcpu);
}
(0x1UL << 4), &dummy1, &dummy2);
if (lpar_rc == H_SUCCESS)
return i;
- BUG_ON(lpar_rc != H_NOT_FOUND);
+
+ /*
+ * The test for adjunct partition is performed before the
+ * ANDCOND test. H_RESOURCE may be returned, so we need to
+ * check for that as well.
+ */
+ BUG_ON(lpar_rc != H_NOT_FOUND && lpar_rc != H_RESOURCE);
slot_offset++;
slot_offset &= 0x7;
#define ioremap_nocache(addr, size) ioremap(addr, size)
#define ioremap_wc ioremap_nocache
-/* TODO: s390 cannot support io_remap_pfn_range... */
-#define io_remap_pfn_range(vma, vaddr, pfn, size, prot) \
- remap_pfn_range(vma, vaddr, pfn, size, prot)
-
static inline void __iomem *ioremap(unsigned long offset, unsigned long size)
{
return (void __iomem *) offset;
(((unsigned long)(vaddr)) &zero_page_mask))))
#define __HAVE_COLOR_ZERO_PAGE
+/* TODO: s390 cannot support io_remap_pfn_range... */
+#define io_remap_pfn_range(vma, vaddr, pfn, size, prot) \
+ remap_pfn_range(vma, vaddr, pfn, size, prot)
+
#endif /* !__ASSEMBLY__ */
/*
#define _REGION3_ENTRY_CO 0x100 /* change-recording override */
/* Bits in the segment table entry */
+#define _SEGMENT_ENTRY_ORIGIN_LARGE ~0xfffffUL /* large page address */
#define _SEGMENT_ENTRY_ORIGIN ~0x7ffUL/* segment table origin */
#define _SEGMENT_ENTRY_RO 0x200 /* page protection bit */
#define _SEGMENT_ENTRY_INV 0x20 /* invalid segment table entry */
/*
* No page table caches to initialise
*/
-#define pgtable_cache_init() do { } while (0)
+static inline void pgtable_cache_init(void) { }
+static inline void check_pgt_cache(void) { }
#include <asm-generic/pgtable.h>
 * >= -4095 (IS_ERR_VALUE(x) returns true), a fault has occurred and the address
* contains the (negative) exception code.
*/
-static __always_inline unsigned long follow_table(struct mm_struct *mm,
- unsigned long addr, int write)
+#ifdef CONFIG_64BIT
+static unsigned long follow_table(struct mm_struct *mm,
+ unsigned long address, int write)
{
- pgd_t *pgd;
- pud_t *pud;
- pmd_t *pmd;
- pte_t *ptep;
+ unsigned long *table = (unsigned long *)__pa(mm->pgd);
+
+ switch (mm->context.asce_bits & _ASCE_TYPE_MASK) {
+ case _ASCE_TYPE_REGION1:
+ table = table + ((address >> 53) & 0x7ff);
+ if (unlikely(*table & _REGION_ENTRY_INV))
+ return -0x39UL;
+ table = (unsigned long *)(*table & _REGION_ENTRY_ORIGIN);
+ case _ASCE_TYPE_REGION2:
+ table = table + ((address >> 42) & 0x7ff);
+ if (unlikely(*table & _REGION_ENTRY_INV))
+ return -0x3aUL;
+ table = (unsigned long *)(*table & _REGION_ENTRY_ORIGIN);
+ case _ASCE_TYPE_REGION3:
+ table = table + ((address >> 31) & 0x7ff);
+ if (unlikely(*table & _REGION_ENTRY_INV))
+ return -0x3bUL;
+ table = (unsigned long *)(*table & _REGION_ENTRY_ORIGIN);
+ case _ASCE_TYPE_SEGMENT:
+ table = table + ((address >> 20) & 0x7ff);
+ if (unlikely(*table & _SEGMENT_ENTRY_INV))
+ return -0x10UL;
+ if (unlikely(*table & _SEGMENT_ENTRY_LARGE)) {
+ if (write && (*table & _SEGMENT_ENTRY_RO))
+ return -0x04UL;
+ return (*table & _SEGMENT_ENTRY_ORIGIN_LARGE) +
+ (address & ~_SEGMENT_ENTRY_ORIGIN_LARGE);
+ }
+ table = (unsigned long *)(*table & _SEGMENT_ENTRY_ORIGIN);
+ }
+ table = table + ((address >> 12) & 0xff);
+ if (unlikely(*table & _PAGE_INVALID))
+ return -0x11UL;
+ if (write && (*table & _PAGE_RO))
+ return -0x04UL;
+ return (*table & PAGE_MASK) + (address & ~PAGE_MASK);
+}
- pgd = pgd_offset(mm, addr);
- if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
- return -0x3aUL;
+#else /* CONFIG_64BIT */
- pud = pud_offset(pgd, addr);
- if (pud_none(*pud) || unlikely(pud_bad(*pud)))
- return -0x3bUL;
+static unsigned long follow_table(struct mm_struct *mm,
+ unsigned long address, int write)
+{
+ unsigned long *table = (unsigned long *)__pa(mm->pgd);
- pmd = pmd_offset(pud, addr);
- if (pmd_none(*pmd))
+ table = table + ((address >> 20) & 0x7ff);
+ if (unlikely(*table & _SEGMENT_ENTRY_INV))
return -0x10UL;
- if (pmd_large(*pmd)) {
- if (write && (pmd_val(*pmd) & _SEGMENT_ENTRY_RO))
- return -0x04UL;
- return (pmd_val(*pmd) & HPAGE_MASK) + (addr & ~HPAGE_MASK);
- }
- if (unlikely(pmd_bad(*pmd)))
- return -0x10UL;
-
- ptep = pte_offset_map(pmd, addr);
- if (!pte_present(*ptep))
+ table = (unsigned long *)(*table & _SEGMENT_ENTRY_ORIGIN);
+ table = table + ((address >> 12) & 0xff);
+ if (unlikely(*table & _PAGE_INVALID))
return -0x11UL;
- if (write && (!pte_write(*ptep) || !pte_dirty(*ptep)))
+ if (write && (*table & _PAGE_RO))
return -0x04UL;
-
- return (pte_val(*ptep) & PAGE_MASK) + (addr & ~PAGE_MASK);
+ return (*table & PAGE_MASK) + (address & ~PAGE_MASK);
}
+#endif /* CONFIG_64BIT */
+
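+/*
+ * A hedged sketch of how a caller consumes this encoding;
+ * handle_fault_code() is hypothetical, named only for illustration:
+ *
+ *	unsigned long phys = follow_table(mm, uaddr, 1);
+ *
+ *	if (IS_ERR_VALUE(phys))
+ *		return handle_fault_code(uaddr, -phys);
+ *	// otherwise phys is the translated physical address
+ */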
static __always_inline size_t __user_copy_pt(unsigned long uaddr, void *kptr,
size_t n, int write_user)
{
static size_t clear_user_pt(size_t n, void __user *to)
{
- void *zpage = &empty_zero_page;
+ void *zpage = (void *) empty_zero_page;
long done, size, ret;
done = 0;
generic-y += clkdev.h
+generic-y += cputime.h
generic-y += div64.h
+generic-y += emergency-restart.h
generic-y += exec.h
generic-y += local64.h
+generic-y += mutex.h
generic-y += irq_regs.h
generic-y += local.h
generic-y += module.h
+generic-y += serial.h
generic-y += trace_clock.h
+generic-y += types.h
generic-y += word-at-a-time.h
+++ /dev/null
-#ifndef __SPARC_CPUTIME_H
-#define __SPARC_CPUTIME_H
-
-#include <asm-generic/cputime.h>
-
-#endif /* __SPARC_CPUTIME_H */
+++ /dev/null
-#ifndef _ASM_EMERGENCY_RESTART_H
-#define _ASM_EMERGENCY_RESTART_H
-
-#include <asm-generic/emergency-restart.h>
-
-#endif /* _ASM_EMERGENCY_RESTART_H */
+++ /dev/null
-/*
- * Pull in the generic implementation for the mutex fastpath.
- *
- * TODO: implement optimized primitives instead, or leave the generic
- * implementation in place, or pick the atomic_xchg() based generic
- * implementation. (see asm-generic/mutex-xchg.h for details)
- */
-
-#include <asm-generic/mutex-dec.h>
return remap_pfn_range(vma, from, phys_base >> PAGE_SHIFT, size, prot);
}
+#include <asm/tlbflush.h>
#include <asm-generic/pgtable.h>
/* We provide our own get_unmapped_area to cope with VA holes and
+++ /dev/null
-#ifndef __SPARC_SERIAL_H
-#define __SPARC_SERIAL_H
-
-#define BASE_BAUD ( 1843200 / 16 )
-
-#endif /* __SPARC_SERIAL_H */
unsigned long, unsigned long);
void cpu_panic(void);
-extern void smp4m_irq_rotate(int cpu);
/*
* General functions that each host system must provide.
void sun4d_init_smp(void);
void smp_callin(void);
-void smp_boot_cpus(void);
void smp_store_cpu_info(int);
void smp_resched_interrupt(void);
#define raw_smp_processor_id() (current_thread_info()->cpu)
-#define prof_multiplier(__cpu) cpu_data(__cpu).multiplier
-#define prof_counter(__cpu) cpu_data(__cpu).counter
-
void smp_setup_cpu_possible_map(void);
#endif /* !(__ASSEMBLY__) */
* and 2 stores in this critical code path. -DaveM
*/
#define switch_to(prev, next, last) \
-do { flush_tlb_pending(); \
- save_and_clear_fpu(); \
+do { save_and_clear_fpu(); \
/* If you are tempted to conditionalize the following */ \
/* so that ASI is only written if it changes, think again. */ \
__asm__ __volatile__("wr %%g0, %0, %%asi" \
struct tlb_batch {
struct mm_struct *mm;
unsigned long tlb_nr;
+ unsigned long active;
unsigned long vaddrs[TLB_BATCH_NR];
};
extern void flush_tsb_kernel_range(unsigned long start, unsigned long end);
extern void flush_tsb_user(struct tlb_batch *tb);
+extern void flush_tsb_user_page(struct mm_struct *mm, unsigned long vaddr);
/* TLB flush operations. */
-extern void flush_tlb_pending(void);
+static inline void flush_tlb_mm(struct mm_struct *mm)
+{
+}
+
+static inline void flush_tlb_page(struct vm_area_struct *vma,
+ unsigned long vmaddr)
+{
+}
+
+static inline void flush_tlb_range(struct vm_area_struct *vma,
+ unsigned long start, unsigned long end)
+{
+}
+
+#define __HAVE_ARCH_ENTER_LAZY_MMU_MODE
-#define flush_tlb_range(vma,start,end) \
- do { (void)(start); flush_tlb_pending(); } while (0)
-#define flush_tlb_page(vma,addr) flush_tlb_pending()
-#define flush_tlb_mm(mm) flush_tlb_pending()
+extern void flush_tlb_pending(void);
+extern void arch_enter_lazy_mmu_mode(void);
+extern void arch_leave_lazy_mmu_mode(void);
+#define arch_flush_lazy_mmu_mode() do {} while (0)
/* Local cpu only. */
extern void __flush_tlb_all(void);
-
+extern void __flush_tlb_page(unsigned long context, unsigned long vaddr);
extern void __flush_tlb_kernel_range(unsigned long start, unsigned long end);
#ifndef CONFIG_SMP
__flush_tlb_kernel_range(start,end); \
} while (0)
+static inline void global_flush_tlb_page(struct mm_struct *mm, unsigned long vaddr)
+{
+ __flush_tlb_page(CTX_HWBITS(mm->context), vaddr);
+}
+
#else /* CONFIG_SMP */
extern void smp_flush_tlb_kernel_range(unsigned long start, unsigned long end);
+extern void smp_flush_tlb_page(struct mm_struct *mm, unsigned long vaddr);
#define flush_tlb_kernel_range(start, end) \
do { flush_tsb_kernel_range(start,end); \
smp_flush_tlb_kernel_range(start, end); \
} while (0)
+#define global_flush_tlb_page(mm, vaddr) \
+ smp_flush_tlb_page(mm, vaddr)
+
#endif /* ! CONFIG_SMP */
#endif /* _SPARC64_TLBFLUSH_H */
header-y += termbits.h
header-y += termios.h
header-y += traps.h
-header-y += types.h
header-y += uctx.h
header-y += unistd.h
header-y += utrap.h
+++ /dev/null
-#ifndef _SPARC_TYPES_H
-#define _SPARC_TYPES_H
-/*
- * This file is never included by application software unless
- * explicitly requested (e.g., via linux/types.h) in which case the
- * application is Linux specific so (user-) name space pollution is
- * not a major issue. However, for interoperability, libraries still
- * need to be careful to avoid a name clashes.
- */
-
-#if defined(__sparc__)
-
-#include <asm-generic/int-ll64.h>
-
-#endif /* defined(__sparc__) */
-
-#endif /* defined(_SPARC_TYPES_H) */
}
extern unsigned long xcall_flush_tlb_mm;
-extern unsigned long xcall_flush_tlb_pending;
+extern unsigned long xcall_flush_tlb_page;
extern unsigned long xcall_flush_tlb_kernel_range;
extern unsigned long xcall_fetch_glob_regs;
extern unsigned long xcall_fetch_glob_pmu;
put_cpu();
}
+struct tlb_pending_info {
+ unsigned long ctx;
+ unsigned long nr;
+ unsigned long *vaddrs;
+};
+
+static void tlb_pending_func(void *info)
+{
+ struct tlb_pending_info *t = info;
+
+ __flush_tlb_pending(t->ctx, t->nr, t->vaddrs);
+}
+
void smp_flush_tlb_pending(struct mm_struct *mm, unsigned long nr, unsigned long *vaddrs)
{
u32 ctx = CTX_HWBITS(mm->context);
+ struct tlb_pending_info info;
int cpu = get_cpu();
+ info.ctx = ctx;
+ info.nr = nr;
+ info.vaddrs = vaddrs;
+
if (mm == current->mm && atomic_read(&mm->mm_users) == 1)
cpumask_copy(mm_cpumask(mm), cpumask_of(cpu));
else
- smp_cross_call_masked(&xcall_flush_tlb_pending,
- ctx, nr, (unsigned long) vaddrs,
- mm_cpumask(mm));
+ smp_call_function_many(mm_cpumask(mm), tlb_pending_func,
+ &info, 1);
__flush_tlb_pending(ctx, nr, vaddrs);
put_cpu();
}
+void smp_flush_tlb_page(struct mm_struct *mm, unsigned long vaddr)
+{
+ unsigned long context = CTX_HWBITS(mm->context);
+ int cpu = get_cpu();
+
+ if (mm == current->mm && atomic_read(&mm->mm_users) == 1)
+ cpumask_copy(mm_cpumask(mm), cpumask_of(cpu));
+ else
+ smp_cross_call_masked(&xcall_flush_tlb_page,
+ context, vaddr, 0,
+ mm_cpumask(mm));
+ __flush_tlb_page(context, vaddr);
+
+ put_cpu();
+}
+
void smp_flush_tlb_kernel_range(unsigned long start, unsigned long end)
{
start &= PAGE_MASK;
void bit_map_init(struct bit_map *t, unsigned long *map, int size)
{
-
- if ((size & 07) != 0)
- BUG();
- memset(map, 0, size>>3);
-
+ bitmap_zero(map, size);
memset(t, 0, sizeof *t);
spin_lock_init(&t->lock);
t->map = map;
#define IOMMU_RNGE IOMMU_RNGE_256MB
#define IOMMU_START 0xF0000000
#define IOMMU_WINSIZE (256*1024*1024U)
-#define IOMMU_NPTES (IOMMU_WINSIZE/PAGE_SIZE) /* 64K PTEs, 265KB */
+#define IOMMU_NPTES (IOMMU_WINSIZE/PAGE_SIZE) /* 64K PTEs, 256KB */
#define IOMMU_ORDER 6 /* 4096 * (1<<6) */
/* srmmu.c */
SRMMU_NOCACHE_ALIGN_MAX, 0UL);
memset(srmmu_nocache_pool, 0, srmmu_nocache_size);
- srmmu_nocache_bitmap = __alloc_bootmem(bitmap_bits >> 3, SMP_CACHE_BYTES, 0UL);
+ srmmu_nocache_bitmap =
+ __alloc_bootmem(BITS_TO_LONGS(bitmap_bits) * sizeof(long),
+ SMP_CACHE_BYTES, 0UL);
bit_map_init(&srmmu_nocache_map, srmmu_nocache_bitmap, bitmap_bits);
srmmu_swapper_pg_dir = __srmmu_get_nocache(SRMMU_PGD_TABLE_SIZE, SRMMU_PGD_TABLE_SIZE);
void flush_tlb_pending(void)
{
struct tlb_batch *tb = &get_cpu_var(tlb_batch);
+ struct mm_struct *mm = tb->mm;
- if (tb->tlb_nr) {
- flush_tsb_user(tb);
+ if (!tb->tlb_nr)
+ goto out;
- if (CTX_VALID(tb->mm->context)) {
+ flush_tsb_user(tb);
+
+ if (CTX_VALID(mm->context)) {
+ if (tb->tlb_nr == 1) {
+ global_flush_tlb_page(mm, tb->vaddrs[0]);
+ } else {
#ifdef CONFIG_SMP
smp_flush_tlb_pending(tb->mm, tb->tlb_nr,
&tb->vaddrs[0]);
tb->tlb_nr, &tb->vaddrs[0]);
#endif
}
- tb->tlb_nr = 0;
}
+ tb->tlb_nr = 0;
+
+out:
put_cpu_var(tlb_batch);
}
+void arch_enter_lazy_mmu_mode(void)
+{
+ struct tlb_batch *tb = &__get_cpu_var(tlb_batch);
+
+ tb->active = 1;
+}
+
+void arch_leave_lazy_mmu_mode(void)
+{
+ struct tlb_batch *tb = &__get_cpu_var(tlb_batch);
+
+ if (tb->tlb_nr)
+ flush_tlb_pending();
+ tb->active = 0;
+}
+
static void tlb_batch_add_one(struct mm_struct *mm, unsigned long vaddr,
bool exec)
{
nr = 0;
}
+ if (!tb->active) {
+ global_flush_tlb_page(mm, vaddr);
+ flush_tsb_user_page(mm, vaddr);
+ return;
+ }
+
if (nr == 0)
tb->mm = mm;
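
Taken together, the hooks above give the batching flow below — a minimal sketch assuming
the generic lazy-MMU calling convention (loop body illustrative): entries are only queued
while tb->active is set; outside lazy mode, tlb_batch_add_one() degenerates to an
immediate single-page flush.

	arch_enter_lazy_mmu_mode();		/* tb->active = 1 */
	for (addr = start; addr < end; addr += PAGE_SIZE)
		ptep_get_and_clear(mm, addr, ptep);	/* queued in tlb_batch */
	arch_leave_lazy_mmu_mode();	/* drains pending entries, tb->active = 0 */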
#include <linux/preempt.h>
#include <linux/slab.h>
#include <asm/page.h>
-#include <asm/tlbflush.h>
-#include <asm/tlb.h>
-#include <asm/mmu_context.h>
#include <asm/pgtable.h>
+#include <asm/mmu_context.h>
#include <asm/tsb.h>
+#include <asm/tlb.h>
#include <asm/oplib.h>
extern struct tsb swapper_tsb[KERNEL_TSB_NENTRIES];
}
}
-static void __flush_tsb_one(struct tlb_batch *tb, unsigned long hash_shift,
- unsigned long tsb, unsigned long nentries)
+static void __flush_tsb_one_entry(unsigned long tsb, unsigned long v,
+ unsigned long hash_shift,
+ unsigned long nentries)
{
- unsigned long i;
+ unsigned long tag, ent, hash;
- for (i = 0; i < tb->tlb_nr; i++) {
- unsigned long v = tb->vaddrs[i];
- unsigned long tag, ent, hash;
+ v &= ~0x1UL;
+ hash = tsb_hash(v, hash_shift, nentries);
+ ent = tsb + (hash * sizeof(struct tsb));
+ tag = (v >> 22UL);
- v &= ~0x1UL;
+ tsb_flush(ent, tag);
+}
- hash = tsb_hash(v, hash_shift, nentries);
- ent = tsb + (hash * sizeof(struct tsb));
- tag = (v >> 22UL);
+static void __flush_tsb_one(struct tlb_batch *tb, unsigned long hash_shift,
+ unsigned long tsb, unsigned long nentries)
+{
+ unsigned long i;
- tsb_flush(ent, tag);
- }
+ for (i = 0; i < tb->tlb_nr; i++)
+ __flush_tsb_one_entry(tsb, tb->vaddrs[i], hash_shift, nentries);
}
void flush_tsb_user(struct tlb_batch *tb)
spin_unlock_irqrestore(&mm->context.lock, flags);
}
+void flush_tsb_user_page(struct mm_struct *mm, unsigned long vaddr)
+{
+ unsigned long nentries, base, flags;
+
+ spin_lock_irqsave(&mm->context.lock, flags);
+
+ base = (unsigned long) mm->context.tsb_block[MM_TSB_BASE].tsb;
+ nentries = mm->context.tsb_block[MM_TSB_BASE].tsb_nentries;
+ if (tlb_type == cheetah_plus || tlb_type == hypervisor)
+ base = __pa(base);
+ __flush_tsb_one_entry(base, vaddr, PAGE_SHIFT, nentries);
+
+#if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE)
+ if (mm->context.tsb_block[MM_TSB_HUGE].tsb) {
+ base = (unsigned long) mm->context.tsb_block[MM_TSB_HUGE].tsb;
+ nentries = mm->context.tsb_block[MM_TSB_HUGE].tsb_nentries;
+ if (tlb_type == cheetah_plus || tlb_type == hypervisor)
+ base = __pa(base);
+ __flush_tsb_one_entry(base, vaddr, HPAGE_SHIFT, nentries);
+ }
+#endif
+ spin_unlock_irqrestore(&mm->context.lock, flags);
+}
+
#define HV_PGSZ_IDX_BASE HV_PGSZ_IDX_8K
#define HV_PGSZ_MASK_BASE HV_PGSZ_MASK_8K
nop
nop
+ .align 32
+ .globl __flush_tlb_page
+__flush_tlb_page: /* 22 insns */
+ /* %o0 = context, %o1 = vaddr */
+ rdpr %pstate, %g7
+ andn %g7, PSTATE_IE, %g2
+ wrpr %g2, %pstate
+ mov SECONDARY_CONTEXT, %o4
+ ldxa [%o4] ASI_DMMU, %g2
+ stxa %o0, [%o4] ASI_DMMU
+ andcc %o1, 1, %g0
+ andn %o1, 1, %o3
+ be,pn %icc, 1f
+ or %o3, 0x10, %o3
+ stxa %g0, [%o3] ASI_IMMU_DEMAP
+1: stxa %g0, [%o3] ASI_DMMU_DEMAP
+ membar #Sync
+ stxa %g2, [%o4] ASI_DMMU
+ sethi %hi(KERNBASE), %o4
+ flush %o4
+ retl
+ wrpr %g7, 0x0, %pstate
+ nop
+ nop
+ nop
+ nop
+
.align 32
.globl __flush_tlb_pending
__flush_tlb_pending: /* 26 insns */
retl
wrpr %g7, 0x0, %pstate
+__cheetah_flush_tlb_page: /* 22 insns */
+ /* %o0 = context, %o1 = vaddr */
+ rdpr %pstate, %g7
+ andn %g7, PSTATE_IE, %g2
+ wrpr %g2, 0x0, %pstate
+ wrpr %g0, 1, %tl
+ mov PRIMARY_CONTEXT, %o4
+ ldxa [%o4] ASI_DMMU, %g2
+ srlx %g2, CTX_PGSZ1_NUC_SHIFT, %o3
+ sllx %o3, CTX_PGSZ1_NUC_SHIFT, %o3
+ or %o0, %o3, %o0 /* Preserve nucleus page size fields */
+ stxa %o0, [%o4] ASI_DMMU
+ andcc %o1, 1, %g0
+ be,pn %icc, 1f
+ andn %o1, 1, %o3
+ stxa %g0, [%o3] ASI_IMMU_DEMAP
+1: stxa %g0, [%o3] ASI_DMMU_DEMAP
+ membar #Sync
+ stxa %g2, [%o4] ASI_DMMU
+ sethi %hi(KERNBASE), %o4
+ flush %o4
+ wrpr %g0, 0, %tl
+ retl
+ wrpr %g7, 0x0, %pstate
+
__cheetah_flush_tlb_pending: /* 27 insns */
/* %o0 = context, %o1 = nr, %o2 = vaddrs[] */
rdpr %pstate, %g7
retl
nop
+__hypervisor_flush_tlb_page: /* 11 insns */
+ /* %o0 = context, %o1 = vaddr */
+ mov %o0, %g2
+ mov %o1, %o0 /* ARG0: vaddr + IMMU-bit */
+ mov %g2, %o1 /* ARG1: mmu context */
+ mov HV_MMU_ALL, %o2 /* ARG2: flags */
+ srlx %o0, PAGE_SHIFT, %o0
+ sllx %o0, PAGE_SHIFT, %o0
+ ta HV_MMU_UNMAP_ADDR_TRAP
+ brnz,pn %o0, __hypervisor_tlb_tl0_error
+ mov HV_MMU_UNMAP_ADDR_TRAP, %o1
+ retl
+ nop
+
__hypervisor_flush_tlb_pending: /* 16 insns */
/* %o0 = context, %o1 = nr, %o2 = vaddrs[] */
sllx %o1, 3, %g1
call tlb_patch_one
mov 19, %o2
+ sethi %hi(__flush_tlb_page), %o0
+ or %o0, %lo(__flush_tlb_page), %o0
+ sethi %hi(__cheetah_flush_tlb_page), %o1
+ or %o1, %lo(__cheetah_flush_tlb_page), %o1
+ call tlb_patch_one
+ mov 22, %o2
+
sethi %hi(__flush_tlb_pending), %o0
or %o0, %lo(__flush_tlb_pending), %o0
sethi %hi(__cheetah_flush_tlb_pending), %o1
nop
nop
- .globl xcall_flush_tlb_pending
-xcall_flush_tlb_pending: /* 21 insns */
- /* %g5=context, %g1=nr, %g7=vaddrs[] */
- sllx %g1, 3, %g1
+ .globl xcall_flush_tlb_page
+xcall_flush_tlb_page: /* 17 insns */
+ /* %g5=context, %g1=vaddr */
mov PRIMARY_CONTEXT, %g4
ldxa [%g4] ASI_DMMU, %g2
srlx %g2, CTX_PGSZ1_NUC_SHIFT, %g4
or %g5, %g4, %g5
mov PRIMARY_CONTEXT, %g4
stxa %g5, [%g4] ASI_DMMU
-1: sub %g1, (1 << 3), %g1
- ldx [%g7 + %g1], %g5
- andcc %g5, 0x1, %g0
+ andcc %g1, 0x1, %g0
be,pn %icc, 2f
-
- andn %g5, 0x1, %g5
+ andn %g1, 0x1, %g5
stxa %g0, [%g5] ASI_IMMU_DEMAP
2: stxa %g0, [%g5] ASI_DMMU_DEMAP
membar #Sync
- brnz,pt %g1, 1b
- nop
stxa %g2, [%g4] ASI_DMMU
retry
nop
+ nop
.globl xcall_flush_tlb_kernel_range
xcall_flush_tlb_kernel_range: /* 25 insns */
membar #Sync
retry
- .globl __hypervisor_xcall_flush_tlb_pending
-__hypervisor_xcall_flush_tlb_pending: /* 21 insns */
- /* %g5=ctx, %g1=nr, %g7=vaddrs[], %g2,%g3,%g4,g6=scratch */
- sllx %g1, 3, %g1
+ .globl __hypervisor_xcall_flush_tlb_page
+__hypervisor_xcall_flush_tlb_page: /* 17 insns */
+ /* %g5=ctx, %g1=vaddr */
mov %o0, %g2
mov %o1, %g3
mov %o2, %g4
-1: sub %g1, (1 << 3), %g1
- ldx [%g7 + %g1], %o0 /* ARG0: virtual address */
+ mov %g1, %o0 /* ARG0: virtual address */
mov %g5, %o1 /* ARG1: mmu context */
mov HV_MMU_ALL, %o2 /* ARG2: flags */
srlx %o0, PAGE_SHIFT, %o0
mov HV_MMU_UNMAP_ADDR_TRAP, %g6
brnz,a,pn %o0, __hypervisor_tlb_xcall_error
mov %o0, %g5
- brnz,pt %g1, 1b
- nop
mov %g2, %o0
mov %g3, %o1
mov %g4, %o2
call tlb_patch_one
mov 10, %o2
+ sethi %hi(__flush_tlb_page), %o0
+ or %o0, %lo(__flush_tlb_page), %o0
+ sethi %hi(__hypervisor_flush_tlb_page), %o1
+ or %o1, %lo(__hypervisor_flush_tlb_page), %o1
+ call tlb_patch_one
+ mov 11, %o2
+
sethi %hi(__flush_tlb_pending), %o0
or %o0, %lo(__flush_tlb_pending), %o0
sethi %hi(__hypervisor_flush_tlb_pending), %o1
call tlb_patch_one
mov 21, %o2
- sethi %hi(xcall_flush_tlb_pending), %o0
- or %o0, %lo(xcall_flush_tlb_pending), %o0
- sethi %hi(__hypervisor_xcall_flush_tlb_pending), %o1
- or %o1, %lo(__hypervisor_xcall_flush_tlb_pending), %o1
+ sethi %hi(xcall_flush_tlb_page), %o0
+ or %o0, %lo(xcall_flush_tlb_page), %o0
+ sethi %hi(__hypervisor_xcall_flush_tlb_page), %o1
+ or %o1, %lo(__hypervisor_xcall_flush_tlb_page), %o1
call tlb_patch_one
- mov 21, %o2
+ mov 17, %o2
sethi %hi(xcall_flush_tlb_kernel_range), %o0
or %o0, %lo(xcall_flush_tlb_kernel_range), %o0
CONFIG_MD_RAID1=m
CONFIG_MD_RAID10=m
CONFIG_MD_RAID456=m
-CONFIG_MULTICORE_RAID456=y
CONFIG_MD_FAULTY=m
CONFIG_BLK_DEV_DM=m
CONFIG_DM_DEBUG=y
CONFIG_MD_RAID1=m
CONFIG_MD_RAID10=m
CONFIG_MD_RAID456=m
-CONFIG_MULTICORE_RAID456=y
CONFIG_MD_FAULTY=m
CONFIG_BLK_DEV_DM=m
CONFIG_DM_DEBUG=y
#include <asm/percpu.h>
#include <arch/spr_def.h>
-/* Set and clear kernel interrupt masks. */
+/*
+ * Set and clear kernel interrupt masks.
+ *
+ * NOTE: __insn_mtspr() is a compiler builtin marked as a memory
+ * clobber. We rely on it being equivalent to a compiler barrier in
+ * this code since arch_local_irq_save() and friends must act as
+ * compiler barriers. This compiler semantic is baked into enough
+ * places that the compiler will maintain it going forward.
+ */
#if CHIP_HAS_SPLIT_INTR_MASK()
#if INT_PERF_COUNT < 32 || INT_AUX_PERF_COUNT < 32 || INT_MEM_ERROR >= 32
# error Fix assumptions about which word various interrupts are in
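
A hedged illustration of the property that note relies on (helper names hypothetical):
a builtin declared with a memory clobber pins ordinary memory accesses on their side of
the mask write, which is what lets the irq-save sequence work without an explicit
barrier().

	unsigned long flags = interrupt_mask_get();	/* save old mask */
	interrupt_mask_set_all();	/* __insn_mtspr(): memory clobber */
	shared_counter++;	/* cannot be hoisted above the mask write */
	interrupt_mask_restore(flags);	/* nor can it sink below the restore */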
#ifdef CONFIG_BLK_DEV_INITRD
-/*
- * Note that the kernel can potentially support other compression
- * techniques than gz, though we don't do so by default. If we ever
- * decide to do so we can either look for other filename extensions,
- * or just allow a file with this name to be compressed with an
- * arbitrary compressor (somewhat counterintuitively).
- */
static int __initdata set_initramfs_file;
-static char __initdata initramfs_file[128] = "initramfs.cpio.gz";
+static char __initdata initramfs_file[128] = "initramfs";
static int __init setup_initramfs_file(char *str)
{
early_param("initramfs_file", setup_initramfs_file);
/*
- * We look for an "initramfs.cpio.gz" file in the hvfs.
- * If there is one, we allocate some memory for it and it will be
- * unpacked to the initramfs.
+ * We look for a file called "initramfs" in the hvfs. If there is one, we
+ * allocate some memory for it and it will be unpacked to the initramfs.
+ * If it's compressed, the initrd code will uncompress it first.
*/
static void __init load_hv_initrd(void)
{
fd = hv_fs_findfile((HV_VirtAddr) initramfs_file);
if (fd == HV_ENOENT) {
- if (set_initramfs_file)
+ if (set_initramfs_file) {
pr_warning("No such hvfs initramfs file '%s'\n",
initramfs_file);
- return;
+ return;
+ } else {
+ /* Try old backwards-compatible name. */
+ fd = hv_fs_findfile((HV_VirtAddr)"initramfs.cpio.gz");
+ if (fd == HV_ENOENT)
+ return;
+ }
}
BUG_ON(fd < 0);
stat = hv_fs_fstat(fd);
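
Usage note (a sketch; file names illustrative): with no boot argument the hvfs is now
searched for "initramfs" first and the legacy name second, while an explicitly named
image, compressed or not, still works:

	initramfs_file=my-initramfs.cpio.gz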
config EFI
bool "EFI runtime service support"
depends on ACPI
+ select UCS2_STRING
---help---
This enables the kernel to use EFI runtime services that are
available (such as the EFI variable services).
# create a compressed vmlinux image from the original vmlinux
#
-targets := vmlinux.lds vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma vmlinux.bin.xz vmlinux.bin.lzo head_$(BITS).o misc.o string.o cmdline.o early_serial_console.o piggy.o
+targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma vmlinux.bin.xz vmlinux.bin.lzo
KBUILD_CFLAGS := -m$(BITS) -D__KERNEL__ $(LINUX_INCLUDE) -O2
KBUILD_CFLAGS += -fno-strict-aliasing -fPIC
$(obj)/piggy.o
$(obj)/eboot.o: KBUILD_CFLAGS += -fshort-wchar -mno-red-zone
-$(obj)/efi_stub_$(BITS).o: KBUILD_CLFAGS += -fshort-wchar -mno-red-zone
ifeq ($(CONFIG_EFI_STUB), y)
VMLINUX_OBJS += $(obj)/eboot.o $(obj)/efi_stub_$(BITS).o
$(obj)/vmlinux.bin: vmlinux FORCE
$(call if_changed,objcopy)
-targets += vmlinux.bin.all vmlinux.relocs
+targets += $(patsubst $(obj)/%,%,$(VMLINUX_OBJS)) vmlinux.bin.all vmlinux.relocs
CMD_RELOCS = arch/x86/tools/relocs
quiet_cmd_relocs = RELOCS $@
*size = len;
}
+static efi_status_t setup_efi_vars(struct boot_params *params)
+{
+ struct setup_data *data;
+ struct efi_var_bootdata *efidata;
+ u64 store_size, remaining_size, var_size;
+ efi_status_t status;
+
+ if (!sys_table->runtime->query_variable_info)
+ return EFI_UNSUPPORTED;
+
+ data = (struct setup_data *)(unsigned long)params->hdr.setup_data;
+
+ while (data && data->next)
+ data = (struct setup_data *)(unsigned long)data->next;
+
+ status = efi_call_phys4(sys_table->runtime->query_variable_info,
+ EFI_VARIABLE_NON_VOLATILE |
+ EFI_VARIABLE_BOOTSERVICE_ACCESS |
+ EFI_VARIABLE_RUNTIME_ACCESS, &store_size,
+ &remaining_size, &var_size);
+
+ if (status != EFI_SUCCESS)
+ return status;
+
+ status = efi_call_phys3(sys_table->boottime->allocate_pool,
+ EFI_LOADER_DATA, sizeof(*efidata), &efidata);
+
+ if (status != EFI_SUCCESS)
+ return status;
+
+ efidata->data.type = SETUP_EFI_VARS;
+ efidata->data.len = sizeof(struct efi_var_bootdata) -
+ sizeof(struct setup_data);
+ efidata->data.next = 0;
+ efidata->store_size = store_size;
+ efidata->remaining_size = remaining_size;
+ efidata->max_var_size = var_size;
+
+ if (data)
+ data->next = (unsigned long)efidata;
+ else
+ params->hdr.setup_data = (unsigned long)efidata;
+
+	return status;
+}
+
static efi_status_t setup_efi_pci(struct boot_params *params)
{
efi_pci_io_protocol *pci;
setup_graphics(boot_params);
+ setup_efi_vars(boot_params);
+
setup_efi_pci(boot_params);
status = efi_call_phys3(sys_table->boottime->allocate_pool,
extern void efi_unmap_memmap(void);
extern void efi_memory_uc(u64 addr, unsigned long size);
+struct efi_var_bootdata {
+ struct setup_data data;
+ u64 store_size;
+ u64 remaining_size;
+ u64 max_var_size;
+};
+
#ifdef CONFIG_EFI
static inline bool efi_is_native(void)
PVOP_VCALL0(pv_mmu_ops.lazy_mode.leave);
}
-void arch_flush_lazy_mmu_mode(void);
+static inline void arch_flush_lazy_mmu_mode(void)
+{
+ PVOP_VCALL0(pv_mmu_ops.lazy_mode.flush);
+}
static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
phys_addr_t phys, pgprot_t flags)
/* Set deferred update mode, used for batching operations. */
void (*enter)(void);
void (*leave)(void);
+ void (*flush)(void);
};
struct pv_time_ops {
void paravirt_enter_lazy_mmu(void);
void paravirt_leave_lazy_mmu(void);
+void paravirt_flush_lazy_mmu(void);
void _paravirt_nop(void);
u32 _paravirt_ident_32(u32);
*/
static inline int syscall_get_nr(struct task_struct *task, struct pt_regs *regs)
{
- return regs->orig_ax & __SYSCALL_MASK;
+ return regs->orig_ax;
}
static inline void syscall_rollback(struct task_struct *task,
struct pt_regs *regs)
{
- regs->ax = regs->orig_ax & __SYSCALL_MASK;
+ regs->ax = regs->orig_ax;
}
static inline long syscall_get_error(struct task_struct *task,
#define tlb_flush(tlb) \
{ \
- if (tlb->fullmm == 0) \
+ if (!tlb->fullmm && !tlb->need_flush_all) \
flush_tlb_mm_range(tlb->mm, tlb->start, tlb->end, 0UL); \
else \
flush_tlb_mm_range(tlb->mm, 0UL, TLB_FLUSH_ALL, 0UL); \
return _hypercall3(int, console_io, cmd, count, str);
}
-extern int __must_check HYPERVISOR_physdev_op_compat(int, void *);
+extern int __must_check xen_physdev_op_compat(int, void *);
static inline int
HYPERVISOR_physdev_op(int cmd, void *arg)
{
int rc = _hypercall2(int, physdev_op, cmd, arg);
if (unlikely(rc == -ENOSYS))
- rc = HYPERVISOR_physdev_op_compat(cmd, arg);
+ rc = xen_physdev_op_compat(cmd, arg);
return rc;
}
#define SETUP_E820_EXT 1
#define SETUP_DTB 2
#define SETUP_PCI 3
+#define SETUP_EFI_VARS 4
/* ram_size flags */
#define RAMDISK_IMAGE_START_MASK 0x07FF
#define SNB_C1_AUTO_UNDEMOTE (1UL << 27)
#define SNB_C3_AUTO_UNDEMOTE (1UL << 28)
+#define MSR_PLATFORM_INFO 0x000000ce
#define MSR_MTRRcap 0x000000fe
#define MSR_IA32_BBL_CR_CTL 0x00000119
#define MSR_IA32_BBL_CR_CTL3 0x0000011e
if (!boot_cpu_has(X86_FEATURE_HYPERVISOR))
return false;
- /*
- * Xen emulates Hyper-V to support enlightened Windows.
- * Check to see first if we are on a Xen Hypervisor.
- */
- if (xen_cpuid_base())
- return false;
-
cpuid(HYPERV_CPUID_VENDOR_AND_MAX_FUNCTIONS,
&eax, &hyp_signature[0], &hyp_signature[1], &hyp_signature[2]);
if (ms_hyperv.features & HV_X64_MSR_TIME_REF_COUNT_AVAILABLE)
clocksource_register_hz(&hyperv_cs, NSEC_PER_SEC/100);
-#if IS_ENABLED(CONFIG_HYPERV)
- /*
- * Setup the IDT for hypervisor callback.
- */
- alloc_intr_gate(HYPERVISOR_CALLBACK_VECTOR, hyperv_callback_vector);
-#endif
}
const __refconst struct hypervisor_x86 x86_hyper_ms_hyperv = {
void hv_register_vmbus_handler(int irq, irq_handler_t handler)
{
+ /*
+ * Setup the IDT for hypervisor callback.
+ */
+ alloc_intr_gate(HYPERVISOR_CALLBACK_VECTOR, hyperv_callback_vector);
+
vmbus_irq = irq;
vmbus_isr = handler;
}
};
static struct extra_reg intel_snb_extra_regs[] __read_mostly = {
- INTEL_EVENT_EXTRA_REG(0xb7, MSR_OFFCORE_RSP_0, 0x3fffffffffull, RSP_0),
- INTEL_EVENT_EXTRA_REG(0xbb, MSR_OFFCORE_RSP_1, 0x3fffffffffull, RSP_1),
+ INTEL_EVENT_EXTRA_REG(0xb7, MSR_OFFCORE_RSP_0, 0x3f807f8fffull, RSP_0),
+ INTEL_EVENT_EXTRA_REG(0xbb, MSR_OFFCORE_RSP_1, 0x3f807f8fffull, RSP_1),
+ EVENT_EXTRA_END
+};
+
+static struct extra_reg intel_snbep_extra_regs[] __read_mostly = {
+ INTEL_EVENT_EXTRA_REG(0xb7, MSR_OFFCORE_RSP_0, 0x3fffff8fffull, RSP_0),
+ INTEL_EVENT_EXTRA_REG(0xbb, MSR_OFFCORE_RSP_1, 0x3fffff8fffull, RSP_1),
EVENT_EXTRA_END
};
x86_pmu.event_constraints = intel_snb_event_constraints;
x86_pmu.pebs_constraints = intel_snb_pebs_event_constraints;
x86_pmu.pebs_aliases = intel_pebs_aliases_snb;
- x86_pmu.extra_regs = intel_snb_extra_regs;
+ if (boot_cpu_data.x86_model == 45)
+ x86_pmu.extra_regs = intel_snbep_extra_regs;
+ else
+ x86_pmu.extra_regs = intel_snb_extra_regs;
/* all extra regs are per-cpu when HT is on */
x86_pmu.er_flags |= ERF_HAS_RSP_1;
x86_pmu.er_flags |= ERF_NO_HT_SHARING;
x86_pmu.event_constraints = intel_ivb_event_constraints;
x86_pmu.pebs_constraints = intel_ivb_pebs_event_constraints;
x86_pmu.pebs_aliases = intel_pebs_aliases_snb;
- x86_pmu.extra_regs = intel_snb_extra_regs;
+ if (boot_cpu_data.x86_model == 62)
+ x86_pmu.extra_regs = intel_snbep_extra_regs;
+ else
+ x86_pmu.extra_regs = intel_snb_extra_regs;
/* all extra regs are per-cpu when HT is on */
x86_pmu.er_flags |= ERF_HAS_RSP_1;
x86_pmu.er_flags |= ERF_NO_HT_SHARING;
if (top <= at)
return 0;
+	memset(&regs, 0, sizeof(regs));
+
ds->bts_index = ds->bts_buffer_base;
perf_sample_data_init(&data, 0, event->hw.last_period);
- regs.ip = 0;
/*
* Prepare a generic sample, i.e. fill in the invariant fields.
u32 eax = 0x00000000;
u32 ebx, ecx = 0, edx;
- if (!have_cpuid_p())
- return X86_VENDOR_UNKNOWN;
-
native_cpuid(&eax, &ebx, &ecx, &edx);
if (CPUID_IS(CPUID_INTEL1, CPUID_INTEL2, CPUID_INTEL3, ebx, ecx, edx))
return X86_VENDOR_UNKNOWN;
}
+static int __cpuinit x86_family(void)
+{
+ u32 eax = 0x00000001;
+ u32 ebx, ecx = 0, edx;
+ int x86;
+
+ native_cpuid(&eax, &ebx, &ecx, &edx);
+
+ x86 = (eax >> 8) & 0xf;
+ if (x86 == 15)
+ x86 += (eax >> 20) & 0xff;
+
+ return x86;
+}
+
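As a standalone model of the decoding above (CPUID signature values illustrative), the
extended family bits at 27:20 contribute only when the base family field saturates
at 15:

	#include <stdio.h>

	static int family(unsigned int eax)	/* mirrors x86_family() */
	{
		int x86 = (eax >> 8) & 0xf;	/* base family, bits 11:8 */
		if (x86 == 15)			/* extended family, bits 27:20 */
			x86 += (eax >> 20) & 0xff;
		return x86;
	}

	int main(void)
	{
		printf("%d\n", family(0x000306a9));	/* 6: base family only */
		printf("%d\n", family(0x00100f22));	/* 16: 15 + extended 1 */
		return 0;
	}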
void __init load_ucode_bsp(void)
{
- int vendor = x86_vendor();
+ int vendor, x86;
+
+ if (!have_cpuid_p())
+ return;
- if (vendor == X86_VENDOR_INTEL)
+ vendor = x86_vendor();
+ x86 = x86_family();
+
+ if (vendor == X86_VENDOR_INTEL && x86 >= 6)
load_ucode_intel_bsp();
}
void __cpuinit load_ucode_ap(void)
{
- int vendor = x86_vendor();
+ int vendor, x86;
+
+ if (!have_cpuid_p())
+ return;
+
+ vendor = x86_vendor();
+ x86 = x86_family();
- if (vendor == X86_VENDOR_INTEL)
+ if (vendor == X86_VENDOR_INTEL && x86 >= 6)
load_ucode_intel_ap();
}
struct microcode_intel ***mc_saved;
mc_saved = (struct microcode_intel ***)
- __pa_symbol(&mc_saved_data->mc_saved);
+ __pa_nodebug(&mc_saved_data->mc_saved);
for (i = 0; i < mc_saved_data->mc_saved_count; i++) {
struct microcode_intel *p;
p = *(struct microcode_intel **)
- __pa(mc_saved_data->mc_saved + i);
- mc_saved_tmp[i] = (struct microcode_intel *)__pa(p);
+ __pa_nodebug(mc_saved_data->mc_saved + i);
+ mc_saved_tmp[i] = (struct microcode_intel *)__pa_nodebug(p);
}
}
#endif
struct cpio_data cd;
long offset = 0;
#ifdef CONFIG_X86_32
- char *p = (char *)__pa_symbol(ucode_name);
+ char *p = (char *)__pa_nodebug(ucode_name);
#else
char *p = ucode_name;
#endif
if (mc_intel == NULL)
return;
- delay_ucode_info_p = (int *)__pa_symbol(&delay_ucode_info);
-	current_mc_date_p = (int *)__pa_symbol(&current_mc_date);
+ delay_ucode_info_p = (int *)__pa_nodebug(&delay_ucode_info);
+	current_mc_date_p = (int *)__pa_nodebug(&current_mc_date);
*delay_ucode_info_p = 1;
*current_mc_date_p = mc_intel->hdr.date;
}
#endif
-static int apply_microcode_early(struct mc_saved_data *mc_saved_data,
- struct ucode_cpu_info *uci)
+static int __cpuinit apply_microcode_early(struct mc_saved_data *mc_saved_data,
+ struct ucode_cpu_info *uci)
{
struct microcode_intel *mc_intel;
unsigned int val[2];
#ifdef CONFIG_X86_32
struct boot_params *boot_params_p;
- boot_params_p = (struct boot_params *)__pa_symbol(&boot_params);
+ boot_params_p = (struct boot_params *)__pa_nodebug(&boot_params);
ramdisk_image = boot_params_p->hdr.ramdisk_image;
ramdisk_size = boot_params_p->hdr.ramdisk_size;
initrd_start_early = ramdisk_image;
initrd_end_early = initrd_start_early + ramdisk_size;
_load_ucode_intel_bsp(
- (struct mc_saved_data *)__pa_symbol(&mc_saved_data),
- (unsigned long *)__pa_symbol(&mc_saved_in_initrd),
+ (struct mc_saved_data *)__pa_nodebug(&mc_saved_data),
+ (unsigned long *)__pa_nodebug(&mc_saved_in_initrd),
initrd_start_early, initrd_end_early, &uci);
#else
ramdisk_image = boot_params.hdr.ramdisk_image;
unsigned long *initrd_start_p;
mc_saved_in_initrd_p =
- (unsigned long *)__pa_symbol(mc_saved_in_initrd);
- mc_saved_data_p = (struct mc_saved_data *)__pa_symbol(&mc_saved_data);
- initrd_start_p = (unsigned long *)__pa_symbol(&initrd_start);
- initrd_start_addr = (unsigned long)__pa_symbol(*initrd_start_p);
+ (unsigned long *)__pa_nodebug(mc_saved_in_initrd);
+ mc_saved_data_p = (struct mc_saved_data *)__pa_nodebug(&mc_saved_data);
+ initrd_start_p = (unsigned long *)__pa_nodebug(&initrd_start);
+ initrd_start_addr = (unsigned long)__pa_nodebug(*initrd_start_p);
#else
mc_saved_data_p = &mc_saved_data;
mc_saved_in_initrd_p = mc_saved_in_initrd;
leave_lazy(PARAVIRT_LAZY_MMU);
}
+void paravirt_flush_lazy_mmu(void)
+{
+ preempt_disable();
+
+ if (paravirt_get_lazy_mode() == PARAVIRT_LAZY_MMU) {
+ arch_leave_lazy_mmu_mode();
+ arch_enter_lazy_mmu_mode();
+ }
+
+ preempt_enable();
+}
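A hedged usage sketch (use_mapping() is hypothetical): callers such as the vmalloc
fault and CPA paths further below invoke the flush hook to push queued updates to the
MMU before relying on them:

	set_pte(ptep, pte);		/* may be queued under lazy mode */
	arch_flush_lazy_mmu_mode();	/* leave + re-enter: drains the queue */
	use_mapping(addr);		/* hypothetical: update now visible */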
+
void paravirt_start_context_switch(struct task_struct *prev)
{
BUG_ON(preemptible());
return this_cpu_read(paravirt_lazy_mode);
}
-void arch_flush_lazy_mmu_mode(void)
-{
- preempt_disable();
-
- if (paravirt_get_lazy_mode() == PARAVIRT_LAZY_MMU) {
- arch_leave_lazy_mmu_mode();
- arch_enter_lazy_mmu_mode();
- }
-
- preempt_enable();
-}
-
struct pv_info pv_info = {
.name = "bare hardware",
.paravirt_enabled = 0,
.lazy_mode = {
.enter = paravirt_nop,
.leave = paravirt_nop,
+ .flush = paravirt_nop,
},
.set_fixmap = native_set_fixmap,
/*
* Keep the crash kernel below this limit. On 32 bits earlier kernels
* would limit the kernel to the low 512 MiB due to mapping restrictions.
+ * On 64bit, old kexec-tools need it to be under 896MiB.
*/
#ifdef CONFIG_X86_32
-# define CRASH_KERNEL_ADDR_MAX (512 << 20)
+# define CRASH_KERNEL_ADDR_LOW_MAX (512 << 20)
+# define CRASH_KERNEL_ADDR_HIGH_MAX (512 << 20)
#else
-# define CRASH_KERNEL_ADDR_MAX MAXMEM
+# define CRASH_KERNEL_ADDR_LOW_MAX (896UL<<20)
+# define CRASH_KERNEL_ADDR_HIGH_MAX MAXMEM
#endif
static void __init reserve_crashkernel_low(void)
unsigned long long low_base = 0, low_size = 0;
unsigned long total_low_mem;
unsigned long long base;
+ bool auto_set = false;
int ret;
total_low_mem = memblock_mem_size(1UL<<(32-PAGE_SHIFT));
+ /* crashkernel=Y,low */
ret = parse_crashkernel_low(boot_command_line, total_low_mem,
&low_size, &base);
- if (ret != 0 || low_size <= 0)
- return;
+ if (ret != 0) {
+ /*
+			 * Account for two parts from lib/swiotlb.c:
+			 *	swiotlb size: user specified with swiotlb= or the default.
+			 *	swiotlb overflow buffer: currently hardcoded to 32k.
+			 * We round the sum up to 8M to leave room for other
+			 * buffers that may need to stay low too.
+ */
+ low_size = swiotlb_size_or_default() + (8UL<<20);
+ auto_set = true;
+ } else {
+ /* passed with crashkernel=0,low ? */
+ if (!low_size)
+ return;
+ }
low_base = memblock_find_in_range(low_size, (1ULL<<32),
low_size, alignment);
if (!low_base) {
- pr_info("crashkernel low reservation failed - No suitable area found.\n");
+ if (!auto_set)
+ pr_info("crashkernel low reservation failed - No suitable area found.\n");
return;
}
const unsigned long long alignment = 16<<20; /* 16M */
unsigned long long total_mem;
unsigned long long crash_size, crash_base;
+ bool high = false;
int ret;
total_mem = memblock_phys_mem_size();
+ /* crashkernel=XM */
ret = parse_crashkernel(boot_command_line, total_mem,
&crash_size, &crash_base);
- if (ret != 0 || crash_size <= 0)
- return;
+ if (ret != 0 || crash_size <= 0) {
+ /* crashkernel=X,high */
+ ret = parse_crashkernel_high(boot_command_line, total_mem,
+ &crash_size, &crash_base);
+ if (ret != 0 || crash_size <= 0)
+ return;
+ high = true;
+ }
/* 0 means: find the address automatically */
if (crash_base <= 0) {
	 * kexec wants the bzImage below CRASH_KERNEL_ADDR_MAX
*/
crash_base = memblock_find_in_range(alignment,
- CRASH_KERNEL_ADDR_MAX, crash_size, alignment);
+ high ? CRASH_KERNEL_ADDR_HIGH_MAX :
+ CRASH_KERNEL_ADDR_LOW_MAX,
+ crash_size, alignment);
if (!crash_base) {
pr_info("crashkernel reservation failed - No suitable area found.\n");
if (!pv_eoi_enabled(vcpu))
return 0;
return kvm_gfn_to_hva_cache_init(vcpu->kvm, &vcpu->arch.pv_eoi.data,
- addr);
+ addr, sizeof(u8));
}
void kvm_lapic_init(void)
return 0;
}
- if (kvm_gfn_to_hva_cache_init(vcpu->kvm, &vcpu->arch.apf.data, gpa))
+ if (kvm_gfn_to_hva_cache_init(vcpu->kvm, &vcpu->arch.apf.data, gpa,
+ sizeof(u32)))
return 1;
vcpu->arch.apf.send_user_only = !(data & KVM_ASYNC_PF_SEND_ALWAYS);
gpa_offset = data & ~(PAGE_MASK | 1);
- /* Check that the address is 32-byte aligned. */
- if (gpa_offset & (sizeof(struct pvclock_vcpu_time_info) - 1))
- break;
-
if (kvm_gfn_to_hva_cache_init(vcpu->kvm,
- &vcpu->arch.pv_time, data & ~1ULL))
+ &vcpu->arch.pv_time, data & ~1ULL,
+ sizeof(struct pvclock_vcpu_time_info)))
vcpu->arch.pv_time_enabled = false;
else
vcpu->arch.pv_time_enabled = true;
return 1;
if (kvm_gfn_to_hva_cache_init(vcpu->kvm, &vcpu->arch.st.stime,
- data & KVM_STEAL_VALID_BITS))
+ data & KVM_STEAL_VALID_BITS,
+ sizeof(struct kvm_steal_time)))
return 1;
vcpu->arch.st.msr_val = data;
pv_mmu_ops.read_cr3 = lguest_read_cr3;
pv_mmu_ops.lazy_mode.enter = paravirt_enter_lazy_mmu;
pv_mmu_ops.lazy_mode.leave = lguest_leave_lazy_mmu_mode;
+ pv_mmu_ops.lazy_mode.flush = paravirt_flush_lazy_mmu;
pv_mmu_ops.pte_update = lguest_pte_update;
pv_mmu_ops.pte_update_defer = lguest_pte_update;
char c;
unsigned zero_len;
- for (; len; --len) {
+ for (; len; --len, to++) {
if (__get_user_nocheck(c, from++, sizeof(char)))
break;
- if (__put_user_nocheck(c, to++, sizeof(char)))
+ if (__put_user_nocheck(c, to, sizeof(char)))
break;
}
if (pgd_none(*pgd_ref))
return -1;
- if (pgd_none(*pgd))
+ if (pgd_none(*pgd)) {
set_pgd(pgd, *pgd_ref);
- else
+ arch_flush_lazy_mmu_mode();
+ } else {
BUG_ON(pgd_page_vaddr(*pgd) != pgd_page_vaddr(*pgd_ref));
+ }
/*
* Below here mismatches are bugs because these lower tables
s->gpg++;
i += GPS/PAGE_SIZE;
} else if (level == PG_LEVEL_2M) {
- if (!(pte_val(*pte) & _PAGE_PSE)) {
+ if ((pte_val(*pte) & _PAGE_PRESENT) && !(pte_val(*pte) & _PAGE_PSE)) {
printk(KERN_ERR
"%lx level %d but not PSE %Lx\n",
addr, level, (u64)pte_val(*pte));
* We are safe now. Check whether the new pgprot is the same:
*/
old_pte = *kpte;
- old_prot = new_prot = req_prot = pte_pgprot(old_pte);
+ old_prot = req_prot = pte_pgprot(old_pte);
pgprot_val(req_prot) &= ~pgprot_val(cpa->mask_clr);
pgprot_val(req_prot) |= pgprot_val(cpa->mask_set);
* a non present pmd. The canon_pgprot will clear _PAGE_GLOBAL
* for the ancient hardware that doesn't support it.
*/
- if (pgprot_val(new_prot) & _PAGE_PRESENT)
- pgprot_val(new_prot) |= _PAGE_PSE | _PAGE_GLOBAL;
+ if (pgprot_val(req_prot) & _PAGE_PRESENT)
+ pgprot_val(req_prot) |= _PAGE_PSE | _PAGE_GLOBAL;
else
- pgprot_val(new_prot) &= ~(_PAGE_PSE | _PAGE_GLOBAL);
+ pgprot_val(req_prot) &= ~(_PAGE_PSE | _PAGE_GLOBAL);
- new_prot = canon_pgprot(new_prot);
+ req_prot = canon_pgprot(req_prot);
/*
* old_pte points to the large page base address. So we need
* but that can deadlock->flush only current cpu:
*/
__flush_tlb_all();
+
+ arch_flush_lazy_mmu_mode();
}
#ifdef CONFIG_HIBERNATION
void ___pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd)
{
paravirt_release_pmd(__pa(pmd) >> PAGE_SHIFT);
+ /*
+ * NOTE! For PAE, any changes to the top page-directory-pointer-table
+ * entries need a full cr3 reload to flush.
+ */
+#ifdef CONFIG_X86_PAE
+ tlb->need_flush_all = 1;
+#endif
tlb_remove_page(tlb, virt_to_page(pmd));
}
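
Read together with the tlb_flush() hunk earlier, the resulting flow is, as a sketch:
freeing a PAE pmd marks the gather with need_flush_all, so the final flush takes the
full-range path and the next cr3 load refreshes the cached PDPTEs rather than relying
on ranged invlpg.

	___pmd_free_tlb(tlb, pmd);	/* CONFIG_X86_PAE: need_flush_all = 1 */
	...
	tlb_flush(tlb);			/* !fullmm but need_flush_all set */
	/* -> flush_tlb_mm_range(mm, 0UL, TLB_FLUSH_ALL, 0UL) */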
#include <linux/io.h>
#include <linux/reboot.h>
#include <linux/bcd.h>
+#include <linux/ucs2_string.h>
#include <asm/setup.h>
#include <asm/efi.h>
#define EFI_DEBUG 1
+/*
+ * There's some additional metadata associated with each
+ * variable. Intel's reference implementation is 60 bytes - bump that
+ * to account for potential alignment constraints
+ */
+#define VAR_METADATA_SIZE 64
+
struct efi __read_mostly efi = {
.mps = EFI_INVALID_TABLE_ADDR,
.acpi = EFI_INVALID_TABLE_ADDR,
static struct efi efi_phys __initdata;
static efi_system_table_t efi_systab __initdata;
+static u64 efi_var_store_size;
+static u64 efi_var_remaining_size;
+static u64 efi_var_max_var_size;
+static u64 boot_used_size;
+static u64 boot_var_size;
+static u64 active_size;
+
unsigned long x86_efi_facility;
/*
}
early_param("add_efi_memmap", setup_add_efi_memmap);
+static bool efi_no_storage_paranoia;
+
+static int __init setup_storage_paranoia(char *arg)
+{
+ efi_no_storage_paranoia = true;
+ return 0;
+}
+early_param("efi_no_storage_paranoia", setup_storage_paranoia);
+
static efi_status_t virt_efi_get_time(efi_time_t *tm, efi_time_cap_t *tc)
{
efi_char16_t *name,
efi_guid_t *vendor)
{
- return efi_call_virt3(get_next_variable,
- name_size, name, vendor);
+ efi_status_t status;
+ static bool finished = false;
+ static u64 var_size;
+
+ status = efi_call_virt3(get_next_variable,
+ name_size, name, vendor);
+
+ if (status == EFI_NOT_FOUND) {
+ finished = true;
+ if (var_size < boot_used_size) {
+ boot_var_size = boot_used_size - var_size;
+ active_size += boot_var_size;
+ } else {
+ printk(KERN_WARNING FW_BUG "efi: Inconsistent initial sizes\n");
+ }
+ }
+
+ if (boot_used_size && !finished) {
+ unsigned long size;
+ u32 attr;
+ efi_status_t s;
+ void *tmp;
+
+ s = virt_efi_get_variable(name, vendor, &attr, &size, NULL);
+
+ if (s != EFI_BUFFER_TOO_SMALL || !size)
+ return status;
+
+ tmp = kmalloc(size, GFP_ATOMIC);
+
+ if (!tmp)
+ return status;
+
+ s = virt_efi_get_variable(name, vendor, &attr, &size, tmp);
+
+ if (s == EFI_SUCCESS && (attr & EFI_VARIABLE_NON_VOLATILE)) {
+ var_size += size;
+ var_size += ucs2_strsize(name, 1024);
+ active_size += size;
+ active_size += VAR_METADATA_SIZE;
+ active_size += ucs2_strsize(name, 1024);
+ }
+
+ kfree(tmp);
+ }
+
+ return status;
}
static efi_status_t virt_efi_set_variable(efi_char16_t *name,
unsigned long data_size,
void *data)
{
- return efi_call_virt5(set_variable,
- name, vendor, attr,
- data_size, data);
+ efi_status_t status;
+ u32 orig_attr = 0;
+ unsigned long orig_size = 0;
+
+ status = virt_efi_get_variable(name, vendor, &orig_attr, &orig_size,
+ NULL);
+
+ if (status != EFI_BUFFER_TOO_SMALL)
+ orig_size = 0;
+
+ status = efi_call_virt5(set_variable,
+ name, vendor, attr,
+ data_size, data);
+
+ if (status == EFI_SUCCESS) {
+ if (orig_size) {
+ active_size -= orig_size;
+ active_size -= ucs2_strsize(name, 1024);
+ active_size -= VAR_METADATA_SIZE;
+ }
+ if (data_size) {
+ active_size += data_size;
+ active_size += ucs2_strsize(name, 1024);
+ active_size += VAR_METADATA_SIZE;
+ }
+ }
+
+ return status;
}
static efi_status_t virt_efi_query_variable_info(u32 attr,
char vendor[100] = "unknown";
int i = 0;
void *tmp;
+ struct setup_data *data;
+ struct efi_var_bootdata *efi_var_data;
+ u64 pa_data;
#ifdef CONFIG_X86_32
if (boot_params.efi_info.efi_systab_hi ||
if (efi_systab_init(efi_phys.systab))
return;
+ pa_data = boot_params.hdr.setup_data;
+ while (pa_data) {
+ data = early_ioremap(pa_data, sizeof(*efi_var_data));
+ if (data->type == SETUP_EFI_VARS) {
+ efi_var_data = (struct efi_var_bootdata *)data;
+
+ efi_var_store_size = efi_var_data->store_size;
+ efi_var_remaining_size = efi_var_data->remaining_size;
+ efi_var_max_var_size = efi_var_data->max_var_size;
+ }
+ pa_data = data->next;
+ early_iounmap(data, sizeof(*efi_var_data));
+ }
+
+ boot_used_size = efi_var_store_size - efi_var_remaining_size;
+
set_bit(EFI_SYSTEM_TABLES, &x86_efi_facility);
/*
}
return 0;
}
+
+/*
+ * Some firmware has serious problems when using more than 50% of the EFI
+ * variable store, i.e. it triggers bugs that can brick machines. Ensure that
+ * we never use more than this safe limit.
+ *
+ * Return EFI_SUCCESS if it is safe to write 'size' bytes to the variable
+ * store.
+ */
+efi_status_t efi_query_variable_store(u32 attributes, unsigned long size)
+{
+ efi_status_t status;
+ u64 storage_size, remaining_size, max_size;
+
+ status = efi.query_variable_info(attributes, &storage_size,
+ &remaining_size, &max_size);
+ if (status != EFI_SUCCESS)
+ return status;
+
+ if (!max_size && remaining_size > size)
+ printk_once(KERN_ERR FW_BUG "Broken EFI implementation"
+ " is returning MaxVariableSize=0\n");
+ /*
+ * Some firmware implementations refuse to boot if there's insufficient
+ * space in the variable store. We account for that by refusing the
+ * write if permitting it would reduce the available space to under
+ * 50%. However, some firmware won't reclaim variable space until
+ * after the used (not merely the actively used) space drops below
+ * a threshold. We can approximate that case with the value calculated
+ * above. If both the firmware and our calculations indicate that the
+ * available space would drop below 50%, refuse the write.
+ */
+
+ if (!storage_size || size > remaining_size ||
+ (max_size && size > max_size))
+ return EFI_OUT_OF_RESOURCES;
+
+ if (!efi_no_storage_paranoia &&
+ ((active_size + size + VAR_METADATA_SIZE > storage_size / 2) &&
+ (remaining_size - size < storage_size / 2)))
+ return EFI_OUT_OF_RESOURCES;
+
+ return EFI_SUCCESS;
+}
+EXPORT_SYMBOL_GPL(efi_query_variable_store);
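
A standalone model (illustrative numbers, not the kernel code) of the arithmetic above:
a write is refused only when both our own accounting and the firmware's remaining_size
say the store would drop below half full.

	#include <stdio.h>
	#include <stdbool.h>

	int main(void)
	{
		/* illustrative sizes, in bytes */
		unsigned long long storage_size   = 64 * 1024;	/* whole store  */
		unsigned long long remaining_size = 30 * 1024;	/* per firmware */
		unsigned long long active_size    = 30 * 1024;	/* our estimate */
		unsigned long long meta = 64;		/* VAR_METADATA_SIZE */
		unsigned long long size = 4 * 1024;	/* proposed write  */

		bool refuse = (active_size + size + meta > storage_size / 2) &&
			      (remaining_size - size < storage_size / 2);
		printf("write %s\n", refuse ? "refused" : "allowed"); /* refused */
		return 0;
	}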
__xen_write_cr3(true, cr3);
xen_mc_issue(PARAVIRT_LAZY_CPU); /* interrupts restored */
-
- pv_mmu_ops.write_cr3 = &xen_write_cr3;
}
#endif
}
/* Set the page permissions on an identity-mapped pages */
-static void set_page_prot(void *addr, pgprot_t prot)
+static void set_page_prot_flags(void *addr, pgprot_t prot, unsigned long flags)
{
unsigned long pfn = __pa(addr) >> PAGE_SHIFT;
pte_t pte = pfn_pte(pfn, prot);
- if (HYPERVISOR_update_va_mapping((unsigned long)addr, pte, 0))
+ if (HYPERVISOR_update_va_mapping((unsigned long)addr, pte, flags))
BUG();
}
+static void set_page_prot(void *addr, pgprot_t prot)
+{
+ return set_page_prot_flags(addr, prot, UVMF_NONE);
+}
#ifdef CONFIG_X86_32
static void __init xen_map_identity_early(pmd_t *pmd, unsigned long max_pfn)
{
unsigned long addr)
{
if (*pt_base == PFN_DOWN(__pa(addr))) {
- set_page_prot((void *)addr, PAGE_KERNEL);
+ set_page_prot_flags((void *)addr, PAGE_KERNEL, UVMF_INVLPG);
clear_page((void *)addr);
(*pt_base)++;
}
if (*pt_end == PFN_DOWN(__pa(addr))) {
- set_page_prot((void *)addr, PAGE_KERNEL);
+ set_page_prot_flags((void *)addr, PAGE_KERNEL, UVMF_INVLPG);
clear_page((void *)addr);
(*pt_end)--;
}
#endif
#ifdef CONFIG_X86_64
+ pv_mmu_ops.write_cr3 = &xen_write_cr3;
SetPagePinned(virt_to_page(level3_user_vsyscall));
#endif
xen_mark_init_mm_pinned();
.lazy_mode = {
.enter = paravirt_enter_lazy_mmu,
.leave = xen_leave_lazy_mmu,
+ .flush = paravirt_flush_lazy_mmu,
},
.set_fixmap = xen_set_fixmap,
EXPORT_TRACEPOINT_SYMBOL_GPL(block_bio_remap);
EXPORT_TRACEPOINT_SYMBOL_GPL(block_rq_remap);
+EXPORT_TRACEPOINT_SYMBOL_GPL(block_bio_complete);
EXPORT_TRACEPOINT_SYMBOL_GPL(block_unplug);
DEFINE_IDA(blk_queue_ida);
* copied from blk_rq_pos(rq).
*/
if (error_sector)
- *error_sector = bio->bi_sector;
+ *error_sector = bio->bi_sector;
if (!bio_flagged(bio, BIO_UPTODATE))
ret = -EIO;
unsigned long val; \
ssize_t ret; \
ret = queue_var_store(&val, page, count); \
+ if (ret < 0) \
+ return ret; \
if (neg) \
val = !val; \
\
struct crypto_rfc4543_req_ctx {
u8 auth_tag[16];
+ u8 assocbuf[32];
struct scatterlist cipher[1];
struct scatterlist payload[2];
struct scatterlist assoc[2];
scatterwalk_crypto_chain(payload, dst, vdst == req->iv + 8, 2);
assoclen += 8 + req->cryptlen - (enc ? 0 : authsize);
- sg_init_table(assoc, 2);
- sg_set_page(assoc, sg_page(req->assoc), req->assoc->length,
- req->assoc->offset);
+ if (req->assoc->length == req->assoclen) {
+ sg_init_table(assoc, 2);
+ sg_set_page(assoc, sg_page(req->assoc), req->assoc->length,
+ req->assoc->offset);
+ } else {
+ BUG_ON(req->assoclen > sizeof(rctx->assocbuf));
+
+ scatterwalk_map_and_copy(rctx->assocbuf, req->assoc, 0,
+ req->assoclen, 0);
+
+ sg_init_table(assoc, 2);
+ sg_set_buf(assoc, rctx->assocbuf, req->assoclen);
+ }
scatterwalk_crypto_chain(assoc, payload, 0, 2);
aead_request_set_tfm(subreq, ctx->child);
config ACPI_BGRT
bool "Boottime Graphics Resource Table support"
- depends on EFI
+ depends on EFI && X86
help
This driver adds support for exposing the ACPI Boottime Graphics
Resource Table, which allows the operating system to obtain
acpi_handle handle;
acpi_status status;
- handle = ACPI_HANDLE(&adapter->dev);
+ handle = ACPI_HANDLE(adapter->dev.parent);
if (!handle)
return;
return rc;
data_len = estatus->data_length;
gdata = (struct acpi_hest_generic_data *)(estatus + 1);
- while (data_len > sizeof(*gdata)) {
+ while (data_len >= sizeof(*gdata)) {
gedata_len = gdata->error_data_length;
if (gedata_len > data_len - sizeof(*gdata))
return -EINVAL;
struct acpi_pci_root *root;
struct acpi_pci_driver *driver;
u32 flags, base_flags;
- bool is_osc_granted = false;
root = kzalloc(sizeof(struct acpi_pci_root), GFP_KERNEL);
if (!root)
flags = base_flags = OSC_PCI_SEGMENT_GROUPS_SUPPORT;
acpi_pci_osc_support(root, flags);
+ /*
+ * TBD: Need PCI interface for enumeration/configuration of roots.
+ */
+
+ mutex_lock(&acpi_pci_root_lock);
+ list_add_tail(&root->node, &acpi_pci_roots);
+ mutex_unlock(&acpi_pci_root_lock);
+
+ /*
+ * Scan the Root Bridge
+ * --------------------
+ * Must do this prior to any attempt to bind the root device, as the
+ * PCI namespace does not get created until this call is made (and
+ * thus the root bridge's pci_dev does not exist).
+ */
+ root->bus = pci_acpi_scan_root(root);
+ if (!root->bus) {
+ printk(KERN_ERR PREFIX
+ "Bus %04x:%02x not present in PCI namespace\n",
+ root->segment, (unsigned int)root->secondary.start);
+ result = -ENODEV;
+ goto out_del_root;
+ }
+
/* Indicate support for various _OSC capabilities. */
if (pci_ext_cfg_avail())
flags |= OSC_EXT_PCI_CONFIG_SUPPORT;
flags = base_flags;
}
}
+
if (!pcie_ports_disabled
&& (flags & ACPI_PCIE_REQ_SUPPORT) == ACPI_PCIE_REQ_SUPPORT) {
flags = OSC_PCI_EXPRESS_CAP_STRUCTURE_CONTROL
status = acpi_pci_osc_control_set(device->handle, &flags,
OSC_PCI_EXPRESS_CAP_STRUCTURE_CONTROL);
if (ACPI_SUCCESS(status)) {
- is_osc_granted = true;
dev_info(&device->dev,
"ACPI _OSC control (0x%02x) granted\n", flags);
+ if (acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_ASPM) {
+ /*
+ * We have ASPM control, but the FADT indicates
+ * that it's unsupported. Clear it.
+ */
+ pcie_clear_aspm(root->bus);
+ }
} else {
- is_osc_granted = false;
dev_info(&device->dev,
"ACPI _OSC request failed (%s), "
"returned control mask: 0x%02x\n",
acpi_format_exception(status), flags);
+ pr_info("ACPI _OSC control for PCIe not granted, "
+ "disabling ASPM\n");
+ pcie_no_aspm();
}
} else {
dev_info(&device->dev,
- "Unable to request _OSC control "
- "(_OSC support mask: 0x%02x)\n", flags);
- }
-
- /*
- * TBD: Need PCI interface for enumeration/configuration of roots.
- */
-
- mutex_lock(&acpi_pci_root_lock);
- list_add_tail(&root->node, &acpi_pci_roots);
- mutex_unlock(&acpi_pci_root_lock);
-
- /*
- * Scan the Root Bridge
- * --------------------
- * Must do this prior to any attempt to bind the root device, as the
- * PCI namespace does not get created until this call is made (and
- * thus the root bridge's pci_dev does not exist).
- */
- root->bus = pci_acpi_scan_root(root);
- if (!root->bus) {
- printk(KERN_ERR PREFIX
- "Bus %04x:%02x not present in PCI namespace\n",
- root->segment, (unsigned int)root->secondary.start);
- result = -ENODEV;
- goto out_del_root;
- }
-
- /* ASPM setting */
- if (is_osc_granted) {
- if (acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_ASPM)
- pcie_clear_aspm(root->bus);
- } else {
- pr_info("ACPI _OSC control for PCIe not granted, "
- "disabling ASPM\n");
- pcie_no_aspm();
+ "Unable to request _OSC control "
+ "(_OSC support mask: 0x%02x)\n", flags);
}
pci_acpi_add_bus_pm_notifier(device, root->bus);
static void handle_root_bridge_removal(struct acpi_device *device)
{
+ acpi_status status;
struct acpi_eject_event *ej_event;
ej_event = kmalloc(sizeof(*ej_event), GFP_KERNEL);
ej_event->device = device;
ej_event->event = ACPI_NOTIFY_EJECT_REQUEST;
- acpi_bus_hot_remove_device(ej_event);
+ status = acpi_os_hotplug_execute(acpi_bus_hot_remove_device, ej_event);
+ if (ACPI_FAILURE(status))
+ kfree(ej_event);
}
static void _handle_hotplug_event_root(struct work_struct *work)
handle = hp_work->handle;
type = hp_work->type;
- root = acpi_pci_find_root(handle);
+ acpi_scan_lock_acquire();
+ root = acpi_pci_find_root(handle);
acpi_get_name(handle, ACPI_FULL_PATHNAME, &buffer);
switch (type) {
break;
}
+ acpi_scan_lock_release();
kfree(hp_work); /* allocated in handle_hotplug_event_bridge */
kfree(buffer.pointer);
}
static DEFINE_PER_CPU(struct cpuidle_device *, acpi_cpuidle_device);
-static struct acpi_processor_cx *acpi_cstate[CPUIDLE_STATE_MAX];
+static DEFINE_PER_CPU(struct acpi_processor_cx * [CPUIDLE_STATE_MAX],
+ acpi_cstate);
static int disabled_by_idle_boot_param(void)
{
struct cpuidle_driver *drv, int index)
{
struct acpi_processor *pr;
- struct acpi_processor_cx *cx = acpi_cstate[index];
+ struct acpi_processor_cx *cx = per_cpu(acpi_cstate[index], dev->cpu);
pr = __this_cpu_read(processors);
*/
static int acpi_idle_play_dead(struct cpuidle_device *dev, int index)
{
- struct acpi_processor_cx *cx = acpi_cstate[index];
+ struct acpi_processor_cx *cx = per_cpu(acpi_cstate[index], dev->cpu);
ACPI_FLUSH_CPU_CACHE();
struct cpuidle_driver *drv, int index)
{
struct acpi_processor *pr;
- struct acpi_processor_cx *cx = acpi_cstate[index];
+ struct acpi_processor_cx *cx = per_cpu(acpi_cstate[index], dev->cpu);
pr = __this_cpu_read(processors);
struct cpuidle_driver *drv, int index)
{
struct acpi_processor *pr;
- struct acpi_processor_cx *cx = acpi_cstate[index];
+ struct acpi_processor_cx *cx = per_cpu(acpi_cstate[index], dev->cpu);
pr = __this_cpu_read(processors);
!(acpi_gbl_FADT.flags & ACPI_FADT_C2_MP_SUPPORTED))
continue;
#endif
- acpi_cstate[count] = cx;
+ per_cpu(acpi_cstate[count], dev->cpu) = cx;
count++;
if (count == CPUIDLE_STATE_MAX)
},
{
.callback = init_nvs_nosave,
+ .ident = "Sony Vaio VGN-FW21M",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "VGN-FW21M"),
+ },
+ },
+ {
+ .callback = init_nvs_nosave,
.ident = "Sony Vaio VPCEB17FX",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
option libata.noacpi=1
config SATA_ZPODD
- bool "SATA Zero Power ODD Support"
+ bool "SATA Zero Power Optical Disc Drive (ZPODD) support"
depends on ATA_ACPI
default n
help
- This option adds support for SATA ZPODD. It requires both
- ODD and the platform support, and if enabled, will automatically
- power on/off the ODD when certain condition is satisfied. This
- does not impact user's experience of the ODD, only power is saved
- when ODD is not in use(i.e. no disc inside).
+ This option adds support for SATA Zero Power Optical Disc
+ Drive (ZPODD). It requires both the ODD and the platform
+ support, and if enabled, will automatically power on/off the
+ ODD when certain condition is satisfied. This does not impact
+ end user's experience of the ODD, only power is saved when
+ the ODD is not in use (i.e. no disc inside).
If unsure, say N.
{ PCI_VDEVICE(INTEL, 0x1f37), board_ahci }, /* Avoton RAID */
{ PCI_VDEVICE(INTEL, 0x1f3e), board_ahci }, /* Avoton RAID */
{ PCI_VDEVICE(INTEL, 0x1f3f), board_ahci }, /* Avoton RAID */
+ { PCI_VDEVICE(INTEL, 0x2823), board_ahci }, /* Wellsburg RAID */
+ { PCI_VDEVICE(INTEL, 0x2827), board_ahci }, /* Wellsburg RAID */
{ PCI_VDEVICE(INTEL, 0x8d02), board_ahci }, /* Wellsburg AHCI */
{ PCI_VDEVICE(INTEL, 0x8d04), board_ahci }, /* Wellsburg RAID */
{ PCI_VDEVICE(INTEL, 0x8d06), board_ahci }, /* Wellsburg RAID */
tolapai_sata,
piix_pata_vmw, /* PIIX4 for VMware, spurious DMA_ERR */
ich8_sata_snb,
+ ich8_2port_sata_snb,
};
struct piix_map_db {
/* SATA Controller IDE (Lynx Point) */
{ 0x8086, 0x8c01, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_sata_snb },
/* SATA Controller IDE (Lynx Point) */
- { 0x8086, 0x8c08, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata },
+ { 0x8086, 0x8c08, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata_snb },
/* SATA Controller IDE (Lynx Point) */
{ 0x8086, 0x8c09, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata },
/* SATA Controller IDE (Lynx Point-LP) */
[ich8m_apple_sata] = &ich8m_apple_map_db,
[tolapai_sata] = &tolapai_map_db,
[ich8_sata_snb] = &ich8_map_db,
+ [ich8_2port_sata_snb] = &ich8_2port_map_db,
};
static struct pci_bits piix_enable_bits[] = {
.udma_mask = ATA_UDMA6,
.port_ops = &piix_sata_ops,
},
+
+ [ich8_2port_sata_snb] =
+ {
+ .flags = PIIX_SATA_FLAGS | PIIX_FLAG_SIDPR
+ | PIIX_FLAG_PIO16,
+ .pio_mask = ATA_PIO4,
+ .mwdma_mask = ATA_MWDMA2,
+ .udma_mask = ATA_UDMA6,
+ .port_ops = &piix_sata_ops,
+ },
};
#define AHCI_PCI_BAR 5
static int prefer_ms_hyperv = 1;
module_param(prefer_ms_hyperv, int, 0);
+MODULE_PARM_DESC(prefer_ms_hyperv,
+ "Prefer Hyper-V paravirtualization drivers instead of ATA, "
+ "0 - Use ATA drivers, "
+ "1 (Default) - Use the paravirtualization drivers.");
static void piix_ignore_devices_quirk(struct ata_host *host)
{
handle = ata_dev_acpi_handle(dev);
if (handle)
- acpi_dev_pm_remove_dependent(handle, &sdev->sdev_gendev);
+ acpi_dev_pm_add_dependent(handle, &sdev->sdev_gendev);
}
static void ata_acpi_unregister_power_resource(struct ata_device *dev)
* from SATA Settings page of Identify Device Data Log.
*/
if (ata_id_has_devslp(dev->id)) {
- u8 sata_setting[ATA_SECT_SIZE];
+ u8 *sata_setting = ap->sector_buf;
int i, j;
dev->flags |= ATA_DFLAG_DEVSLP;
dev->max_sectors = min_t(unsigned int, ATA_MAX_SECTORS_128,
dev->max_sectors);
+ if (dev->horkage & ATA_HORKAGE_MAX_SEC_LBA48)
+ dev->max_sectors = ATA_MAX_SECTORS_LBA48;
+
if (ap->ops->dev_config)
ap->ops->dev_config(dev);
/* Weird ATAPI devices */
{ "TORiSAN DVD-ROM DRD-N216", NULL, ATA_HORKAGE_MAX_SEC_128 },
{ "QUANTUM DAT DAT72-000", NULL, ATA_HORKAGE_ATAPI_MOD16_DMA },
+ { "Slimtype DVD A DS8A8SH", NULL, ATA_HORKAGE_MAX_SEC_LBA48 },
/* Devices we expect to fail diagnostics */
struct scsi_sense_hdr sshdr;
scsi_normalize_sense(sensebuf, SCSI_SENSE_BUFFERSIZE,
&sshdr);
- if (sshdr.sense_key == 0 &&
- sshdr.asc == 0 && sshdr.ascq == 0)
+ if (sshdr.sense_key == RECOVERED_ERROR &&
+ sshdr.asc == 0 && sshdr.ascq == 0x1d)
cmd_result &= ~SAM_STAT_CHECK_CONDITION;
}
struct scsi_sense_hdr sshdr;
scsi_normalize_sense(sensebuf, SCSI_SENSE_BUFFERSIZE,
&sshdr);
- if (sshdr.sense_key == 0 &&
- sshdr.asc == 0 && sshdr.ascq == 0)
+ if (sshdr.sense_key == RECOVERED_ERROR &&
+ sshdr.asc == 0 && sshdr.ascq == 0x1d)
cmd_result &= ~SAM_STAT_CHECK_CONDITION;
}
},
};
-static int __init pata_s3c_init(void)
-{
- return platform_driver_probe(&pata_s3c_driver, pata_s3c_probe);
-}
-
-static void __exit pata_s3c_exit(void)
-{
- platform_driver_unregister(&pata_s3c_driver);
-}
-
-module_init(pata_s3c_init);
-module_exit(pata_s3c_exit);
+module_platform_driver_probe(pata_s3c_driver, pata_s3c_probe);
MODULE_AUTHOR("Abhilash Kesavan, <a.kesavan@samsung.com>");
MODULE_DESCRIPTION("low-level driver for Samsung PATA controller");
if (hcr_base)
iounmap(hcr_base);
- if (host_priv)
- kfree(host_priv);
+ kfree(host_priv);
return retval;
}
#include "power.h"
static DEFINE_MUTEX(dev_pm_qos_mtx);
+static DEFINE_MUTEX(dev_pm_qos_sysfs_mtx);
static BLOCKING_NOTIFIER_HEAD(dev_pm_notifiers);
struct pm_qos_constraints *c;
struct pm_qos_flags *f;
- mutex_lock(&dev_pm_qos_mtx);
+ mutex_lock(&dev_pm_qos_sysfs_mtx);
/*
* If the device's PM QoS resume latency limit or PM QoS flags have been
* exposed to user space, they have to be hidden at this point.
*/
+ pm_qos_sysfs_remove_latency(dev);
+ pm_qos_sysfs_remove_flags(dev);
+
+ mutex_lock(&dev_pm_qos_mtx);
+
__dev_pm_qos_hide_latency_limit(dev);
__dev_pm_qos_hide_flags(dev);
out:
mutex_unlock(&dev_pm_qos_mtx);
+
+ mutex_unlock(&dev_pm_qos_sysfs_mtx);
}
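
The split encodes a lock-ordering rule, sketched below: dev_pm_qos_sysfs_mtx is taken
first and the sysfs attributes are torn down before dev_pm_qos_mtx is acquired, so an
attribute ->store() that blocks on dev_pm_qos_mtx can no longer deadlock against
attribute removal.

	mutex_lock(&dev_pm_qos_sysfs_mtx);	/* outer: sysfs add/remove */
	pm_qos_sysfs_remove_latency(dev);	/* may wait for ->store() */
	mutex_lock(&dev_pm_qos_mtx);		/* inner: constraint lists */
	__dev_pm_qos_hide_latency_limit(dev);
	mutex_unlock(&dev_pm_qos_mtx);
	mutex_unlock(&dev_pm_qos_sysfs_mtx);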
/**
kfree(req);
}
+static void dev_pm_qos_drop_user_request(struct device *dev,
+ enum dev_pm_qos_req_type type)
+{
+ mutex_lock(&dev_pm_qos_mtx);
+ __dev_pm_qos_drop_user_request(dev, type);
+ mutex_unlock(&dev_pm_qos_mtx);
+}
+
/**
* dev_pm_qos_expose_latency_limit - Expose PM QoS latency limit to user space.
* @dev: Device whose PM QoS latency limit is to be exposed to user space.
return ret;
}
+ mutex_lock(&dev_pm_qos_sysfs_mtx);
+
mutex_lock(&dev_pm_qos_mtx);
if (IS_ERR_OR_NULL(dev->power.qos))
if (ret < 0) {
__dev_pm_qos_remove_request(req);
kfree(req);
+ mutex_unlock(&dev_pm_qos_mtx);
goto out;
}
-
dev->power.qos->latency_req = req;
+
+ mutex_unlock(&dev_pm_qos_mtx);
+
ret = pm_qos_sysfs_add_latency(dev);
if (ret)
- __dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_LATENCY);
+ dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_LATENCY);
out:
- mutex_unlock(&dev_pm_qos_mtx);
+ mutex_unlock(&dev_pm_qos_sysfs_mtx);
return ret;
}
EXPORT_SYMBOL_GPL(dev_pm_qos_expose_latency_limit);
static void __dev_pm_qos_hide_latency_limit(struct device *dev)
{
- if (!IS_ERR_OR_NULL(dev->power.qos) && dev->power.qos->latency_req) {
- pm_qos_sysfs_remove_latency(dev);
+ if (!IS_ERR_OR_NULL(dev->power.qos) && dev->power.qos->latency_req)
__dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_LATENCY);
- }
}
/**
*/
void dev_pm_qos_hide_latency_limit(struct device *dev)
{
+ mutex_lock(&dev_pm_qos_sysfs_mtx);
+
+ pm_qos_sysfs_remove_latency(dev);
+
mutex_lock(&dev_pm_qos_mtx);
__dev_pm_qos_hide_latency_limit(dev);
mutex_unlock(&dev_pm_qos_mtx);
+
+ mutex_unlock(&dev_pm_qos_sysfs_mtx);
}
EXPORT_SYMBOL_GPL(dev_pm_qos_hide_latency_limit);
}
pm_runtime_get_sync(dev);
+ mutex_lock(&dev_pm_qos_sysfs_mtx);
+
mutex_lock(&dev_pm_qos_mtx);
if (IS_ERR_OR_NULL(dev->power.qos))
if (ret < 0) {
__dev_pm_qos_remove_request(req);
kfree(req);
+ mutex_unlock(&dev_pm_qos_mtx);
goto out;
}
-
dev->power.qos->flags_req = req;
+
+ mutex_unlock(&dev_pm_qos_mtx);
+
ret = pm_qos_sysfs_add_flags(dev);
if (ret)
- __dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_FLAGS);
+ dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_FLAGS);
out:
- mutex_unlock(&dev_pm_qos_mtx);
+ mutex_unlock(&dev_pm_qos_sysfs_mtx);
pm_runtime_put(dev);
return ret;
}
static void __dev_pm_qos_hide_flags(struct device *dev)
{
- if (!IS_ERR_OR_NULL(dev->power.qos) && dev->power.qos->flags_req) {
- pm_qos_sysfs_remove_flags(dev);
+ if (!IS_ERR_OR_NULL(dev->power.qos) && dev->power.qos->flags_req)
__dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_FLAGS);
- }
}
/**
void dev_pm_qos_hide_flags(struct device *dev)
{
pm_runtime_get_sync(dev);
+ mutex_lock(&dev_pm_qos_sysfs_mtx);
+
+ pm_qos_sysfs_remove_flags(dev);
+
mutex_lock(&dev_pm_qos_mtx);
__dev_pm_qos_hide_flags(dev);
mutex_unlock(&dev_pm_qos_mtx);
+
+ mutex_unlock(&dev_pm_qos_sysfs_mtx);
pm_runtime_put(dev);
}
EXPORT_SYMBOL_GPL(dev_pm_qos_hide_flags);
base = 0;
if (max < rbnode->base_reg + rbnode->blklen)
- end = rbnode->base_reg + rbnode->blklen - max;
+ end = max - rbnode->base_reg + 1;
else
end = rbnode->blklen;
}
}
+ regmap_debugfs_init(map, config->name);
+
ret = regcache_init(map, config);
if (ret != 0)
goto err_range;
- regmap_debugfs_init(map, config->name);
-
/* Add a devres resource for dev_get_regmap() */
m = devres_alloc(dev_get_regmap_release, sizeof(*m), GFP_KERNEL);
if (!m) {
kfree(async->work_buf);
kfree(async);
}
+
+ return ret;
}
trace_regmap_hw_write_start(map->dev, reg,
If unsure, say N.
config BLK_DEV_RSXX
- tristate "RamSam PCIe Flash SSD Device Driver"
+ tristate "IBM FlashSystem 70/80 PCIe SSD Device Driver"
depends on PCI
help
Device driver for IBM's high speed PCIe SSD
- storage devices: RamSan-70 and RamSan-80.
+ storage devices: FlashSystem-70 and FlashSystem-80.
To compile this driver as a module, choose M here: the
module will be called rsxx.
{
struct sk_buff *skb;
- skb = alloc_skb(len, GFP_ATOMIC);
+ skb = alloc_skb(len + MAX_HEADER, GFP_ATOMIC);
if (skb) {
+ skb_reserve(skb, MAX_HEADER);
skb_reset_mac_header(skb);
skb_reset_network_header(skb);
skb->protocol = __constant_htons(ETH_P_AOE);
if (rc)
return rc;
h->cfgtable = remap_pci_mem(pci_resource_start(h->pdev,
- cfg_base_addr_index) + cfg_offset, sizeof(h->cfgtable));
+ cfg_base_addr_index) + cfg_offset, sizeof(*h->cfgtable));
if (!h->cfgtable)
return -ENOMEM;
rc = write_driver_ver_to_cfgtable(h->cfgtable);
lo->lo_flags |= LO_FLAGS_PARTSCAN;
if (lo->lo_flags & LO_FLAGS_PARTSCAN)
ioctl_by_bdev(bdev, BLKRRPART, 0);
+
+ /* Grab the block_device to prevent its destruction after we
+ * put /dev/loopXX inode. Later in loop_clr_fd() we bdput(bdev).
+ */
+ bdgrab(bdev);
return 0;
out_clr:
memset(lo->lo_encrypt_key, 0, LO_KEY_SIZE);
memset(lo->lo_crypt_name, 0, LO_NAME_SIZE);
memset(lo->lo_file_name, 0, LO_NAME_SIZE);
- if (bdev)
+ if (bdev) {
+ bdput(bdev);
invalidate_bdev(bdev);
+ }
set_capacity(lo->lo_disk, 0);
loop_sysfs_exit(lo);
if (bdev) {
goto out_free_dev;
i = err;
+ err = -ENOMEM;
lo->lo_queue = blk_alloc_queue(GFP_KERNEL);
if (!lo->lo_queue)
goto out_free_dev;
gpio_direction_output(host->rst, 1);
/* reset out pin */
- if (!(prv_data->dev_attr & MG_DEV_MASK))
+ if (!(prv_data->dev_attr & MG_DEV_MASK)) {
+ err = -EINVAL;
goto probe_err_3a;
+ }
if (prv_data->dev_attr != MG_BOOT_DEV) {
rsc = platform_get_resource_byname(plat_dev, IORESOURCE_IO,
/* Device instance number, incremented each time a device is probed. */
static int instance;
+struct list_head online_list;
+struct list_head removing_list;
+spinlock_t dev_lock;
+
/*
* Global variable used to hold the major block device number
* allocated in mtip_init().
*/
static int mtip_major;
static struct dentry *dfs_parent;
+static struct dentry *dfs_device_status;
static u32 cpu_use[NR_CPUS];
/*
* Reset the HBA (without sleeping)
*
- * Just like hba_reset, except does not call sleep, so can be
- * run from interrupt/tasklet context.
- *
* @dd Pointer to the driver data structure.
*
* return value
* 0 The reset was successful.
* -1 The HBA Reset bit did not clear.
*/
-static int hba_reset_nosleep(struct driver_data *dd)
+static int mtip_hba_reset(struct driver_data *dd)
{
unsigned long timeout;
- /* Chip quirk: quiesce any chip function */
- mdelay(10);
-
/* Set the reset bit */
writel(HOST_RESET, dd->mmio + HOST_CTL);
/* Flush */
readl(dd->mmio + HOST_CTL);
- /*
- * Wait 10ms then spin for up to 1 second
- * waiting for reset acknowledgement
- */
- timeout = jiffies + msecs_to_jiffies(1000);
- mdelay(10);
- while ((readl(dd->mmio + HOST_CTL) & HOST_RESET)
- && time_before(jiffies, timeout))
- mdelay(1);
+ /* Spin for up to 2 seconds, waiting for reset acknowledgement */
+ timeout = jiffies + msecs_to_jiffies(2000);
+ do {
+ mdelay(10);
+ if (test_bit(MTIP_DDF_REMOVE_PENDING_BIT, &dd->dd_flag))
+ return -1;
- if (test_bit(MTIP_DDF_REMOVE_PENDING_BIT, &dd->dd_flag))
- return -1;
+ } while ((readl(dd->mmio + HOST_CTL) & HOST_RESET)
+ && time_before(jiffies, timeout));
if (readl(dd->mmio + HOST_CTL) & HOST_RESET)
return -1;
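The rework collapses the old wait-then-spin sequence into a single bounded do/while that re-samples the removal flag on every pass. A userspace sketch of the same loop shape, with the hardware simulated and all names hypothetical:

    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    static int reset_countdown = 5;   /* simulated HOST_RESET bit */
    static bool remove_pending;       /* simulated surprise removal */

    static bool reset_bit_set(void)
    {
        return reset_countdown-- > 0; /* clears after a few polls */
    }

    static double now_sec(void)
    {
        struct timespec t;

        clock_gettime(CLOCK_MONOTONIC, &t);
        return t.tv_sec + t.tv_nsec / 1e9;
    }

    static int hba_reset(void)
    {
        double deadline = now_sec() + 2.0;  /* 2 second budget */
        bool busy;

        do {
            usleep(10000);                  /* the mdelay(10) analogue */
            if (remove_pending)
                return -1;                  /* device is going away */
            busy = reset_bit_set();
        } while (busy && now_sec() < deadline);

        return busy ? -1 : 0;               /* final reset-bit check */
    }

    int main(void)
    {
        printf("reset %s\n", hba_reset() ? "failed" : "ok");
        return 0;
    }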
dev_warn(&port->dd->pdev->dev,
"PxCMD.CR not clear, escalating reset\n");
- if (hba_reset_nosleep(port->dd))
+ if (mtip_hba_reset(port->dd))
dev_err(&port->dd->pdev->dev,
"HBA reset escalation failed.\n");
}
+static int mtip_device_reset(struct driver_data *dd)
+{
+ int rv = 0;
+
+ if (mtip_check_surprise_removal(dd->pdev))
+ return 0;
+
+ if (mtip_hba_reset(dd) < 0)
+ rv = -EFAULT;
+
+ mdelay(1);
+ mtip_init_port(dd->port);
+ mtip_start_port(dd->port);
+
+ /* Enable interrupts on the HBA. */
+ writel(readl(dd->mmio + HOST_CTL) | HOST_IRQ_EN,
+ dd->mmio + HOST_CTL);
+ return rv;
+}
+
/*
* Helper function for tag logging
*/
if (cmdto_cnt) {
print_tags(port->dd, "timed out", tagaccum, cmdto_cnt);
if (!test_bit(MTIP_PF_IC_ACTIVE_BIT, &port->flags)) {
- mtip_restart_port(port);
+ mtip_device_reset(port->dd);
wake_up_interruptible(&port->svc_wait);
}
clear_bit(MTIP_PF_EH_ACTIVE_BIT, &port->flags);
int rv = 0, ready2go = 1;
struct mtip_cmd *int_cmd = &port->commands[MTIP_TAG_INTERNAL];
unsigned long to;
+ struct driver_data *dd = port->dd;
/* Make sure the buffer is 8 byte aligned. This is asic specific. */
if (buffer & 0x00000007) {
- dev_err(&port->dd->pdev->dev,
- "SG buffer is not 8 byte aligned\n");
+ dev_err(&dd->pdev->dev, "SG buffer is not 8 byte aligned\n");
return -EFAULT;
}
mdelay(100);
} while (time_before(jiffies, to));
if (!ready2go) {
- dev_warn(&port->dd->pdev->dev,
+ dev_warn(&dd->pdev->dev,
"Internal cmd active. new cmd [%02X]\n", fis->command);
return -EBUSY;
}
set_bit(MTIP_PF_IC_ACTIVE_BIT, &port->flags);
port->ic_pause_timer = 0;
- if (fis->command == ATA_CMD_SEC_ERASE_UNIT)
- clear_bit(MTIP_PF_SE_ACTIVE_BIT, &port->flags);
- else if (fis->command == ATA_CMD_DOWNLOAD_MICRO)
- clear_bit(MTIP_PF_DM_ACTIVE_BIT, &port->flags);
+ clear_bit(MTIP_PF_SE_ACTIVE_BIT, &port->flags);
+ clear_bit(MTIP_PF_DM_ACTIVE_BIT, &port->flags);
if (atomic == GFP_KERNEL) {
if (fis->command != ATA_CMD_STANDBYNOW1) {
/* wait for io to complete if non atomic */
if (mtip_quiesce_io(port, 5000) < 0) {
- dev_warn(&port->dd->pdev->dev,
+ dev_warn(&dd->pdev->dev,
"Failed to quiesce IO\n");
release_slot(port, MTIP_TAG_INTERNAL);
clear_bit(MTIP_PF_IC_ACTIVE_BIT, &port->flags);
/* Issue the command to the hardware */
mtip_issue_non_ncq_command(port, MTIP_TAG_INTERNAL);
- /* Poll if atomic, wait_for_completion otherwise */
if (atomic == GFP_KERNEL) {
/* Wait for the command to complete or timeout. */
- if (wait_for_completion_timeout(
+ if (wait_for_completion_interruptible_timeout(
&wait,
- msecs_to_jiffies(timeout)) == 0) {
- dev_err(&port->dd->pdev->dev,
- "Internal command did not complete [%d] "
- "within timeout of %lu ms\n",
- atomic, timeout);
- if (mtip_check_surprise_removal(port->dd->pdev) ||
+ msecs_to_jiffies(timeout)) <= 0) {
+ if (rv == -ERESTARTSYS) { /* interrupted */
+ dev_err(&dd->pdev->dev,
+ "Internal command [%02X] was interrupted after %lu ms\n",
+ fis->command, timeout);
+ rv = -EINTR;
+ goto exec_ic_exit;
+ } else if (rv == 0) /* timeout */
+ dev_err(&dd->pdev->dev,
+ "Internal command did not complete [%02X] within timeout of %lu ms\n",
+ fis->command, timeout);
+ else
+ dev_err(&dd->pdev->dev,
+ "Internal command [%02X] wait returned code [%d] after %lu ms - unhandled\n",
+ fis->command, rv, timeout);
+
+ if (mtip_check_surprise_removal(dd->pdev) ||
test_bit(MTIP_DDF_REMOVE_PENDING_BIT,
- &port->dd->dd_flag)) {
+ &dd->dd_flag)) {
+ dev_err(&dd->pdev->dev,
+ "Internal command [%02X] wait returned due to SR\n",
+ fis->command);
rv = -ENXIO;
goto exec_ic_exit;
}
+ mtip_device_reset(dd); /* recover from timeout issue */
rv = -EAGAIN;
+ goto exec_ic_exit;
}
} else {
+ u32 hba_stat, port_stat;
+
/* Spin for <timeout> checking if command still outstanding */
timeout = jiffies + msecs_to_jiffies(timeout);
while ((readl(port->cmd_issue[MTIP_TAG_INTERNAL])
& (1 << MTIP_TAG_INTERNAL))
&& time_before(jiffies, timeout)) {
- if (mtip_check_surprise_removal(port->dd->pdev)) {
+ if (mtip_check_surprise_removal(dd->pdev)) {
rv = -ENXIO;
goto exec_ic_exit;
}
if ((fis->command != ATA_CMD_STANDBYNOW1) &&
test_bit(MTIP_DDF_REMOVE_PENDING_BIT,
- &port->dd->dd_flag)) {
+ &dd->dd_flag)) {
rv = -ENXIO;
goto exec_ic_exit;
}
- if (readl(port->mmio + PORT_IRQ_STAT) & PORT_IRQ_ERR) {
- atomic_inc(&int_cmd->active); /* error */
- break;
+ port_stat = readl(port->mmio + PORT_IRQ_STAT);
+ if (!port_stat)
+ continue;
+
+ if (port_stat & PORT_IRQ_ERR) {
+ dev_err(&dd->pdev->dev,
+ "Internal command [%02X] failed\n",
+ fis->command);
+ mtip_device_reset(dd);
+ rv = -EIO;
+ goto exec_ic_exit;
+ } else {
+ writel(port_stat, port->mmio + PORT_IRQ_STAT);
+ hba_stat = readl(dd->mmio + HOST_IRQ_STAT);
+ if (hba_stat)
+ writel(hba_stat,
+ dd->mmio + HOST_IRQ_STAT);
}
+ break;
}
}
- if (atomic_read(&int_cmd->active) > 1) {
- dev_err(&port->dd->pdev->dev,
- "Internal command [%02X] failed\n", fis->command);
- rv = -EIO;
- }
if (readl(port->cmd_issue[MTIP_TAG_INTERNAL])
& (1 << MTIP_TAG_INTERNAL)) {
rv = -ENXIO;
- if (!test_bit(MTIP_DDF_REMOVE_PENDING_BIT,
- &port->dd->dd_flag)) {
- mtip_restart_port(port);
+ if (!test_bit(MTIP_DDF_REMOVE_PENDING_BIT, &dd->dd_flag)) {
+ mtip_device_reset(dd);
rv = -EAGAIN;
}
}
* -EINVAL Invalid parameters passed in, trim not supported
* -EIO Error submitting trim request to hw
*/
-static int mtip_send_trim(struct driver_data *dd, unsigned int lba, unsigned int len)
+static int mtip_send_trim(struct driver_data *dd, unsigned int lba,
+ unsigned int len)
{
int i, rv = 0;
u64 tlba, tlen, sect_left;
return (bool) !!port->identify_valid;
}
-/*
- * Reset the HBA.
- *
- * Resets the HBA by setting the HBA Reset bit in the Global
- * HBA Control register. After setting the HBA Reset bit the
- * function waits for 1 second before reading the HBA Reset
- * bit to make sure it has cleared. If HBA Reset is not clear
- * an error is returned. Cannot be used in non-blockable
- * context.
- *
- * @dd Pointer to the driver data structure.
- *
- * return value
- * 0 The reset was successful.
- * -1 The HBA Reset bit did not clear.
- */
-static int mtip_hba_reset(struct driver_data *dd)
-{
- mtip_deinit_port(dd->port);
-
- /* Set the reset bit */
- writel(HOST_RESET, dd->mmio + HOST_CTL);
-
- /* Flush */
- readl(dd->mmio + HOST_CTL);
-
- /* Wait for reset to clear */
- ssleep(1);
-
- /* Check the bit has cleared */
- if (readl(dd->mmio + HOST_CTL) & HOST_RESET) {
- dev_err(&dd->pdev->dev,
- "Reset bit did not clear.\n");
- return -1;
- }
-
- return 0;
-}
-
/*
* Display the identify command data.
*
static DEVICE_ATTR(status, S_IRUGO, mtip_hw_show_status, NULL);
+/* debugfs entries */
+
+static ssize_t show_device_status(struct device_driver *drv, char *buf)
+{
+ int size = 0;
+ struct driver_data *dd, *tmp;
+ unsigned long flags;
+ char id_buf[42];
+ u16 status = 0;
+
+ spin_lock_irqsave(&dev_lock, flags);
+ size += sprintf(&buf[size], "Devices Present:\n");
+ list_for_each_entry_safe(dd, tmp, &online_list, online_list) {
+ if (dd->pdev) {
+ if (dd->port &&
+ dd->port->identify &&
+ dd->port->identify_valid) {
+ strlcpy(id_buf,
+ (char *) (dd->port->identify + 10), 21);
+ status = *(dd->port->identify + 141);
+ } else {
+ memset(id_buf, 0, 42);
+ status = 0;
+ }
+
+ if (dd->port &&
+ test_bit(MTIP_PF_REBUILD_BIT, &dd->port->flags)) {
+ size += sprintf(&buf[size],
+ " device %s %s (ftl rebuild %d %%)\n",
+ dev_name(&dd->pdev->dev),
+ id_buf,
+ status);
+ } else {
+ size += sprintf(&buf[size],
+ " device %s %s\n",
+ dev_name(&dd->pdev->dev),
+ id_buf);
+ }
+ }
+ }
+
+ size += sprintf(&buf[size], "Devices Being Removed:\n");
+ list_for_each_entry_safe(dd, tmp, &removing_list, remove_list) {
+ if (dd->pdev) {
+ if (dd->port &&
+ dd->port->identify &&
+ dd->port->identify_valid) {
+ strlcpy(id_buf,
+ (char *) (dd->port->identify+10), 21);
+ status = *(dd->port->identify + 141);
+ } else {
+ memset(id_buf, 0, 42);
+ status = 0;
+ }
+
+ if (dd->port &&
+ test_bit(MTIP_PF_REBUILD_BIT, &dd->port->flags)) {
+ size += sprintf(&buf[size],
+ " device %s %s (ftl rebuild %d %%)\n",
+ dev_name(&dd->pdev->dev),
+ id_buf,
+ status);
+ } else {
+ size += sprintf(&buf[size],
+ " device %s %s\n",
+ dev_name(&dd->pdev->dev),
+ id_buf);
+ }
+ }
+ }
+ spin_unlock_irqrestore(&dev_lock, flags);
+
+ return size;
+}
+
+static ssize_t mtip_hw_read_device_status(struct file *f, char __user *ubuf,
+ size_t len, loff_t *offset)
+{
+ int size = *offset;
+ char buf[MTIP_DFS_MAX_BUF_SIZE];
+
+ if (!len || *offset)
+ return 0;
+
+ size += show_device_status(NULL, buf);
+
+ *offset = size <= len ? size : len;
+ size = copy_to_user(ubuf, buf, *offset);
+ if (size)
+ return -EFAULT;
+
+ return *offset;
+}
+
static ssize_t mtip_hw_read_registers(struct file *f, char __user *ubuf,
size_t len, loff_t *offset)
{
return *offset;
}
+static const struct file_operations mtip_device_status_fops = {
+ .owner = THIS_MODULE,
+ .open = simple_open,
+ .read = mtip_hw_read_device_status,
+ .llseek = no_llseek,
+};
+
static const struct file_operations mtip_regs_fops = {
.owner = THIS_MODULE,
.open = simple_open,
const struct cpumask *node_mask;
int cpu, i = 0, j = 0;
int my_node = NUMA_NO_NODE;
+ unsigned long flags;
/* Allocate memory for this devices private data. */
my_node = pcibus_to_node(pdev->bus);
dd->pdev = pdev;
dd->numa_node = my_node;
+ INIT_LIST_HEAD(&dd->online_list);
+ INIT_LIST_HEAD(&dd->remove_list);
+
memset(dd->workq_name, 0, 32);
snprintf(dd->workq_name, 31, "mtipq%d", dd->instance);
dd->isr_workq = create_workqueue(dd->workq_name);
if (!dd->isr_workq) {
dev_warn(&pdev->dev, "Can't create wq %d\n", dd->instance);
+ rv = -ENOMEM;
goto block_initialize_err;
}
INIT_WORK(&dd->work[7].work, mtip_workq_sdbf7);
pci_set_master(pdev);
- if (pci_enable_msi(pdev)) {
+ rv = pci_enable_msi(pdev);
+ if (rv) {
dev_warn(&pdev->dev,
"Unable to enable MSI interrupt.\n");
goto block_initialize_err;
instance++;
if (rv != MTIP_FTL_REBUILD_MAGIC)
set_bit(MTIP_DDF_INIT_DONE_BIT, &dd->dd_flag);
+ else
+ rv = 0; /* device in rebuild state, return 0 from probe */
+
+ /* Add to online list even if in ftl rebuild */
+ spin_lock_irqsave(&dev_lock, flags);
+ list_add(&dd->online_list, &online_list);
+ spin_unlock_irqrestore(&dev_lock, flags);
+
goto done;
block_initialize_err:
{
struct driver_data *dd = pci_get_drvdata(pdev);
int counter = 0;
+ unsigned long flags;
set_bit(MTIP_DDF_REMOVE_PENDING_BIT, &dd->dd_flag);
+ spin_lock_irqsave(&dev_lock, flags);
+ list_del_init(&dd->online_list);
+ list_add(&dd->remove_list, &removing_list);
+ spin_unlock_irqrestore(&dev_lock, flags);
+
if (mtip_check_surprise_removal(pdev)) {
while (!test_bit(MTIP_DDF_CLEANUP_BIT, &dd->dd_flag)) {
counter++;
pci_disable_msi(pdev);
+ spin_lock_irqsave(&dev_lock, flags);
+ list_del_init(&dd->remove_list);
+ spin_unlock_irqrestore(&dev_lock, flags);
+
kfree(dd);
pcim_iounmap_regions(pdev, 1 << MTIP_ABAR);
}
pr_info(MTIP_DRV_NAME " Version " MTIP_DRV_VERSION "\n");
+ spin_lock_init(&dev_lock);
+
+ INIT_LIST_HEAD(&online_list);
+ INIT_LIST_HEAD(&removing_list);
+
/* Allocate a major block device number to use with this driver. */
error = register_blkdev(0, MTIP_DRV_NAME);
if (error <= 0) {
}
mtip_major = error;
- if (!dfs_parent) {
- dfs_parent = debugfs_create_dir("rssd", NULL);
- if (IS_ERR_OR_NULL(dfs_parent)) {
- pr_warn("Error creating debugfs parent\n");
- dfs_parent = NULL;
+ dfs_parent = debugfs_create_dir("rssd", NULL);
+ if (IS_ERR_OR_NULL(dfs_parent)) {
+ pr_warn("Error creating debugfs parent\n");
+ dfs_parent = NULL;
+ }
+ if (dfs_parent) {
+ dfs_device_status = debugfs_create_file("device_status",
+ S_IRUGO, dfs_parent, NULL,
+ &mtip_device_status_fops);
+ if (IS_ERR_OR_NULL(dfs_device_status)) {
+ pr_err("Error creating device_status node\n");
+ dfs_device_status = NULL;
}
}
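Assuming debugfs is mounted in the usual place, the new node exposes the aggregate card state as plain text, e.g.:

    # cat /sys/kernel/debug/rssd/device_status
    Devices Present:
     device 0000:04:00.0 <model string> (ftl rebuild 42 %)
    Devices Being Removed:

(The device name and rebuild figure are illustrative; the real lines follow the sprintf formats in show_device_status() above.)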
MTIP_PF_EH_ACTIVE_BIT = 1, /* error handling */
MTIP_PF_SE_ACTIVE_BIT = 2, /* secure erase */
MTIP_PF_DM_ACTIVE_BIT = 3, /* download microcode */
- MTIP_PF_PAUSE_IO = ((1 << MTIP_PF_IC_ACTIVE_BIT) | \
- (1 << MTIP_PF_EH_ACTIVE_BIT) | \
- (1 << MTIP_PF_SE_ACTIVE_BIT) | \
+ MTIP_PF_PAUSE_IO = ((1 << MTIP_PF_IC_ACTIVE_BIT) |
+ (1 << MTIP_PF_EH_ACTIVE_BIT) |
+ (1 << MTIP_PF_SE_ACTIVE_BIT) |
(1 << MTIP_PF_DM_ACTIVE_BIT)),
MTIP_PF_SVC_THD_ACTIVE_BIT = 4,
MTIP_DDF_REMOVE_PENDING_BIT = 1,
MTIP_DDF_OVER_TEMP_BIT = 2,
MTIP_DDF_WRITE_PROTECT_BIT = 3,
- MTIP_DDF_STOP_IO = ((1 << MTIP_DDF_REMOVE_PENDING_BIT) | \
- (1 << MTIP_DDF_SEC_LOCK_BIT) | \
- (1 << MTIP_DDF_OVER_TEMP_BIT) | \
+ MTIP_DDF_STOP_IO = ((1 << MTIP_DDF_REMOVE_PENDING_BIT) |
+ (1 << MTIP_DDF_SEC_LOCK_BIT) |
+ (1 << MTIP_DDF_OVER_TEMP_BIT) |
(1 << MTIP_DDF_WRITE_PROTECT_BIT)),
MTIP_DDF_CLEANUP_BIT = 5,
#define MTIP_TRIM_TIMEOUT_MS 240000
#define MTIP_MAX_TRIM_ENTRIES 8
-#define MTIP_MAX_TRIM_ENTRY_LEN 0xfff8
+#define MTIP_MAX_TRIM_ENTRY_LEN 0xfff8
struct mtip_trim_entry {
u32 lba; /* starting lba of region */
atomic_t irq_workers_active;
int isr_binding;
+
+ struct list_head online_list; /* linkage for online list */
+
+ struct list_head remove_list; /* linkage for removing list */
};
#endif
BUILD_BUG_ON(sizeof(struct nvme_id_ctrl) != 4096);
BUILD_BUG_ON(sizeof(struct nvme_id_ns) != 4096);
BUILD_BUG_ON(sizeof(struct nvme_lba_range_type) != 64);
+ BUILD_BUG_ON(sizeof(struct nvme_smart_log) != 512);
}
typedef void (*nvme_completion_fn)(struct nvme_dev *, void *,
*fn = special_completion;
return CMD_CTX_INVALID;
}
- *fn = info[cmdid].fn;
+ if (fn)
+ *fn = info[cmdid].fn;
ctx = info[cmdid].ctx;
info[cmdid].fn = special_completion;
info[cmdid].ctx = CMD_CTX_COMPLETED;
iod->offset = offsetof(struct nvme_iod, sg[nseg]);
iod->npages = -1;
iod->length = nbytes;
+ iod->nents = 0;
}
return iod;
struct bio *bio = iod->private;
u16 status = le16_to_cpup(&cqe->status) >> 1;
- dma_unmap_sg(&dev->pci_dev->dev, iod->sg, iod->nents,
+ if (iod->nents)
+ dma_unmap_sg(&dev->pci_dev->dev, iod->sg, iod->nents,
bio_data_dir(bio) ? DMA_TO_DEVICE : DMA_FROM_DEVICE);
nvme_free_iod(dev, iod);
if (status) {
result = nvme_map_bio(nvmeq->q_dmadev, iod, bio, dma_dir, psegs);
if (result < 0)
- goto free_iod;
+ goto free_cmdid;
length = result;
cmnd->rw.command_id = cmdid;
return 0;
+ free_cmdid:
+ free_cmdid(nvmeq, cmdid, NULL);
free_iod:
nvme_free_iod(nvmeq->dev, iod);
nomem:
return nvme_submit_admin_cmd(dev, &c, NULL);
}
-static int nvme_get_features(struct nvme_dev *dev, unsigned fid,
- unsigned nsid, dma_addr_t dma_addr)
+static int nvme_get_features(struct nvme_dev *dev, unsigned fid, unsigned nsid,
+ dma_addr_t dma_addr, u32 *result)
{
struct nvme_command c;
c.features.prp1 = cpu_to_le64(dma_addr);
c.features.fid = cpu_to_le32(fid);
- return nvme_submit_admin_cmd(dev, &c, NULL);
+ return nvme_submit_admin_cmd(dev, &c, result);
}
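Threading the new result pointer through nvme_get_features() pairs with the ioctl hunk below: nvme_submit_admin_cmd() now fills cmd.result and the value is copied back to the user's struct, so passthrough callers finally see the completion's result dword rather than only the status code.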
static int nvme_set_features(struct nvme_dev *dev, unsigned fid,
spin_lock_irq(&nvmeq->q_lock);
nvme_cancel_ios(nvmeq, false);
+ while (bio_list_peek(&nvmeq->sq_cong)) {
+ struct bio *bio = bio_list_pop(&nvmeq->sq_cong);
+ bio_endio(bio, -EIO);
+ }
spin_unlock_irq(&nvmeq->q_lock);
irq_set_affinity_hint(vector, NULL);
if (length != cmd.data_len)
status = -ENOMEM;
else
- status = nvme_submit_admin_cmd(dev, &c, NULL);
+ status = nvme_submit_admin_cmd(dev, &c, &cmd.result);
if (cmd.data_len) {
nvme_unmap_user_pages(dev, cmd.opcode & 1, iod);
nvme_free_iod(dev, iod);
}
+
+ if (!status && copy_to_user(&ucmd->result, &cmd.result,
+ sizeof(cmd.result)))
+ status = -EFAULT;
+
return status;
}
continue;
res = nvme_get_features(dev, NVME_FEAT_LBA_RANGE, i,
- dma_addr + 4096);
+ dma_addr + 4096, NULL);
if (res)
- continue;
+ memset(mem + 4096, 0, 4096);
ns = nvme_alloc_ns(dev, i, mem, mem + 4096);
if (ns)
return atomic_read(&obj_request->done) != 0;
}
+static void
+rbd_img_obj_request_read_callback(struct rbd_obj_request *obj_request)
+{
+ dout("%s: obj %p img %p result %d %llu/%llu\n", __func__,
+ obj_request, obj_request->img_request, obj_request->result,
+ obj_request->xferred, obj_request->length);
+ /*
+ * ENOENT means a hole in the image. We zero-fill the
+ * entire length of the request. A short read also implies
+ * zero-fill to the end of the request. Either way we
+ * update the xferred count to indicate the whole request
+ * was satisfied.
+ */
+ BUG_ON(obj_request->type != OBJ_REQUEST_BIO);
+ if (obj_request->result == -ENOENT) {
+ zero_bio_chain(obj_request->bio_list, 0);
+ obj_request->result = 0;
+ obj_request->xferred = obj_request->length;
+ } else if (obj_request->xferred < obj_request->length &&
+ !obj_request->result) {
+ zero_bio_chain(obj_request->bio_list, obj_request->xferred);
+ obj_request->xferred = obj_request->length;
+ }
+ obj_request_done_set(obj_request);
+}
+
static void rbd_obj_request_complete(struct rbd_obj_request *obj_request)
{
dout("%s: obj %p cb %p\n", __func__, obj_request,
{
dout("%s: obj %p result %d %llu/%llu\n", __func__, obj_request,
obj_request->result, obj_request->xferred, obj_request->length);
- /*
- * ENOENT means a hole in the object. We zero-fill the
- * entire length of the request. A short read also implies
- * zero-fill to the end of the request. Either way we
- * update the xferred count to indicate the whole request
- * was satisfied.
- */
- if (obj_request->result == -ENOENT) {
- zero_bio_chain(obj_request->bio_list, 0);
- obj_request->result = 0;
- obj_request->xferred = obj_request->length;
- } else if (obj_request->xferred < obj_request->length &&
- !obj_request->result) {
- zero_bio_chain(obj_request->bio_list, obj_request->xferred);
- obj_request->xferred = obj_request->length;
- }
- obj_request_done_set(obj_request);
+ if (obj_request->img_request)
+ rbd_img_obj_request_read_callback(obj_request);
+ else
+ obj_request_done_set(obj_request);
}
static void rbd_osd_write_callback(struct rbd_obj_request *obj_request)
struct rbd_device *rbd_dev = img_request->rbd_dev;
struct ceph_osd_client *osdc = &rbd_dev->rbd_client->client->osdc;
struct rbd_obj_request *obj_request;
+ struct rbd_obj_request *next_obj_request;
dout("%s: img %p\n", __func__, img_request);
- for_each_obj_request(img_request, obj_request) {
+ for_each_obj_request_safe(img_request, obj_request, next_obj_request) {
int ret;
obj_request->callback = rbd_img_obj_callback;
obj-$(CONFIG_BLK_DEV_RSXX) += rsxx.o
-rsxx-y := config.o core.o cregs.o dev.o dma.o
+rsxx-objs := config.o core.o cregs.o dev.o dma.o
#include "rsxx_priv.h"
#include "rsxx_cfg.h"
-static void initialize_config(void *config)
+static void initialize_config(struct rsxx_card_cfg *cfg)
{
- struct rsxx_card_cfg *cfg = config;
-
cfg->hdr.version = RSXX_CFG_VERSION;
cfg->data.block_size = RSXX_HW_BLK_SIZE;
cfg->data.stripe_size = RSXX_HW_BLK_SIZE;
- cfg->data.vendor_id = RSXX_VENDOR_ID_TMS_IBM;
+ cfg->data.vendor_id = RSXX_VENDOR_ID_IBM;
cfg->data.cache_order = (-1);
cfg->data.intr_coal.mode = RSXX_INTR_COAL_DISABLED;
cfg->data.intr_coal.count = 0;
} else {
dev_info(CARD_TO_DEV(card),
"Initializing card configuration.\n");
- initialize_config(card);
+ initialize_config(&card->config);
st = rsxx_save_config(card);
if (st)
return st;
#include <linux/reboot.h>
#include <linux/slab.h>
#include <linux/bitops.h>
+#include <linux/delay.h>
#include <linux/genhd.h>
#include <linux/idr.h>
#define NO_LEGACY 0
-MODULE_DESCRIPTION("IBM RamSan PCIe Flash SSD Device Driver");
-MODULE_AUTHOR("IBM <support@ramsan.com>");
+MODULE_DESCRIPTION("IBM FlashSystem 70/80 PCIe SSD Device Driver");
+MODULE_AUTHOR("Joshua Morris/Philip Kelleher, IBM");
MODULE_LICENSE("GPL");
MODULE_VERSION(DRIVER_VERSION);
static DEFINE_SPINLOCK(rsxx_ida_lock);
/*----------------- Interrupt Control & Handling -------------------*/
+
+static void rsxx_mask_interrupts(struct rsxx_cardinfo *card)
+{
+ card->isr_mask = 0;
+ card->ier_mask = 0;
+}
+
static void __enable_intr(unsigned int *mask, unsigned int intr)
{
*mask |= intr;
*/
void rsxx_enable_ier(struct rsxx_cardinfo *card, unsigned int intr)
{
- if (unlikely(card->halt))
+ if (unlikely(card->halt) ||
+ unlikely(card->eeh_state))
return;
__enable_intr(&card->ier_mask, intr);
void rsxx_disable_ier(struct rsxx_cardinfo *card, unsigned int intr)
{
+ if (unlikely(card->eeh_state))
+ return;
+
__disable_intr(&card->ier_mask, intr);
iowrite32(card->ier_mask, card->regmap + IER);
}
void rsxx_enable_ier_and_isr(struct rsxx_cardinfo *card,
unsigned int intr)
{
- if (unlikely(card->halt))
+ if (unlikely(card->halt) ||
+ unlikely(card->eeh_state))
return;
__enable_intr(&card->isr_mask, intr);
void rsxx_disable_ier_and_isr(struct rsxx_cardinfo *card,
unsigned int intr)
{
+ if (unlikely(card->eeh_state))
+ return;
+
__disable_intr(&card->isr_mask, intr);
__disable_intr(&card->ier_mask, intr);
iowrite32(card->ier_mask, card->regmap + IER);
do {
reread_isr = 0;
+ if (unlikely(card->eeh_state))
+ break;
+
isr = ioread32(card->regmap + ISR);
if (isr == 0xffffffff) {
/*
}
/*----------------- Card Event Handler -------------------*/
-static char *rsxx_card_state_to_str(unsigned int state)
+static const char * const rsxx_card_state_to_str(unsigned int state)
{
- static char *state_strings[] = {
+ static const char * const state_strings[] = {
"Unknown", "Shutdown", "Starting", "Formatting",
"Uninitialized", "Good", "Shutting Down",
"Fault", "Read Only Fault", "dStroying"
return 0;
}
+static int rsxx_eeh_frozen(struct pci_dev *dev)
+{
+ struct rsxx_cardinfo *card = pci_get_drvdata(dev);
+ int i;
+ int st;
+
+ dev_warn(&dev->dev, "IBM FlashSystem PCI: preparing for slot reset.\n");
+
+ card->eeh_state = 1;
+ rsxx_mask_interrupts(card);
+
+ /*
+ * We need to guarantee that the write for eeh_state and masking
+ * interrupts does not become reordered. This will prevent a possible
+ * race condition with the EEH code.
+ */
+ wmb();
+
+ pci_disable_device(dev);
+
+ st = rsxx_eeh_save_issued_dmas(card);
+ if (st)
+ return st;
+
+ rsxx_eeh_save_issued_creg(card);
+
+ for (i = 0; i < card->n_targets; i++) {
+ if (card->ctrl[i].status.buf)
+ pci_free_consistent(card->dev, STATUS_BUFFER_SIZE8,
+ card->ctrl[i].status.buf,
+ card->ctrl[i].status.dma_addr);
+ if (card->ctrl[i].cmd.buf)
+ pci_free_consistent(card->dev, COMMAND_BUFFER_SIZE8,
+ card->ctrl[i].cmd.buf,
+ card->ctrl[i].cmd.dma_addr);
+ }
+
+ return 0;
+}
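The wmb() above publishes eeh_state and the cleared interrupt masks before the teardown writes that follow. The same publication step rendered with C11 atomics in userspace (a sketch of the ordering requirement, not the kernel barrier API):

    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int eeh_state;
    static atomic_uint isr_mask, ier_mask;

    static void freeze(void)
    {
        atomic_store_explicit(&eeh_state, 1, memory_order_relaxed);
        atomic_store_explicit(&isr_mask, 0, memory_order_relaxed);
        atomic_store_explicit(&ier_mask, 0, memory_order_relaxed);

        /* wmb() analogue: the stores above become visible before
         * anything the teardown path writes after this fence. */
        atomic_thread_fence(memory_order_release);

        printf("safe to disable the device now\n");
    }

    int main(void)
    {
        freeze();
        return 0;
    }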
+
+static void rsxx_eeh_failure(struct pci_dev *dev)
+{
+ struct rsxx_cardinfo *card = pci_get_drvdata(dev);
+ int i;
+
+ dev_err(&dev->dev, "IBM FlashSystem PCI: disabling failed card.\n");
+
+ card->eeh_state = 1;
+
+ for (i = 0; i < card->n_targets; i++)
+ del_timer_sync(&card->ctrl[i].activity_timer);
+
+ rsxx_eeh_cancel_dmas(card);
+}
+
+static int rsxx_eeh_fifo_flush_poll(struct rsxx_cardinfo *card)
+{
+ unsigned int status;
+ int iter = 0;
+
+ /* We need to wait for the hardware to reset */
+ while (iter++ < 10) {
+ status = ioread32(card->regmap + PCI_RECONFIG);
+
+ if (status & RSXX_FLUSH_BUSY) {
+ ssleep(1);
+ continue;
+ }
+
+ if (status & RSXX_FLUSH_TIMEOUT)
+ dev_warn(CARD_TO_DEV(card), "HW: flash controller timeout\n");
+ return 0;
+ }
+
+ /* Hardware failed to reset itself. */
+ return -1;
+}
+
+static pci_ers_result_t rsxx_error_detected(struct pci_dev *dev,
+ enum pci_channel_state error)
+{
+ int st;
+
+ if (dev->revision < RSXX_EEH_SUPPORT)
+ return PCI_ERS_RESULT_NONE;
+
+ if (error == pci_channel_io_perm_failure) {
+ rsxx_eeh_failure(dev);
+ return PCI_ERS_RESULT_DISCONNECT;
+ }
+
+ st = rsxx_eeh_frozen(dev);
+ if (st) {
+ dev_err(&dev->dev, "Slot reset setup failed\n");
+ rsxx_eeh_failure(dev);
+ return PCI_ERS_RESULT_DISCONNECT;
+ }
+
+ return PCI_ERS_RESULT_NEED_RESET;
+}
+
+static pci_ers_result_t rsxx_slot_reset(struct pci_dev *dev)
+{
+ struct rsxx_cardinfo *card = pci_get_drvdata(dev);
+ unsigned long flags;
+ int i;
+ int st;
+
+ dev_warn(&dev->dev,
+ "IBM FlashSystem PCI: recovering from slot reset.\n");
+
+ st = pci_enable_device(dev);
+ if (st)
+ goto failed_hw_setup;
+
+ pci_set_master(dev);
+
+ st = rsxx_eeh_fifo_flush_poll(card);
+ if (st)
+ goto failed_hw_setup;
+
+ rsxx_dma_queue_reset(card);
+
+ for (i = 0; i < card->n_targets; i++) {
+ st = rsxx_hw_buffers_init(dev, &card->ctrl[i]);
+ if (st)
+ goto failed_hw_buffers_init;
+ }
+
+ if (card->config_valid)
+ rsxx_dma_configure(card);
+
+ /* Clears the ISR register from spurious interrupts */
+ st = ioread32(card->regmap + ISR);
+
+ card->eeh_state = 0;
+
+ st = rsxx_eeh_remap_dmas(card);
+ if (st)
+ goto failed_remap_dmas;
+
+ spin_lock_irqsave(&card->irq_lock, flags);
+ if (card->n_targets & RSXX_MAX_TARGETS)
+ rsxx_enable_ier_and_isr(card, CR_INTR_ALL_G);
+ else
+ rsxx_enable_ier_and_isr(card, CR_INTR_ALL_C);
+ spin_unlock_irqrestore(&card->irq_lock, flags);
+
+ rsxx_kick_creg_queue(card);
+
+ for (i = 0; i < card->n_targets; i++) {
+ spin_lock(&card->ctrl[i].queue_lock);
+ if (list_empty(&card->ctrl[i].queue)) {
+ spin_unlock(&card->ctrl[i].queue_lock);
+ continue;
+ }
+ spin_unlock(&card->ctrl[i].queue_lock);
+
+ queue_work(card->ctrl[i].issue_wq,
+ &card->ctrl[i].issue_dma_work);
+ }
+
+ dev_info(&dev->dev, "IBM FlashSystem PCI: recovery complete.\n");
+
+ return PCI_ERS_RESULT_RECOVERED;
+
+failed_hw_buffers_init:
+failed_remap_dmas:
+ for (i = 0; i < card->n_targets; i++) {
+ if (card->ctrl[i].status.buf)
+ pci_free_consistent(card->dev,
+ STATUS_BUFFER_SIZE8,
+ card->ctrl[i].status.buf,
+ card->ctrl[i].status.dma_addr);
+ if (card->ctrl[i].cmd.buf)
+ pci_free_consistent(card->dev,
+ COMMAND_BUFFER_SIZE8,
+ card->ctrl[i].cmd.buf,
+ card->ctrl[i].cmd.dma_addr);
+ }
+failed_hw_setup:
+ rsxx_eeh_failure(dev);
+ return PCI_ERS_RESULT_DISCONNECT;
+
+}
+
/*----------------- Driver Initialization & Setup -------------------*/
/* Returns: 0 if the driver is compatible with the device
-1 if the driver is NOT compatible with the device */
spin_lock_init(&card->irq_lock);
card->halt = 0;
+ card->eeh_state = 0;
spin_lock_irq(&card->irq_lock);
rsxx_disable_ier_and_isr(card, CR_INTR_ALL);
rsxx_disable_ier_and_isr(card, CR_INTR_EVENT);
spin_unlock_irqrestore(&card->irq_lock, flags);
- /* Prevent work_structs from re-queuing themselves. */
- card->halt = 1;
-
cancel_work_sync(&card->event_work);
rsxx_destroy_dev(card);
spin_lock_irqsave(&card->irq_lock, flags);
rsxx_disable_ier_and_isr(card, CR_INTR_ALL);
spin_unlock_irqrestore(&card->irq_lock, flags);
+
+ /* Prevent work_structs from re-queuing themselves. */
+ card->halt = 1;
+
free_irq(dev->irq, card);
if (!force_legacy)
card_shutdown(card);
}
+static const struct pci_error_handlers rsxx_err_handler = {
+ .error_detected = rsxx_error_detected,
+ .slot_reset = rsxx_slot_reset,
+};
+
static DEFINE_PCI_DEVICE_TABLE(rsxx_pci_ids) = {
- {PCI_DEVICE(PCI_VENDOR_ID_TMS_IBM, PCI_DEVICE_ID_RS70_FLASH)},
- {PCI_DEVICE(PCI_VENDOR_ID_TMS_IBM, PCI_DEVICE_ID_RS70D_FLASH)},
- {PCI_DEVICE(PCI_VENDOR_ID_TMS_IBM, PCI_DEVICE_ID_RS80_FLASH)},
- {PCI_DEVICE(PCI_VENDOR_ID_TMS_IBM, PCI_DEVICE_ID_RS81_FLASH)},
+ {PCI_DEVICE(PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_FS70_FLASH)},
+ {PCI_DEVICE(PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_FS80_FLASH)},
{0,},
};
.remove = rsxx_pci_remove,
.suspend = rsxx_pci_suspend,
.shutdown = rsxx_pci_shutdown,
+ .err_handler = &rsxx_err_handler,
};
static int __init rsxx_core_init(void)
#error Unknown endianess!!! Aborting...
#endif
-static void copy_to_creg_data(struct rsxx_cardinfo *card,
+static int copy_to_creg_data(struct rsxx_cardinfo *card,
int cnt8,
void *buf,
unsigned int stream)
int i = 0;
u32 *data = buf;
+ if (unlikely(card->eeh_state))
+ return -EIO;
+
for (i = 0; cnt8 > 0; i++, cnt8 -= 4) {
/*
* Firmware implementation makes it necessary to byte swap on
else
iowrite32(data[i], card->regmap + CREG_DATA(i));
}
+
+ return 0;
}
-static void copy_from_creg_data(struct rsxx_cardinfo *card,
+static int copy_from_creg_data(struct rsxx_cardinfo *card,
int cnt8,
void *buf,
unsigned int stream)
int i = 0;
u32 *data = buf;
+ if (unlikely(card->eeh_state))
+ return -EIO;
+
for (i = 0; cnt8 > 0; i++, cnt8 -= 4) {
/*
* Firmware implementation makes it necessary to byte swap on
else
data[i] = ioread32(card->regmap + CREG_DATA(i));
}
-}
-
-static struct creg_cmd *pop_active_cmd(struct rsxx_cardinfo *card)
-{
- struct creg_cmd *cmd;
- /*
- * Spin lock is needed because this can be called in atomic/interrupt
- * context.
- */
- spin_lock_bh(&card->creg_ctrl.lock);
- cmd = card->creg_ctrl.active_cmd;
- card->creg_ctrl.active_cmd = NULL;
- spin_unlock_bh(&card->creg_ctrl.lock);
-
- return cmd;
+ return 0;
}
static void creg_issue_cmd(struct rsxx_cardinfo *card, struct creg_cmd *cmd)
{
+ int st;
+
+ if (unlikely(card->eeh_state))
+ return;
+
iowrite32(cmd->addr, card->regmap + CREG_ADD);
iowrite32(cmd->cnt8, card->regmap + CREG_CNT);
if (cmd->op == CREG_OP_WRITE) {
- if (cmd->buf)
- copy_to_creg_data(card, cmd->cnt8,
- cmd->buf, cmd->stream);
+ if (cmd->buf) {
+ st = copy_to_creg_data(card, cmd->cnt8,
+ cmd->buf, cmd->stream);
+ if (st)
+ return;
+ }
}
- /*
- * Data copy must complete before initiating the command. This is
- * needed for weakly ordered processors (i.e. PowerPC), so that all
- * neccessary registers are written before we kick the hardware.
- */
- wmb();
+ if (unlikely(card->eeh_state))
+ return;
/* Setting the valid bit will kick off the command. */
iowrite32(cmd->op, card->regmap + CREG_CMD);
cmd->cb_private = cb_private;
cmd->status = 0;
- spin_lock(&card->creg_ctrl.lock);
+ spin_lock_bh(&card->creg_ctrl.lock);
list_add_tail(&cmd->list, &card->creg_ctrl.queue);
card->creg_ctrl.q_depth++;
creg_kick_queue(card);
- spin_unlock(&card->creg_ctrl.lock);
+ spin_unlock_bh(&card->creg_ctrl.lock);
return 0;
}
struct rsxx_cardinfo *card = (struct rsxx_cardinfo *) data;
struct creg_cmd *cmd;
- cmd = pop_active_cmd(card);
+ spin_lock(&card->creg_ctrl.lock);
+ cmd = card->creg_ctrl.active_cmd;
+ card->creg_ctrl.active_cmd = NULL;
+ spin_unlock(&card->creg_ctrl.lock);
+
if (cmd == NULL) {
card->creg_ctrl.creg_stats.creg_timeout++;
dev_warn(CARD_TO_DEV(card),
if (del_timer_sync(&card->creg_ctrl.cmd_timer) == 0)
card->creg_ctrl.creg_stats.failed_cancel_timer++;
- cmd = pop_active_cmd(card);
+ spin_lock_bh(&card->creg_ctrl.lock);
+ cmd = card->creg_ctrl.active_cmd;
+ card->creg_ctrl.active_cmd = NULL;
+ spin_unlock_bh(&card->creg_ctrl.lock);
+
if (cmd == NULL) {
dev_err(CARD_TO_DEV(card),
"Spurious creg interrupt!\n");
goto creg_done;
}
- copy_from_creg_data(card, cnt8, cmd->buf, cmd->stream);
+ st = copy_from_creg_data(card, cnt8, cmd->buf, cmd->stream);
}
creg_done:
kmem_cache_free(creg_cmd_pool, cmd);
- spin_lock(&card->creg_ctrl.lock);
+ spin_lock_bh(&card->creg_ctrl.lock);
card->creg_ctrl.active = 0;
creg_kick_queue(card);
- spin_unlock(&card->creg_ctrl.lock);
+ spin_unlock_bh(&card->creg_ctrl.lock);
}
static void creg_reset(struct rsxx_cardinfo *card)
"Resetting creg interface for recovery\n");
/* Cancel outstanding commands */
- spin_lock(&card->creg_ctrl.lock);
+ spin_lock_bh(&card->creg_ctrl.lock);
list_for_each_entry_safe(cmd, tmp, &card->creg_ctrl.queue, list) {
list_del(&cmd->list);
card->creg_ctrl.q_depth--;
card->creg_ctrl.active = 0;
}
- spin_unlock(&card->creg_ctrl.lock);
+ spin_unlock_bh(&card->creg_ctrl.lock);
card->creg_ctrl.reset = 0;
spin_lock_irqsave(&card->irq_lock, flags);
return st;
/*
- * This timeout is neccessary for unresponsive hardware. The additional
+ * This timeout is necessary for unresponsive hardware. The additional
* 20 seconds is used to guarantee that each creg request has time to
* complete.
*/
- timeout = msecs_to_jiffies((CREG_TIMEOUT_MSEC *
- card->creg_ctrl.q_depth) + 20000);
+ timeout = msecs_to_jiffies(CREG_TIMEOUT_MSEC *
+ card->creg_ctrl.q_depth + 20000);
/*
* The creg interface is guaranteed to complete. It has a timeout
return 0;
}
+void rsxx_eeh_save_issued_creg(struct rsxx_cardinfo *card)
+{
+ struct creg_cmd *cmd = NULL;
+
+ cmd = card->creg_ctrl.active_cmd;
+ card->creg_ctrl.active_cmd = NULL;
+
+ if (cmd) {
+ del_timer_sync(&card->creg_ctrl.cmd_timer);
+
+ spin_lock_bh(&card->creg_ctrl.lock);
+ list_add(&cmd->list, &card->creg_ctrl.queue);
+ card->creg_ctrl.q_depth++;
+ card->creg_ctrl.active = 0;
+ spin_unlock_bh(&card->creg_ctrl.lock);
+ }
+}
+
+void rsxx_kick_creg_queue(struct rsxx_cardinfo *card)
+{
+ spin_lock_bh(&card->creg_ctrl.lock);
+ if (!list_empty(&card->creg_ctrl.queue))
+ creg_kick_queue(card);
+ spin_unlock_bh(&card->creg_ctrl.lock);
+}
+
/*------------ Initialization & Setup --------------*/
int rsxx_creg_setup(struct rsxx_cardinfo *card)
{
int cnt = 0;
/* Cancel outstanding commands */
- spin_lock(&card->creg_ctrl.lock);
+ spin_lock_bh(&card->creg_ctrl.lock);
list_for_each_entry_safe(cmd, tmp, &card->creg_ctrl.queue, list) {
list_del(&cmd->list);
if (cmd->cb)
"Canceled active creg command\n");
kmem_cache_free(creg_cmd_pool, cmd);
}
- spin_unlock(&card->creg_ctrl.lock);
+ spin_unlock_bh(&card->creg_ctrl.lock);
cancel_work_sync(&card->creg_ctrl.done_work);
}
struct rsxx_dma {
struct list_head list;
u8 cmd;
- unsigned int laddr; /* Logical address on the ramsan */
+ unsigned int laddr; /* Logical address */
struct {
u32 off;
u32 cnt;
HW_STATUS_FAULT = 0x08,
};
-#define STATUS_BUFFER_SIZE8 4096
-#define COMMAND_BUFFER_SIZE8 4096
-
static struct kmem_cache *rsxx_dma_pool;
struct dma_tracker {
return tgt;
}
-static void rsxx_dma_queue_reset(struct rsxx_cardinfo *card)
+void rsxx_dma_queue_reset(struct rsxx_cardinfo *card)
{
/* Reset all DMA Command/Status Queues */
iowrite32(DMA_QUEUE_RESET, card->regmap + RESET);
u32 q_depth = 0;
u32 intr_coal;
- if (card->config.data.intr_coal.mode != RSXX_INTR_COAL_AUTO_TUNE)
+ if (card->config.data.intr_coal.mode != RSXX_INTR_COAL_AUTO_TUNE ||
+ unlikely(card->eeh_state))
return;
for (i = 0; i < card->n_targets; i++)
}
/*----------------- RSXX DMA Handling -------------------*/
-static void rsxx_complete_dma(struct rsxx_cardinfo *card,
+static void rsxx_complete_dma(struct rsxx_dma_ctrl *ctrl,
struct rsxx_dma *dma,
unsigned int status)
{
if (status & DMA_SW_ERR)
- printk_ratelimited(KERN_ERR
- "SW Error in DMA(cmd x%02x, laddr x%08x)\n",
- dma->cmd, dma->laddr);
+ ctrl->stats.dma_sw_err++;
if (status & DMA_HW_FAULT)
- printk_ratelimited(KERN_ERR
- "HW Fault in DMA(cmd x%02x, laddr x%08x)\n",
- dma->cmd, dma->laddr);
+ ctrl->stats.dma_hw_fault++;
if (status & DMA_CANCELLED)
- printk_ratelimited(KERN_ERR
- "DMA Cancelled(cmd x%02x, laddr x%08x)\n",
- dma->cmd, dma->laddr);
+ ctrl->stats.dma_cancelled++;
if (dma->dma_addr)
- pci_unmap_page(card->dev, dma->dma_addr, get_dma_size(dma),
+ pci_unmap_page(ctrl->card->dev, dma->dma_addr,
+ get_dma_size(dma),
dma->cmd == HW_CMD_BLK_WRITE ?
PCI_DMA_TODEVICE :
PCI_DMA_FROMDEVICE);
if (dma->cb)
- dma->cb(card, dma->cb_data, status ? 1 : 0);
+ dma->cb(ctrl->card, dma->cb_data, status ? 1 : 0);
kmem_cache_free(rsxx_dma_pool, dma);
}
if (requeue_cmd)
rsxx_requeue_dma(ctrl, dma);
else
- rsxx_complete_dma(ctrl->card, dma, status);
+ rsxx_complete_dma(ctrl, dma, status);
}
static void dma_engine_stalled(unsigned long data)
{
struct rsxx_dma_ctrl *ctrl = (struct rsxx_dma_ctrl *)data;
- if (atomic_read(&ctrl->stats.hw_q_depth) == 0)
+ if (atomic_read(&ctrl->stats.hw_q_depth) == 0 ||
+ unlikely(ctrl->card->eeh_state))
return;
if (ctrl->cmd.idx != ioread32(ctrl->regmap + SW_CMD_IDX)) {
ctrl = container_of(work, struct rsxx_dma_ctrl, issue_dma_work);
hw_cmd_buf = ctrl->cmd.buf;
- if (unlikely(ctrl->card->halt))
+ if (unlikely(ctrl->card->halt) ||
+ unlikely(ctrl->card->eeh_state))
return;
while (1) {
*/
if (unlikely(ctrl->card->dma_fault)) {
push_tracker(ctrl->trackers, tag);
- rsxx_complete_dma(ctrl->card, dma, DMA_CANCELLED);
+ rsxx_complete_dma(ctrl, dma, DMA_CANCELLED);
continue;
}
/* Let HW know we've queued commands. */
if (cmds_pending) {
- /*
- * We must guarantee that the CPU writes to 'ctrl->cmd.buf'
- * (which is in PCI-consistent system-memory) from the loop
- * above make it into the coherency domain before the
- * following PIO "trigger" updating the cmd.idx. A WMB is
- * sufficient. We need not explicitly CPU cache-flush since
- * the memory is a PCI-consistent (ie; coherent) mapping.
- */
- wmb();
-
atomic_add(cmds_pending, &ctrl->stats.hw_q_depth);
mod_timer(&ctrl->activity_timer,
jiffies + DMA_ACTIVITY_TIMEOUT);
+
+ if (unlikely(ctrl->card->eeh_state)) {
+ del_timer_sync(&ctrl->activity_timer);
+ return;
+ }
+
iowrite32(ctrl->cmd.idx, ctrl->regmap + SW_CMD_IDX);
}
}
hw_st_buf = ctrl->status.buf;
if (unlikely(ctrl->card->halt) ||
- unlikely(ctrl->card->dma_fault))
+ unlikely(ctrl->card->dma_fault) ||
+ unlikely(ctrl->card->eeh_state))
return;
count = le16_to_cpu(hw_st_buf[ctrl->status.idx].count);
if (status)
rsxx_handle_dma_error(ctrl, dma, status);
else
- rsxx_complete_dma(ctrl->card, dma, 0);
+ rsxx_complete_dma(ctrl, dma, 0);
push_tracker(ctrl->trackers, tag);
/*----------------- DMA Engine Initialization & Setup -------------------*/
+int rsxx_hw_buffers_init(struct pci_dev *dev, struct rsxx_dma_ctrl *ctrl)
+{
+ ctrl->status.buf = pci_alloc_consistent(dev, STATUS_BUFFER_SIZE8,
+ &ctrl->status.dma_addr);
+ ctrl->cmd.buf = pci_alloc_consistent(dev, COMMAND_BUFFER_SIZE8,
+ &ctrl->cmd.dma_addr);
+ if (ctrl->status.buf == NULL || ctrl->cmd.buf == NULL)
+ return -ENOMEM;
+
+ memset(ctrl->status.buf, 0xac, STATUS_BUFFER_SIZE8);
+ iowrite32(lower_32_bits(ctrl->status.dma_addr),
+ ctrl->regmap + SB_ADD_LO);
+ iowrite32(upper_32_bits(ctrl->status.dma_addr),
+ ctrl->regmap + SB_ADD_HI);
+
+ memset(ctrl->cmd.buf, 0x83, COMMAND_BUFFER_SIZE8);
+ iowrite32(lower_32_bits(ctrl->cmd.dma_addr), ctrl->regmap + CB_ADD_LO);
+ iowrite32(upper_32_bits(ctrl->cmd.dma_addr), ctrl->regmap + CB_ADD_HI);
+
+ ctrl->status.idx = ioread32(ctrl->regmap + HW_STATUS_CNT);
+ if (ctrl->status.idx > RSXX_MAX_OUTSTANDING_CMDS) {
+ dev_crit(&dev->dev, "Failed reading status cnt x%x\n",
+ ctrl->status.idx);
+ return -EINVAL;
+ }
+ iowrite32(ctrl->status.idx, ctrl->regmap + HW_STATUS_CNT);
+ iowrite32(ctrl->status.idx, ctrl->regmap + SW_STATUS_CNT);
+
+ ctrl->cmd.idx = ioread32(ctrl->regmap + HW_CMD_IDX);
+ if (ctrl->cmd.idx > RSXX_MAX_OUTSTANDING_CMDS) {
+ dev_crit(&dev->dev, "Failed reading cmd cnt x%x\n",
+ ctrl->status.idx);
+ return -EINVAL;
+ }
+ iowrite32(ctrl->cmd.idx, ctrl->regmap + HW_CMD_IDX);
+ iowrite32(ctrl->cmd.idx, ctrl->regmap + SW_CMD_IDX);
+
+ return 0;
+}
+
static int rsxx_dma_ctrl_init(struct pci_dev *dev,
struct rsxx_dma_ctrl *ctrl)
{
int i;
+ int st;
memset(&ctrl->stats, 0, sizeof(ctrl->stats));
- ctrl->status.buf = pci_alloc_consistent(dev, STATUS_BUFFER_SIZE8,
- &ctrl->status.dma_addr);
- ctrl->cmd.buf = pci_alloc_consistent(dev, COMMAND_BUFFER_SIZE8,
- &ctrl->cmd.dma_addr);
- if (ctrl->status.buf == NULL || ctrl->cmd.buf == NULL)
- return -ENOMEM;
-
ctrl->trackers = vmalloc(DMA_TRACKER_LIST_SIZE8);
if (!ctrl->trackers)
return -ENOMEM;
INIT_WORK(&ctrl->issue_dma_work, rsxx_issue_dmas);
INIT_WORK(&ctrl->dma_done_work, rsxx_dma_done);
- memset(ctrl->status.buf, 0xac, STATUS_BUFFER_SIZE8);
- iowrite32(lower_32_bits(ctrl->status.dma_addr),
- ctrl->regmap + SB_ADD_LO);
- iowrite32(upper_32_bits(ctrl->status.dma_addr),
- ctrl->regmap + SB_ADD_HI);
-
- memset(ctrl->cmd.buf, 0x83, COMMAND_BUFFER_SIZE8);
- iowrite32(lower_32_bits(ctrl->cmd.dma_addr), ctrl->regmap + CB_ADD_LO);
- iowrite32(upper_32_bits(ctrl->cmd.dma_addr), ctrl->regmap + CB_ADD_HI);
-
- ctrl->status.idx = ioread32(ctrl->regmap + HW_STATUS_CNT);
- if (ctrl->status.idx > RSXX_MAX_OUTSTANDING_CMDS) {
- dev_crit(&dev->dev, "Failed reading status cnt x%x\n",
- ctrl->status.idx);
- return -EINVAL;
- }
- iowrite32(ctrl->status.idx, ctrl->regmap + HW_STATUS_CNT);
- iowrite32(ctrl->status.idx, ctrl->regmap + SW_STATUS_CNT);
-
- ctrl->cmd.idx = ioread32(ctrl->regmap + HW_CMD_IDX);
- if (ctrl->cmd.idx > RSXX_MAX_OUTSTANDING_CMDS) {
- dev_crit(&dev->dev, "Failed reading cmd cnt x%x\n",
- ctrl->status.idx);
- return -EINVAL;
- }
- iowrite32(ctrl->cmd.idx, ctrl->regmap + HW_CMD_IDX);
- iowrite32(ctrl->cmd.idx, ctrl->regmap + SW_CMD_IDX);
-
- wmb();
+ st = rsxx_hw_buffers_init(dev, ctrl);
+ if (st)
+ return st;
return 0;
}
return 0;
}
-static int rsxx_dma_configure(struct rsxx_cardinfo *card)
+int rsxx_dma_configure(struct rsxx_cardinfo *card)
{
u32 intr_coal;
}
}
+int rsxx_eeh_save_issued_dmas(struct rsxx_cardinfo *card)
+{
+ int i;
+ int j;
+ int cnt;
+ struct rsxx_dma *dma;
+ struct list_head *issued_dmas;
+
+ issued_dmas = kzalloc(sizeof(*issued_dmas) * card->n_targets,
+ GFP_KERNEL);
+ if (!issued_dmas)
+ return -ENOMEM;
+
+ for (i = 0; i < card->n_targets; i++) {
+ INIT_LIST_HEAD(&issued_dmas[i]);
+ cnt = 0;
+ for (j = 0; j < RSXX_MAX_OUTSTANDING_CMDS; j++) {
+ dma = get_tracker_dma(card->ctrl[i].trackers, j);
+ if (dma == NULL)
+ continue;
+
+ if (dma->cmd == HW_CMD_BLK_WRITE)
+ card->ctrl[i].stats.writes_issued--;
+ else if (dma->cmd == HW_CMD_BLK_DISCARD)
+ card->ctrl[i].stats.discards_issued--;
+ else
+ card->ctrl[i].stats.reads_issued--;
+
+ list_add_tail(&dma->list, &issued_dmas[i]);
+ push_tracker(card->ctrl[i].trackers, j);
+ cnt++;
+ }
+
+ spin_lock(&card->ctrl[i].queue_lock);
+ list_splice(&issued_dmas[i], &card->ctrl[i].queue);
+
+ atomic_sub(cnt, &card->ctrl[i].stats.hw_q_depth);
+ card->ctrl[i].stats.sw_q_depth += cnt;
+ card->ctrl[i].e_cnt = 0;
+
+ list_for_each_entry(dma, &card->ctrl[i].queue, list) {
+ if (dma->dma_addr)
+ pci_unmap_page(card->dev, dma->dma_addr,
+ get_dma_size(dma),
+ dma->cmd == HW_CMD_BLK_WRITE ?
+ PCI_DMA_TODEVICE :
+ PCI_DMA_FROMDEVICE);
+ }
+ spin_unlock(&card->ctrl[i].queue_lock);
+ }
+
+ kfree(issued_dmas);
+
+ return 0;
+}
+
+void rsxx_eeh_cancel_dmas(struct rsxx_cardinfo *card)
+{
+ struct rsxx_dma *dma;
+ struct rsxx_dma *tmp;
+ int i;
+
+ for (i = 0; i < card->n_targets; i++) {
+ spin_lock(&card->ctrl[i].queue_lock);
+ list_for_each_entry_safe(dma, tmp, &card->ctrl[i].queue, list) {
+ list_del(&dma->list);
+
+ rsxx_complete_dma(&card->ctrl[i], dma, DMA_CANCELLED);
+ }
+ spin_unlock(&card->ctrl[i].queue_lock);
+ }
+}
+
+int rsxx_eeh_remap_dmas(struct rsxx_cardinfo *card)
+{
+ struct rsxx_dma *dma;
+ int i;
+
+ for (i = 0; i < card->n_targets; i++) {
+ spin_lock(&card->ctrl[i].queue_lock);
+ list_for_each_entry(dma, &card->ctrl[i].queue, list) {
+ dma->dma_addr = pci_map_page(card->dev, dma->page,
+ dma->pg_off, get_dma_size(dma),
+ dma->cmd == HW_CMD_BLK_WRITE ?
+ PCI_DMA_TODEVICE :
+ PCI_DMA_FROMDEVICE);
+ if (!dma->dma_addr) {
+ spin_unlock(&card->ctrl[i].queue_lock);
+ kmem_cache_free(rsxx_dma_pool, dma);
+ return -ENOMEM;
+ }
+ }
+ spin_unlock(&card->ctrl[i].queue_lock);
+ }
+
+ return 0;
+}
int rsxx_dma_init(void)
{
/*----------------- IOCTL Definitions -------------------*/
+#define RSXX_MAX_DATA 8
+
struct rsxx_reg_access {
__u32 addr;
__u32 cnt;
__u32 stat;
__u32 stream;
- __u32 data[8];
+ __u32 data[RSXX_MAX_DATA];
};
-#define RSXX_MAX_REG_CNT (8 * (sizeof(__u32)))
+#define RSXX_MAX_REG_CNT (RSXX_MAX_DATA * (sizeof(__u32)))
#define RSXX_IOC_MAGIC 'r'
};
/* Vendor ID Values */
-#define RSXX_VENDOR_ID_TMS_IBM 0
+#define RSXX_VENDOR_ID_IBM 0
#define RSXX_VENDOR_ID_DSI 1
#define RSXX_VENDOR_COUNT 2
struct proc_cmd;
-#define PCI_VENDOR_ID_TMS_IBM 0x15B6
-#define PCI_DEVICE_ID_RS70_FLASH 0x0019
-#define PCI_DEVICE_ID_RS70D_FLASH 0x001A
-#define PCI_DEVICE_ID_RS80_FLASH 0x001C
-#define PCI_DEVICE_ID_RS81_FLASH 0x001E
+#define PCI_DEVICE_ID_FS70_FLASH 0x04A9
+#define PCI_DEVICE_ID_FS80_FLASH 0x04AA
#define RS70_PCI_REV_SUPPORTED 4
#define DRIVER_NAME "rsxx"
-#define DRIVER_VERSION "3.7"
+#define DRIVER_VERSION "4.0"
/* Block size is 4096 */
#define RSXX_HW_BLK_SHIFT 12
#define RSXX_MAX_OUTSTANDING_CMDS 255
#define RSXX_CS_IDX_MASK 0xff
+#define STATUS_BUFFER_SIZE8 4096
+#define COMMAND_BUFFER_SIZE8 4096
+
#define RSXX_MAX_TARGETS 8
struct dma_tracker_list;
u32 discards_failed;
u32 done_rescheduled;
u32 issue_rescheduled;
+ u32 dma_sw_err;
+ u32 dma_hw_fault;
+ u32 dma_cancelled;
u32 sw_q_depth; /* Number of DMAs on the SW queue. */
atomic_t hw_q_depth; /* Number of DMAs queued to HW. */
};
struct rsxx_cardinfo {
struct pci_dev *dev;
unsigned int halt;
+ unsigned int eeh_state;
void __iomem *regmap;
spinlock_t irq_lock;
PERF_RD512_HI = 0xac,
PERF_WR512_LO = 0xb0,
PERF_WR512_HI = 0xb4,
+ PCI_RECONFIG = 0xb8,
};
enum rsxx_intr {
CR_INTR_DMA5 = 0x00000080,
CR_INTR_DMA6 = 0x00000100,
CR_INTR_DMA7 = 0x00000200,
+ CR_INTR_ALL_C = 0x0000003f,
+ CR_INTR_ALL_G = 0x000003ff,
CR_INTR_DMA_ALL = 0x000003f5,
CR_INTR_ALL = 0xffffffff,
};
DMA_QUEUE_RESET = 0x00000001,
};
+enum rsxx_hw_fifo_flush {
+ RSXX_FLUSH_BUSY = 0x00000002,
+ RSXX_FLUSH_TIMEOUT = 0x00000004,
+};
+
enum rsxx_pci_revision {
RSXX_DISCARD_SUPPORT = 2,
+ RSXX_EEH_SUPPORT = 3,
};
enum rsxx_creg_cmd {
void rsxx_dma_destroy(struct rsxx_cardinfo *card);
int rsxx_dma_init(void);
void rsxx_dma_cleanup(void);
+void rsxx_dma_queue_reset(struct rsxx_cardinfo *card);
+int rsxx_dma_configure(struct rsxx_cardinfo *card);
int rsxx_dma_queue_bio(struct rsxx_cardinfo *card,
struct bio *bio,
atomic_t *n_dmas,
rsxx_dma_cb cb,
void *cb_data);
+int rsxx_hw_buffers_init(struct pci_dev *dev, struct rsxx_dma_ctrl *ctrl);
+int rsxx_eeh_save_issued_dmas(struct rsxx_cardinfo *card);
+void rsxx_eeh_cancel_dmas(struct rsxx_cardinfo *card);
+int rsxx_eeh_remap_dmas(struct rsxx_cardinfo *card);
/***** cregs.c *****/
int rsxx_creg_write(struct rsxx_cardinfo *card, u32 addr,
void rsxx_creg_destroy(struct rsxx_cardinfo *card);
int rsxx_creg_init(void);
void rsxx_creg_cleanup(void);
-
int rsxx_reg_access(struct rsxx_cardinfo *card,
struct rsxx_reg_access __user *ucmd,
int read);
+void rsxx_eeh_save_issued_creg(struct rsxx_cardinfo *card);
+void rsxx_kick_creg_queue(struct rsxx_cardinfo *card);
#define foreach_grant_safe(pos, n, rbtree, node) \
for ((pos) = container_of(rb_first((rbtree)), typeof(*(pos)), node), \
- (n) = rb_next(&(pos)->node); \
+ (n) = (&(pos)->node != NULL) ? rb_next(&(pos)->node) : NULL; \
&(pos)->node != NULL; \
(pos) = container_of(n, typeof(*(pos)), node), \
(n) = (&(pos)->node != NULL) ? rb_next(&(pos)->node) : NULL)
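The initializer change matters for an empty tree: rb_first() returns NULL there, container_of() turns that into a pos whose &pos->node arithmetic lands back on NULL, and the unpatched form would pass that NULL to rb_next() before the loop condition could terminate. Mirroring the ternary already used in the step expression closes that hole.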
static void print_stats(struct xen_blkif *blkif)
{
- pr_info("xen-blkback (%s): oo %3d | rd %4d | wr %4d | f %4d"
- " | ds %4d\n",
+ pr_info("xen-blkback (%s): oo %3llu | rd %4llu | wr %4llu | f %4llu"
+ " | ds %4llu\n",
current->comm, blkif->st_oo_req,
blkif->st_rd_req, blkif->st_wr_req,
blkif->st_f_req, blkif->st_ds_req);
}
struct seg_buf {
- unsigned long buf;
+ unsigned int offset;
unsigned int nsec;
};
/*
* If this is a new persistent grant
* save the handler
*/
- persistent_gnts[i]->handle = map[j].handle;
- persistent_gnts[i]->dev_bus_addr =
- map[j++].dev_bus_addr;
+ persistent_gnts[i]->handle = map[j++].handle;
}
pending_handle(pending_req, i) =
persistent_gnts[i]->handle;
if (ret)
continue;
-
- seg[i].buf = persistent_gnts[i]->dev_bus_addr |
- (req->u.rw.seg[i].first_sect << 9);
} else {
- pending_handle(pending_req, i) = map[j].handle;
+ pending_handle(pending_req, i) = map[j++].handle;
bitmap_set(pending_req->unmap_seg, i, 1);
- if (ret) {
- j++;
+ if (ret)
continue;
- }
-
- seg[i].buf = map[j++].dev_bus_addr |
- (req->u.rw.seg[i].first_sect << 9);
}
+ seg[i].offset = (req->u.rw.seg[i].first_sect << 9);
}
return ret;
}
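The seg[i].offset assignment is straight sector-to-byte arithmetic: first_sect << 9 multiplies by the 512-byte sector size, so first_sect = 3 becomes a 1536-byte offset into the granted page.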
return err;
}
+static int dispatch_other_io(struct xen_blkif *blkif,
+ struct blkif_request *req,
+ struct pending_req *pending_req)
+{
+ free_req(pending_req);
+ make_response(blkif, req->u.other.id, req->operation,
+ BLKIF_RSP_EOPNOTSUPP);
+ return -EIO;
+}
+
static void xen_blk_drain_io(struct xen_blkif *blkif)
{
atomic_set(&blkif->drain, 1);
/* Apply all sanity checks to /private copy/ of request. */
barrier();
- if (unlikely(req.operation == BLKIF_OP_DISCARD)) {
+
+ switch (req.operation) {
+ case BLKIF_OP_READ:
+ case BLKIF_OP_WRITE:
+ case BLKIF_OP_WRITE_BARRIER:
+ case BLKIF_OP_FLUSH_DISKCACHE:
+ if (dispatch_rw_block_io(blkif, &req, pending_req))
+ goto done;
+ break;
+ case BLKIF_OP_DISCARD:
free_req(pending_req);
if (dispatch_discard_io(blkif, &req))
- break;
- } else if (dispatch_rw_block_io(blkif, &req, pending_req))
+ goto done;
break;
+ default:
+ if (dispatch_other_io(blkif, &req, pending_req))
+ goto done;
+ break;
+ }
/* Yield point for this unbounded loop. */
cond_resched();
}
-
+done:
return more_to_do;
}
pr_debug(DRV_PFX "access denied: %s of [%llu,%llu] on dev=%04x\n",
operation == READ ? "read" : "write",
preq.sector_number,
- preq.sector_number + preq.nr_sects, preq.dev);
+ preq.sector_number + preq.nr_sects,
+ blkif->vbd.pdevice);
goto fail_response;
}
(bio_add_page(bio,
pages[i],
seg[i].nsec << 9,
- seg[i].buf & ~PAGE_MASK) == 0)) {
+ seg[i].offset) == 0)) {
bio = bio_alloc(GFP_KERNEL, nseg-i);
if (unlikely(bio == NULL))
bio->bi_end_io = end_block_io_op;
}
- /*
- * We set it one so that the last submit_bio does not have to call
- * atomic_inc.
- */
atomic_set(&pending_req->pendcnt, nbio);
-
- /* Get a reference count for the disk queue and start sending I/O */
blk_start_plug(&plug);
for (i = 0; i < nbio; i++)
fail_put_bio:
for (i = 0; i < nbio; i++)
bio_put(biolist[i]);
+ atomic_set(&pending_req->pendcnt, 1);
__end_block_io_op(pending_req, -EINVAL);
msleep(1); /* back off a bit */
return -EIO;
uint64_t nr_sectors;
} __attribute__((__packed__));
+struct blkif_x86_32_request_other {
+ uint8_t _pad1;
+ blkif_vdev_t _pad2;
+ uint64_t id; /* private guest value, echoed in resp */
+} __attribute__((__packed__));
+
struct blkif_x86_32_request {
uint8_t operation; /* BLKIF_OP_??? */
union {
struct blkif_x86_32_request_rw rw;
struct blkif_x86_32_request_discard discard;
+ struct blkif_x86_32_request_other other;
} u;
} __attribute__((__packed__));
uint64_t nr_sectors;
} __attribute__((__packed__));
+struct blkif_x86_64_request_other {
+ uint8_t _pad1;
+ blkif_vdev_t _pad2;
+ uint32_t _pad3; /* offsetof(blkif_..,u.discard.id)==8 */
+ uint64_t id; /* private guest value, echoed in resp */
+} __attribute__((__packed__));
+
struct blkif_x86_64_request {
uint8_t operation; /* BLKIF_OP_??? */
union {
struct blkif_x86_64_request_rw rw;
struct blkif_x86_64_request_discard discard;
+ struct blkif_x86_64_request_other other;
} u;
} __attribute__((__packed__));
struct page *page;
grant_ref_t gnt;
grant_handle_t handle;
- uint64_t dev_bus_addr;
struct rb_node node;
};
/* statistics */
unsigned long st_print;
- int st_rd_req;
- int st_wr_req;
- int st_oo_req;
- int st_f_req;
- int st_ds_req;
- int st_rd_sect;
- int st_wr_sect;
+ unsigned long long st_rd_req;
+ unsigned long long st_wr_req;
+ unsigned long long st_oo_req;
+ unsigned long long st_f_req;
+ unsigned long long st_ds_req;
+ unsigned long long st_rd_sect;
+ unsigned long long st_wr_sect;
wait_queue_head_t waiting_to_free;
};
dst->u.discard.nr_sectors = src->u.discard.nr_sectors;
break;
default:
+ /*
+ * Don't know how to translate this op. Only get the
+ * ID so failure can be reported to the frontend.
+ */
+ dst->u.other.id = src->u.other.id;
break;
}
}
dst->u.discard.nr_sectors = src->u.discard.nr_sectors;
break;
default:
+ /*
+ * Don't know how to translate this op. Only get the
+ * ID so failure can be reported to the frontend.
+ */
+ dst->u.other.id = src->u.other.id;
break;
}
}
} \
static DEVICE_ATTR(name, S_IRUGO, show_##name, NULL)
-VBD_SHOW(oo_req, "%d\n", be->blkif->st_oo_req);
-VBD_SHOW(rd_req, "%d\n", be->blkif->st_rd_req);
-VBD_SHOW(wr_req, "%d\n", be->blkif->st_wr_req);
-VBD_SHOW(f_req, "%d\n", be->blkif->st_f_req);
-VBD_SHOW(ds_req, "%d\n", be->blkif->st_ds_req);
-VBD_SHOW(rd_sect, "%d\n", be->blkif->st_rd_sect);
-VBD_SHOW(wr_sect, "%d\n", be->blkif->st_wr_sect);
+VBD_SHOW(oo_req, "%llu\n", be->blkif->st_oo_req);
+VBD_SHOW(rd_req, "%llu\n", be->blkif->st_rd_req);
+VBD_SHOW(wr_req, "%llu\n", be->blkif->st_wr_req);
+VBD_SHOW(f_req, "%llu\n", be->blkif->st_f_req);
+VBD_SHOW(ds_req, "%llu\n", be->blkif->st_ds_req);
+VBD_SHOW(rd_sect, "%llu\n", be->blkif->st_rd_sect);
+VBD_SHOW(wr_sect, "%llu\n", be->blkif->st_wr_sect);
static struct attribute *xen_vbdstat_attrs[] = {
&dev_attr_oo_req.attr,
#include <linux/mutex.h>
#include <linux/scatterlist.h>
#include <linux/bitmap.h>
-#include <linux/llist.h>
+#include <linux/list.h>
#include <xen/xen.h>
#include <xen/xenbus.h>
struct grant {
grant_ref_t gref;
unsigned long pfn;
- struct llist_node node;
+ struct list_head node;
};
struct blk_shadow {
struct blkif_request req;
struct request *request;
- unsigned long frame[BLKIF_MAX_SEGMENTS_PER_REQUEST];
struct grant *grants_used[BLKIF_MAX_SEGMENTS_PER_REQUEST];
};
struct work_struct work;
struct gnttab_free_callback callback;
struct blk_shadow shadow[BLK_RING_SIZE];
- struct llist_head persistent_gnts;
+ struct list_head persistent_gnts;
unsigned int persistent_gnts_c;
unsigned long shadow_free;
unsigned int feature_flush;
return 0;
}
+static int fill_grant_buffer(struct blkfront_info *info, int num)
+{
+ struct page *granted_page;
+ struct grant *gnt_list_entry, *n;
+ int i = 0;
+
+ while (i < num) {
+ gnt_list_entry = kzalloc(sizeof(struct grant), GFP_NOIO);
+ if (!gnt_list_entry)
+ goto out_of_memory;
+
+ granted_page = alloc_page(GFP_NOIO);
+ if (!granted_page) {
+ kfree(gnt_list_entry);
+ goto out_of_memory;
+ }
+
+ gnt_list_entry->pfn = page_to_pfn(granted_page);
+ gnt_list_entry->gref = GRANT_INVALID_REF;
+ list_add(&gnt_list_entry->node, &info->persistent_gnts);
+ i++;
+ }
+
+ return 0;
+
+out_of_memory:
+ list_for_each_entry_safe(gnt_list_entry, n,
+ &info->persistent_gnts, node) {
+ list_del(&gnt_list_entry->node);
+ __free_page(pfn_to_page(gnt_list_entry->pfn));
+ kfree(gnt_list_entry);
+ i--;
+ }
+ BUG_ON(i != 0);
+ return -ENOMEM;
+}
+
+static struct grant *get_grant(grant_ref_t *gref_head,
+ struct blkfront_info *info)
+{
+ struct grant *gnt_list_entry;
+ unsigned long buffer_mfn;
+
+ BUG_ON(list_empty(&info->persistent_gnts));
+ gnt_list_entry = list_first_entry(&info->persistent_gnts, struct grant,
+ node);
+ list_del(&gnt_list_entry->node);
+
+ if (gnt_list_entry->gref != GRANT_INVALID_REF) {
+ info->persistent_gnts_c--;
+ return gnt_list_entry;
+ }
+
+ /* Assign a gref to this page */
+ gnt_list_entry->gref = gnttab_claim_grant_reference(gref_head);
+ BUG_ON(gnt_list_entry->gref == -ENOSPC);
+ buffer_mfn = pfn_to_mfn(gnt_list_entry->pfn);
+ gnttab_grant_foreign_access_ref(gnt_list_entry->gref,
+ info->xbdev->otherend_id,
+ buffer_mfn, 0);
+ return gnt_list_entry;
+}
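fill_grant_buffer() preallocates every grant the ring can consume (BLK_RING_SIZE * BLKIF_MAX_SEGMENTS_PER_REQUEST entries, per the setup hunk further down), which is why get_grant() may BUG_ON an empty list instead of allocating: the hot path never touches the allocator. A compact userspace sketch of that preallocated free-list pattern (hypothetical names):

    #include <stdio.h>
    #include <stdlib.h>

    struct grant {
        int gref;             /* -1 plays the role of GRANT_INVALID_REF */
        struct grant *next;
    };

    static struct grant *pool;

    static int pool_fill(int num)
    {
        while (num--) {
            struct grant *g = calloc(1, sizeof(*g));

            if (!g)
                return -1;    /* a real caller would unwind here */
            g->gref = -1;
            g->next = pool;
            pool = g;
        }
        return 0;
    }

    static struct grant *pool_get(void)
    {
        struct grant *g = pool;

        /* Never empty by construction: the pool was sized for the
         * worst case up front, so the hot path cannot fail. */
        pool = g->next;
        return g;
    }

    int main(void)
    {
        if (pool_fill(4))
            return 1;
        printf("got entry with gref %d\n", pool_get()->gref);
        return 0;
    }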
+
static const char *op_name(int op)
{
static const char *const names[] = {
static int blkif_queue_request(struct request *req)
{
struct blkfront_info *info = req->rq_disk->private_data;
- unsigned long buffer_mfn;
struct blkif_request *ring_req;
unsigned long id;
unsigned int fsect, lsect;
*/
bool new_persistent_gnts;
grant_ref_t gref_head;
- struct page *granted_page;
struct grant *gnt_list_entry = NULL;
struct scatterlist *sg;
fsect = sg->offset >> 9;
lsect = fsect + (sg->length >> 9) - 1;
- if (info->persistent_gnts_c) {
- BUG_ON(llist_empty(&info->persistent_gnts));
- gnt_list_entry = llist_entry(
- llist_del_first(&info->persistent_gnts),
- struct grant, node);
-
- ref = gnt_list_entry->gref;
- buffer_mfn = pfn_to_mfn(gnt_list_entry->pfn);
- info->persistent_gnts_c--;
- } else {
- ref = gnttab_claim_grant_reference(&gref_head);
- BUG_ON(ref == -ENOSPC);
-
- gnt_list_entry =
- kmalloc(sizeof(struct grant),
- GFP_ATOMIC);
- if (!gnt_list_entry)
- return -ENOMEM;
-
- granted_page = alloc_page(GFP_ATOMIC);
- if (!granted_page) {
- kfree(gnt_list_entry);
- return -ENOMEM;
- }
-
- gnt_list_entry->pfn =
- page_to_pfn(granted_page);
- gnt_list_entry->gref = ref;
-
- buffer_mfn = pfn_to_mfn(page_to_pfn(
- granted_page));
- gnttab_grant_foreign_access_ref(ref,
- info->xbdev->otherend_id,
- buffer_mfn, 0);
- }
+ gnt_list_entry = get_grant(&gref_head, info);
+ ref = gnt_list_entry->gref;
info->shadow[id].grants_used[i] = gnt_list_entry;
kunmap_atomic(shared_data);
}
- info->shadow[id].frame[i] = mfn_to_pfn(buffer_mfn);
ring_req->u.rw.seg[i] =
(struct blkif_request_segment) {
.gref = ref,
static void blkif_free(struct blkfront_info *info, int suspend)
{
- struct llist_node *all_gnts;
- struct grant *persistent_gnt, *tmp;
- struct llist_node *n;
+ struct grant *persistent_gnt;
+ struct grant *n;
/* Prevent new requests being issued until we fix things up. */
spin_lock_irq(&info->io_lock);
blk_stop_queue(info->rq);
/* Remove all persistent grants */
- if (info->persistent_gnts_c) {
- all_gnts = llist_del_all(&info->persistent_gnts);
- persistent_gnt = llist_entry(all_gnts, typeof(*(persistent_gnt)), node);
- while (persistent_gnt) {
- gnttab_end_foreign_access(persistent_gnt->gref, 0, 0UL);
+ if (!list_empty(&info->persistent_gnts)) {
+ list_for_each_entry_safe(persistent_gnt, n,
+ &info->persistent_gnts, node) {
+ list_del(&persistent_gnt->node);
+ if (persistent_gnt->gref != GRANT_INVALID_REF) {
+ gnttab_end_foreign_access(persistent_gnt->gref,
+ 0, 0UL);
+ info->persistent_gnts_c--;
+ }
__free_page(pfn_to_page(persistent_gnt->pfn));
- tmp = persistent_gnt;
- n = persistent_gnt->node.next;
- if (n)
- persistent_gnt = llist_entry(n, typeof(*(persistent_gnt)), node);
- else
- persistent_gnt = NULL;
- kfree(tmp);
+ kfree(persistent_gnt);
}
- info->persistent_gnts_c = 0;
}
+ BUG_ON(info->persistent_gnts_c != 0);
/* No more gnttab callback work. */
gnttab_cancel_free_callback(&info->callback);
}
/* Add the persistent grant into the list of free grants */
for (i = 0; i < s->req.u.rw.nr_segments; i++) {
- llist_add(&s->grants_used[i]->node, &info->persistent_gnts);
+ list_add(&s->grants_used[i]->node, &info->persistent_gnts);
info->persistent_gnts_c++;
}
}
sg_init_table(info->sg, BLKIF_MAX_SEGMENTS_PER_REQUEST);
+ /* Allocate memory for grants */
+ err = fill_grant_buffer(info, BLK_RING_SIZE *
+ BLKIF_MAX_SEGMENTS_PER_REQUEST);
+ if (err)
+ goto fail;
+
err = xenbus_grant_ring(dev, virt_to_mfn(info->ring.sring));
if (err < 0) {
free_page((unsigned long)sring);
spin_lock_init(&info->io_lock);
info->xbdev = dev;
info->vdevice = vdevice;
- init_llist_head(&info->persistent_gnts);
+ INIT_LIST_HEAD(&info->persistent_gnts);
info->persistent_gnts_c = 0;
info->connected = BLKIF_STATE_DISCONNECTED;
INIT_WORK(&info->work, blkif_restart_queue);
int j;
/* Stage 1: Make a safe copy of the shadow state. */
- copy = kmalloc(sizeof(info->shadow),
+ copy = kmemdup(info->shadow, sizeof(info->shadow),
GFP_NOIO | __GFP_REPEAT | __GFP_HIGH);
if (!copy)
return -ENOMEM;
- memcpy(copy, info->shadow, sizeof(info->shadow));
/* Stage 2: Set up free list. */
memset(&info->shadow, 0, sizeof(info->shadow));
gnttab_grant_foreign_access_ref(
req->u.rw.seg[j].gref,
info->xbdev->otherend_id,
- pfn_to_mfn(info->shadow[req->u.rw.id].frame[j]),
+ pfn_to_mfn(copy[i].grants_used[j]->pfn),
0);
}
info->shadow[req->u.rw.id].req = *req;
{ USB_DEVICE(0x03F0, 0x311D) },
/* Atheros AR3012 with sflash firmware*/
+ { USB_DEVICE(0x0CF3, 0x0036) },
{ USB_DEVICE(0x0CF3, 0x3004) },
{ USB_DEVICE(0x0CF3, 0x3008) },
{ USB_DEVICE(0x0CF3, 0x311D) },
+ { USB_DEVICE(0x0CF3, 0x817a) },
{ USB_DEVICE(0x13d3, 0x3375) },
{ USB_DEVICE(0x04CA, 0x3004) },
{ USB_DEVICE(0x04CA, 0x3005) },
static struct usb_device_id ath3k_blist_tbl[] = {
/* Atheros AR3012 with sflash firmware*/
+ { USB_DEVICE(0x0CF3, 0x0036), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0x3004), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0x3008), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0x311D), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0CF3, 0x817a), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x13d3, 0x3375), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x3004), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x3005), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x03f0, 0x311d), .driver_info = BTUSB_IGNORE },
/* Atheros 3012 with sflash firmware */
+ { USB_DEVICE(0x0cf3, 0x0036), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0x3004), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0x3008), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0x311d), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0cf3, 0x817a), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x13d3, 0x3375), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x3004), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x3005), .driver_info = BTUSB_ATH3012 },
struct hpet_dev *devp;
unsigned long addr;
- if (((vma->vm_end - vma->vm_start) != PAGE_SIZE) || vma->vm_pgoff)
- return -EINVAL;
-
devp = file->private_data;
addr = devp->hd_hpets->hp_hpet_phys;
if (addr & (PAGE_SIZE - 1))
return -ENOSYS;
- vma->vm_flags |= VM_IO;
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
-
- if (io_remap_pfn_range(vma, vma->vm_start, addr >> PAGE_SHIFT,
- PAGE_SIZE, vma->vm_page_prot)) {
- printk(KERN_ERR "%s: io_remap_pfn_range failed\n",
- __func__);
- return -EAGAIN;
- }
-
- return 0;
+ return vm_iomap_memory(vma, addr, PAGE_SIZE);
#else
return -ENOSYS;
#endif
}
EXPORT_SYMBOL_GPL(hwrng_unregister);
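+/*
+ * On module unload every RNG must already have been unregistered; all
+ * that is left to free is the scratch buffer.
+ */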
+static void __exit hwrng_exit(void)
+{
+ mutex_lock(&rng_mutex);
+ BUG_ON(current_rng);
+ kfree(rng_buffer);
+ mutex_unlock(&rng_mutex);
+}
+
+module_exit(hwrng_exit);
MODULE_DESCRIPTION("H/W Random Number Generator (RNG) driver");
MODULE_LICENSE("GPL");
spinlock_t ports_lock;
/* To protect the vq operations for the control channel */
- spinlock_t cvq_lock;
+ spinlock_t c_ivq_lock;
+ spinlock_t c_ovq_lock;
/* The current config space is stored here */
struct virtio_console_config config;
vq = portdev->c_ovq;
sg_init_one(sg, &cpkt, sizeof(cpkt));
+
+ spin_lock(&portdev->c_ovq_lock);
if (virtqueue_add_buf(vq, sg, 1, 0, &cpkt, GFP_ATOMIC) == 0) {
virtqueue_kick(vq);
while (!virtqueue_get_buf(vq, &len))
cpu_relax();
}
+ spin_unlock(&portdev->c_ovq_lock);
return 0;
}
* rproc_serial does not want the console port, only
* the generic port implementation.
*/
- port->host_connected = port->guest_connected = true;
+ port->host_connected = true;
else if (!use_multiport(port->portdev)) {
/*
* If we're not using multiport support,
portdev = container_of(work, struct ports_device, control_work);
vq = portdev->c_ivq;
- spin_lock(&portdev->cvq_lock);
+ spin_lock(&portdev->c_ivq_lock);
while ((buf = virtqueue_get_buf(vq, &len))) {
- spin_unlock(&portdev->cvq_lock);
+ spin_unlock(&portdev->c_ivq_lock);
buf->len = len;
buf->offset = 0;
handle_control_message(portdev, buf);
- spin_lock(&portdev->cvq_lock);
+ spin_lock(&portdev->c_ivq_lock);
if (add_inbuf(portdev->c_ivq, buf) < 0) {
dev_warn(&portdev->vdev->dev,
"Error adding buffer to queue\n");
free_buf(buf, false);
}
}
- spin_unlock(&portdev->cvq_lock);
+ spin_unlock(&portdev->c_ivq_lock);
}
static void out_intr(struct virtqueue *vq)
port->inbuf = get_inbuf(port);
/*
- * Don't queue up data when port is closed. This condition
+ * Normally the port should not accept data when the port is
+ * closed. For generic serial ports, the host won't (shouldn't)
+ * send data until the guest is connected. But this condition
* can be reached when a console port is not yet connected (no
- * tty is spawned) and the host sends out data to console
- * ports. For generic serial ports, the host won't
- * (shouldn't) send data till the guest is connected.
+ * tty is spawned) and the other side sends out data over the
+ * vring, or when a remote device starts sending data before
+ * the ports are opened.
+ *
+ * A generic serial port will discard data if not connected,
+ * while console ports and rproc-serial ports accept data at
+ * any time. rproc-serial is initialized with guest_connected set
+ * to false because port_fops_open expects this. Console ports are
+ * hooked up to an HVC console and are initialized with
+ * guest_connected set to true.
*/
- if (!port->guest_connected)
+
+ if (!port->guest_connected && !is_rproc_serial(port->portdev->vdev))
discard_port_data(port);
spin_unlock_irqrestore(&port->inbuf_lock, flags);
if (multiport) {
unsigned int nr_added_bufs;
- spin_lock_init(&portdev->cvq_lock);
+ spin_lock_init(&portdev->c_ivq_lock);
+ spin_lock_init(&portdev->c_ovq_lock);
INIT_WORK(&portdev->control_work, &control_work_handler);
- nr_added_bufs = fill_queue(portdev->c_ivq, &portdev->cvq_lock);
+ nr_added_bufs = fill_queue(portdev->c_ivq,
+ &portdev->c_ivq_lock);
if (!nr_added_bufs) {
dev_err(&vdev->dev,
"Error allocating buffers for control queue\n");
return ret;
if (use_multiport(portdev))
- fill_queue(portdev->c_ivq, &portdev->cvq_lock);
+ fill_queue(portdev->c_ivq, &portdev->c_ivq_lock);
list_for_each_entry(port, &portdev->ports, list) {
port->in_vq = portdev->in_vqs[port->id];
clks[pll_a_out0] = clk;
/* PLLE */
- clk = tegra_clk_register_plle("pll_e", "pll_ref", clk_base, NULL,
+ clk = tegra_clk_register_plle("pll_e", "pll_ref", clk_base, pmc_base,
0, 100000000, &pll_e_params,
0, pll_e_freq_table, NULL);
clk_register_clkdev(clk, "pll_e", NULL);
policy->shared_type == CPUFREQ_SHARED_TYPE_ANY) {
cpumask_copy(policy->cpus, perf->shared_cpu_map);
}
- cpumask_copy(policy->related_cpus, perf->shared_cpu_map);
#ifdef CONFIG_SMP
dmi_check_system(sw_any_bug_dmi_table);
if (check_amd_hwpstate_cpu(cpu) && !acpi_pstate_strict) {
cpumask_clear(policy->cpus);
cpumask_set_cpu(cpu, policy->cpus);
- cpumask_copy(policy->related_cpus, cpu_sibling_mask(cpu));
policy->shared_type = CPUFREQ_SHARED_TYPE_HW;
pr_info_once(PFX "overriding BIOS provided _PSD data\n");
}
static int cpu0_cpufreq_probe(struct platform_device *pdev)
{
- struct device_node *np;
+ struct device_node *np, *parent;
int ret;
- for_each_child_of_node(of_find_node_by_path("/cpus"), np) {
+ parent = of_find_node_by_path("/cpus");
+ if (!parent) {
+ pr_err("failed to find OF /cpus\n");
+ return -ENOENT;
+ }
+
+ for_each_child_of_node(parent, np) {
if (of_get_property(np, "operating-points", NULL))
break;
}
* published by the Free Software Foundation.
*/
-#ifndef _CPUFREQ_GOVERNER_H
-#define _CPUFREQ_GOVERNER_H
+#ifndef _CPUFREQ_GOVERNOR_H
+#define _CPUFREQ_GOVERNOR_H
#include <linux/cpufreq.h>
#include <linux/kobject.h>
unsigned int sampling_rate);
int cpufreq_governor_dbs(struct dbs_data *dbs_data,
struct cpufreq_policy *policy, unsigned int event);
-#endif /* _CPUFREQ_GOVERNER_H */
+#endif /* _CPUFREQ_GOVERNOR_H */
{
struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
- if (!cpufreq_frequency_get_table(cpu))
+ if (!policy)
return;
- if (policy && !policy_is_shared(policy)) {
+ if (!cpufreq_frequency_get_table(cpu))
+ goto put_ref;
+
+ if (!policy_is_shared(policy)) {
pr_debug("%s: Free sysfs stat\n", __func__);
sysfs_remove_group(&policy->kobj, &stats_attr_group);
}
- if (policy)
- cpufreq_cpu_put(policy);
+
+put_ref:
+ cpufreq_cpu_put(policy);
}
static int cpufreq_stats_create_table(struct cpufreq_policy *policy,
static int intel_pstate_min_pstate(void)
{
u64 value;
- rdmsrl(0xCE, value);
+ rdmsrl(MSR_PLATFORM_INFO, value);
return (value >> 40) & 0xFF;
}
static int intel_pstate_max_pstate(void)
{
u64 value;
- rdmsrl(0xCE, value);
+ rdmsrl(MSR_PLATFORM_INFO, value);
return (value >> 8) & 0xFF;
}
{
u64 value;
int nont, ret;
- rdmsrl(0x1AD, value);
+ rdmsrl(MSR_NHM_TURBO_RATIO_LIMIT, value);
nont = intel_pstate_max_pstate();
ret = ((value) & 255);
if (ret <= nont)
sample->idletime_us * 100,
sample->duration_us);
core_pct = div64_u64(sample->aperf * 100, sample->mperf);
- sample->freq = cpu->pstate.turbo_pstate * core_pct * 1000;
+ sample->freq = cpu->pstate.max_pstate * core_pct * 1000;
sample->core_pct_busy = div_s64((sample->pstate_pct_busy * core_pct),
100);
sample_time = cpu->pstate_policy->sample_rate_ms;
delay = msecs_to_jiffies(sample_time);
- delay -= jiffies % delay;
mod_timer_pinned(&cpu->timer, jiffies + delay);
}
static int __initdata no_load;
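+/*
+ * Sanity-check the MSRs this driver depends on: the min/max/turbo pstate
+ * fields must be non-zero, and APERF/MPERF must actually advance between
+ * two reads.
+ */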
+static int intel_pstate_msrs_not_valid(void)
+{
+ /* Check that all the MSRs we are using are valid. */
+ u64 aperf, mperf, tmp;
+
+ rdmsrl(MSR_IA32_APERF, aperf);
+ rdmsrl(MSR_IA32_MPERF, mperf);
+
+ if (!intel_pstate_min_pstate() ||
+ !intel_pstate_max_pstate() ||
+ !intel_pstate_turbo_pstate())
+ return -ENODEV;
+
+ rdmsrl(MSR_IA32_APERF, tmp);
+ if (!(tmp - aperf))
+ return -ENODEV;
+
+ rdmsrl(MSR_IA32_MPERF, tmp);
+ if (!(tmp - mperf))
+ return -ENODEV;
+
+ return 0;
+}
static int __init intel_pstate_init(void)
{
int cpu, rc = 0;
if (!id)
return -ENODEV;
+ if (intel_pstate_msrs_not_valid())
+ return -ENODEV;
+
pr_info("Intel P-state driver initializing.\n");
all_cpu_data = vmalloc(sizeof(void *) * num_possible_cpus());
};
static struct caam_alg_template driver_algs[] = {
- /*
- * single-pass ipsec_esp descriptor
- * authencesn(*,*) is also registered, although not present
- * explicitly here.
- */
+ /* single-pass ipsec_esp descriptor */
{
.name = "authenc(hmac(md5),cbc(aes))",
.driver_name = "authenc-hmac-md5-cbc-aes-caam",
for (i = 0; i < ARRAY_SIZE(driver_algs); i++) {
/* TODO: check if h/w supports alg */
struct caam_crypto_alg *t_alg;
- bool done = false;
-authencesn:
t_alg = caam_alg_alloc(ctrldev, &driver_algs[i]);
if (IS_ERR(t_alg)) {
err = PTR_ERR(t_alg);
dev_warn(ctrldev, "%s alg registration failed\n",
t_alg->crypto_alg.cra_driver_name);
kfree(t_alg);
- } else {
+ } else
list_add_tail(&t_alg->entry, &priv->alg_list);
- if (driver_algs[i].type == CRYPTO_ALG_TYPE_AEAD &&
- !memcmp(driver_algs[i].name, "authenc", 7) &&
- !done) {
- char *name;
-
- name = driver_algs[i].name;
- memmove(name + 10, name + 7, strlen(name) - 7);
- memcpy(name + 7, "esn", 3);
-
- name = driver_algs[i].driver_name;
- memmove(name + 10, name + 7, strlen(name) - 7);
- memcpy(name + 7, "esn", 3);
-
- done = true;
- goto authencesn;
- }
- }
}
if (!list_empty(&priv->alg_list))
dev_info(ctrldev, "%s algorithms registered in /proc/crypto\n",
#include <linux/types.h>
#include <linux/debugfs.h>
#include <linux/circ_buf.h>
-#include <linux/string.h>
#include <net/xfrm.h>
#include <crypto/algapi.h>
#include <linux/spinlock.h>
#include <linux/rtnetlink.h>
#include <linux/slab.h>
-#include <linux/string.h>
#include <crypto/algapi.h>
#include <crypto/aes.h>
};
static struct talitos_alg_template driver_algs[] = {
- /*
- * AEAD algorithms. These use a single-pass ipsec_esp descriptor.
- * authencesn(*,*) is also registered, although not present
- * explicitly here.
- */
+ /* AEAD algorithms. These use a single-pass ipsec_esp descriptor */
{ .type = CRYPTO_ALG_TYPE_AEAD,
.alg.crypto = {
.cra_name = "authenc(hmac(sha1),cbc(aes))",
if (hw_supports(dev, driver_algs[i].desc_hdr_template)) {
struct talitos_crypto_alg *t_alg;
char *name = NULL;
- bool authenc = false;
-authencesn:
t_alg = talitos_alg_alloc(dev, &driver_algs[i]);
if (IS_ERR(t_alg)) {
err = PTR_ERR(t_alg);
err = crypto_register_alg(
&t_alg->algt.alg.crypto);
name = t_alg->algt.alg.crypto.cra_driver_name;
- authenc = authenc ? !authenc :
- !(bool)memcmp(name, "authenc", 7);
break;
case CRYPTO_ALG_TYPE_AHASH:
err = crypto_register_ahash(
dev_err(dev, "%s alg registration failed\n",
name);
kfree(t_alg);
- } else {
+ } else
list_add_tail(&t_alg->entry, &priv->alg_list);
- if (authenc) {
- struct crypto_alg *alg =
- &driver_algs[i].alg.crypto;
-
- name = alg->cra_name;
- memmove(name + 10, name + 7,
- strlen(name) - 7);
- memcpy(name + 7, "esn", 3);
-
- name = alg->cra_driver_name;
- memmove(name + 10, name + 7,
- strlen(name) - 7);
- memcpy(name + 7, "esn", 3);
-
- goto authencesn;
- }
- }
}
}
if (!list_empty(&priv->alg_list))
.shutdown = ux500_cryp_shutdown,
.driver = {
.owner = THIS_MODULE,
- .name = "cryp1"
+ .name = "cryp1",
.pm = &ux500_cryp_pm,
}
};
config DW_DMAC
tristate "Synopsys DesignWare AHB DMA support"
+ depends on GENERIC_HARDIRQS
select DMA_ENGINE
default y if CPU_AT32AP7000
help
dev_vdbg(chan2dev(&atchan->chan_common), "complete all\n");
- BUG_ON(atc_chan_is_enabled(atchan));
-
/*
* Submit queued descriptors ASAP, i.e. before we go through
* the completed ones.
{
dev_vdbg(chan2dev(&atchan->chan_common), "advance_work\n");
+ if (atc_chan_is_enabled(atchan))
+ return;
+
if (list_empty(&atchan->active_list) ||
list_is_singular(&atchan->active_list)) {
atc_complete_all(atchan);
return;
spin_lock_irqsave(&atchan->lock, flags);
- if (!atc_chan_is_enabled(atchan)) {
- atc_advance_work(atchan);
- }
+ atc_advance_work(atchan);
spin_unlock_irqrestore(&atchan->lock, flags);
}
*maxburst = 0;
}
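+/*
+ * Rebase slave_id by the controller's request line base so the value
+ * programmed into the hardware is relative to its first request line.
+ */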
+static inline void convert_slave_id(struct dw_dma_chan *dwc)
+{
+ struct dw_dma *dw = to_dw_dma(dwc->chan.device);
+
+ dwc->dma_sconfig.slave_id -= dw->request_line_base;
+}
+
static int
set_runtime_config(struct dma_chan *chan, struct dma_slave_config *sconfig)
{
convert_burst(&dwc->dma_sconfig.src_maxburst);
convert_burst(&dwc->dma_sconfig.dst_maxburst);
+ convert_slave_id(dwc);
return 0;
}
if (dma_spec->args_count != 3)
return NULL;
- fargs.req = be32_to_cpup(dma_spec->args+0);
- fargs.src = be32_to_cpup(dma_spec->args+1);
- fargs.dst = be32_to_cpup(dma_spec->args+2);
+ fargs.req = dma_spec->args[0];
+ fargs.src = dma_spec->args[1];
+ fargs.dst = dma_spec->args[2];
if (WARN_ON(fargs.req >= DW_DMA_MAX_NR_REQUESTS ||
fargs.src >= dw->nr_masters ||
static int dw_probe(struct platform_device *pdev)
{
+ const struct platform_device_id *match;
struct dw_dma_platform_data *pdata;
struct resource *io;
struct dw_dma *dw;
memcpy(dw->data_width, pdata->data_width, 4);
}
+ /* Get the base request line if set */
+ match = platform_get_device_id(pdev);
+ if (match)
+ dw->request_line_base = (unsigned int)match->driver_data;
+
/* Calculate all channel mask before DMA setup */
dw->all_chan_mask = (1 << nr_channels) - 1;
#endif
static const struct platform_device_id dw_dma_ids[] = {
- { "INTL9C60", 0 },
+ /* Name, Request Line Base */
+ { "INTL9C60", (kernel_ulong_t)16 },
{ }
};
/* hardware configuration */
unsigned char nr_masters;
unsigned char data_width[4];
+ unsigned int request_line_base;
struct dw_dma_chan chan[0];
};
spin_lock_irqsave(&c->vc.lock, flags);
if (vchan_issue_pending(&c->vc) && !c->desc) {
- struct omap_dmadev *d = to_omap_dma_dev(chan->device);
- spin_lock(&d->lock);
- if (list_empty(&c->node))
- list_add_tail(&c->node, &d->pending);
- spin_unlock(&d->lock);
- tasklet_schedule(&d->task);
+ /*
+ * c->cyclic is only set for audio, in which case the DMA needs
+ * to be started without delay.
+ */
+ if (!c->cyclic) {
+ struct omap_dmadev *d = to_omap_dma_dev(chan->device);
+ spin_lock(&d->lock);
+ if (list_empty(&c->node))
+ list_add_tail(&c->node, &d->pending);
+ spin_unlock(&d->lock);
+ tasklet_schedule(&d->task);
+ } else {
+ omap_dma_start_desc(c);
+ }
}
spin_unlock_irqrestore(&c->vc.lock, flags);
}
{
struct dma_pl330_platdata *pdat;
struct dma_pl330_dmac *pdmac;
- struct dma_pl330_chan *pch;
+ struct dma_pl330_chan *pch, *_p;
struct pl330_info *pi;
struct dma_device *pd;
struct resource *res;
ret = dma_async_device_register(pd);
if (ret) {
dev_err(&adev->dev, "unable to register DMAC\n");
- goto probe_err2;
+ goto probe_err3;
+ }
+
+ if (adev->dev.of_node) {
+ ret = of_dma_controller_register(adev->dev.of_node,
+ of_dma_pl330_xlate, pdmac);
+ if (ret) {
+ dev_err(&adev->dev,
+ "unable to register DMA to the generic DT DMA helpers\n");
+ }
}
dev_info(&adev->dev,
pi->pcfg.data_bus_width / 8, pi->pcfg.num_chan,
pi->pcfg.num_peri, pi->pcfg.num_events);
- ret = of_dma_controller_register(adev->dev.of_node,
- of_dma_pl330_xlate, pdmac);
- if (ret) {
- dev_err(&adev->dev,
- "unable to register DMA to the generic DT DMA helpers\n");
- goto probe_err2;
- }
-
return 0;
+probe_err3:
+ amba_set_drvdata(adev, NULL);
+ /* Idle the DMAC */
+ list_for_each_entry_safe(pch, _p, &pdmac->ddma.channels,
+ chan.device_node) {
+
+ /* Remove the channel */
+ list_del(&pch->chan.device_node);
+
+ /* Flush the channel */
+ pl330_control(&pch->chan, DMA_TERMINATE_ALL, 0);
+ pl330_free_chan_resources(&pch->chan);
+ }
probe_err2:
pl330_del(pi);
probe_err1:
if (!pdmac)
return 0;
- of_dma_controller_free(adev->dev.of_node);
+ if (adev->dev.of_node)
+ of_dma_controller_free(adev->dev.of_node);
+ dma_async_device_unregister(&pdmac->ddma);
amba_set_drvdata(adev, NULL);
/* Idle the DMAC */
edac_dbg(1, "MC node: %d, csrow: %d\n",
pvt->mc_node_id, i);
- if (row_dct0)
+ if (row_dct0) {
nr_pages = amd64_csrow_nr_pages(pvt, 0, i);
+ csrow->channels[0]->dimm->nr_pages = nr_pages;
+ }
/* K8 has only one DCT */
- if (boot_cpu_data.x86 != 0xf && row_dct1)
- nr_pages += amd64_csrow_nr_pages(pvt, 1, i);
+ if (boot_cpu_data.x86 != 0xf && row_dct1) {
+ int row_dct1_pages = amd64_csrow_nr_pages(pvt, 1, i);
+
+ csrow->channels[1]->dimm->nr_pages = row_dct1_pages;
+ nr_pages += row_dct1_pages;
+ }
mtype = amd64_determine_memory_type(pvt, i);
dimm = csrow->channels[j]->dimm;
dimm->mtype = mtype;
dimm->edac_mode = edac_mode;
- dimm->nr_pages = nr_pages;
}
- csrow->nr_pages = nr_pages;
}
return empty;
mci->pvt_info = pvt;
mci->pdev = &pvt->F2->dev;
- mci->csbased = 1;
setup_mci_misc_attrs(mci, fam_type);
edac_dimm_info_location(dimm, location, sizeof(location));
edac_dbg(4, "%s%i: %smapped as virtual row %d, chan %d\n",
- dimm->mci->mem_is_per_rank ? "rank" : "dimm",
+ dimm->mci->csbased ? "rank" : "dimm",
number, location, dimm->csrow, dimm->cschannel);
edac_dbg(4, " dimm = %p\n", dimm);
edac_dbg(4, " dimm->label = '%s'\n", dimm->label);
memcpy(mci->layers, layers, sizeof(*layer) * n_layers);
mci->nr_csrows = tot_csrows;
mci->num_cschannel = tot_channels;
- mci->mem_is_per_rank = per_rank;
+ mci->csbased = per_rank;
/*
* Alocate and fill the csrow/channels structs
* incrementing the compat API counters
*/
edac_dbg(4, "%s csrows map: (%d,%d)\n",
- mci->mem_is_per_rank ? "rank" : "dimm",
+ mci->csbased ? "rank" : "dimm",
dimm->csrow, dimm->cschannel);
if (row == -1)
row = dimm->csrow;
* and the per-dimm/per-rank one
*/
#define DEVICE_ATTR_LEGACY(_name, _mode, _show, _store) \
- struct device_attribute dev_attr_legacy_##_name = __ATTR(_name, _mode, _show, _store)
+ static struct device_attribute dev_attr_legacy_##_name = __ATTR(_name, _mode, _show, _store)
struct dev_ch_attribute {
struct device_attribute attr;
int i;
u32 nr_pages = 0;
- if (csrow->mci->csbased)
- return sprintf(data, "%u\n", PAGES_TO_MiB(csrow->nr_pages));
-
for (i = 0; i < csrow->nr_channels; i++)
nr_pages += csrow->channels[i]->dimm->nr_pages;
return sprintf(data, "%u\n", PAGES_TO_MiB(nr_pages));
device_initialize(&dimm->dev);
dimm->dev.parent = &mci->dev;
- if (mci->mem_is_per_rank)
+ if (mci->csbased)
dev_set_name(&dimm->dev, "rank%d", index);
else
dev_set_name(&dimm->dev, "dimm%d", index);
for (csrow_idx = 0; csrow_idx < mci->nr_csrows; csrow_idx++) {
struct csrow_info *csrow = mci->csrows[csrow_idx];
- if (csrow->mci->csbased) {
- total_pages += csrow->nr_pages;
- } else {
- for (j = 0; j < csrow->nr_channels; j++) {
- struct dimm_info *dimm = csrow->channels[j]->dimm;
+ for (j = 0; j < csrow->nr_channels; j++) {
+ struct dimm_info *dimm = csrow->channels[j]->dimm;
- total_pages += dimm->nr_pages;
- }
+ total_pages += dimm->nr_pages;
}
}
/* There is only *one* pci_eisa device per machine, right ? */
static struct eisa_root_device pci_eisa_root;
-static int __init pci_eisa_init(struct pci_dev *pdev,
- const struct pci_device_id *ent)
+static int __init pci_eisa_init(struct pci_dev *pdev)
{
- int rc;
+ int rc, i;
+ struct resource *res, *bus_res = NULL;
if ((rc = pci_enable_device (pdev))) {
printk (KERN_ERR "pci_eisa : Could not enable device %s\n",
return rc;
}
+ /*
+ * The Intel 82375 PCI-EISA bridge is a subtractive-decode PCI
+ * device, so the resources available on EISA are the same as those
+ * available on the 82375 bus. This works the same as a PCI-PCI
+ * bridge in subtractive-decode mode (see pci_read_bridge_bases()).
+ * We assume other PCI-EISA bridges are similar.
+ *
+ * eisa_root_register() can only deal with a single I/O port resource,
+ * so we use the first valid I/O port resource found.
+ */
+ pci_bus_for_each_resource(pdev->bus, res, i)
+ if (res && (res->flags & IORESOURCE_IO)) {
+ bus_res = res;
+ break;
+ }
+
+ if (!bus_res) {
+ dev_err(&pdev->dev, "No resources available\n");
+ return -1;
+ }
+
pci_eisa_root.dev = &pdev->dev;
- pci_eisa_root.res = pdev->bus->resource[0];
- pci_eisa_root.bus_base_addr = pdev->bus->resource[0]->start;
+ pci_eisa_root.res = bus_res;
+ pci_eisa_root.bus_base_addr = bus_res->start;
pci_eisa_root.slots = EISA_MAX_SLOTS;
pci_eisa_root.dma_mask = pdev->dma_mask;
dev_set_drvdata(pci_eisa_root.dev, &pci_eisa_root);
return 0;
}
-static struct pci_device_id pci_eisa_pci_tbl[] = {
- { PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
- PCI_CLASS_BRIDGE_EISA << 8, 0xffff00, 0 },
- { 0, }
-};
+/*
+ * We have to call pci_eisa_init_early() before pnpacpi_init()/isapnp_init().
+ * Otherwise the PNP resources get enabled early and could prevent EISA
+ * from being initialized.
+ * We also need to make sure pci_eisa_init_early() is called after
+ * x86/pci_subsys_init(), so subsys_initcall_sync is used for it.
+ */
+static int __init pci_eisa_init_early(void)
+{
+ struct pci_dev *dev = NULL;
+ int ret;
-static struct pci_driver __refdata pci_eisa_driver = {
- .name = "pci_eisa",
- .id_table = pci_eisa_pci_tbl,
- .probe = pci_eisa_init,
-};
+ for_each_pci_dev(dev)
+ if ((dev->class >> 8) == PCI_CLASS_BRIDGE_EISA) {
+ ret = pci_eisa_init(dev);
+ if (ret)
+ return ret;
+ }
-static int __init pci_eisa_init_module (void)
-{
- return pci_register_driver (&pci_eisa_driver);
+ return 0;
}
-
-device_initcall(pci_eisa_init_module);
-MODULE_DEVICE_TABLE(pci, pci_eisa_pci_tbl);
+subsys_initcall_sync(pci_eisa_init_early);
#define DEV_NAME "max77693-muic"
#define DELAY_MS_DEFAULT 20000 /* unit: millisecond */
+/*
+ * Default register values to bring up the MUIC device.
+ * If the user doesn't set initial values for the MUIC device through
+ * platform data, the extcon-max77693 driver uses 'default_init_data' to
+ * bring up the base operation of the MAX77693 MUIC device.
+ */
+static struct max77693_reg_data default_init_data[] = {
+ {
+ /* STATUS2 - [3]ChgDetRun */
+ .addr = MAX77693_MUIC_REG_STATUS2,
+ .data = STATUS2_CHGDETRUN_MASK,
+ }, {
+ /* INTMASK1 - Unmask [3]ADC1KM,[0]ADCM */
+ .addr = MAX77693_MUIC_REG_INTMASK1,
+ .data = INTMASK1_ADC1K_MASK
+ | INTMASK1_ADC_MASK,
+ }, {
+ /* INTMASK2 - Unmask [0]ChgTypM */
+ .addr = MAX77693_MUIC_REG_INTMASK2,
+ .data = INTMASK2_CHGTYP_MASK,
+ }, {
+ /* INTMASK3 - Mask all of interrupts */
+ .addr = MAX77693_MUIC_REG_INTMASK3,
+ .data = 0x0,
+ }, {
+ /* CDETCTRL2 */
+ .addr = MAX77693_MUIC_REG_CDETCTRL2,
+ .data = CDETCTRL2_VIDRMEN_MASK
+ | CDETCTRL2_DXOVPEN_MASK,
+ },
+};
+
enum max77693_muic_adc_debounce_time {
ADC_DEBOUNCE_TIME_5MS = 0,
ADC_DEBOUNCE_TIME_10MS,
{
struct max77693_dev *max77693 = dev_get_drvdata(pdev->dev.parent);
struct max77693_platform_data *pdata = dev_get_platdata(max77693->dev);
- struct max77693_muic_platform_data *muic_pdata = pdata->muic_data;
struct max77693_muic_info *info;
+ struct max77693_reg_data *init_data;
+ int num_init_data;
int delay_jiffies;
int ret;
int i;
goto err_irq;
}
- /* Initialize MUIC register by using platform data */
- for (i = 0 ; i < muic_pdata->num_init_data ; i++) {
- enum max77693_irq_source irq_src = MAX77693_IRQ_GROUP_NR;
+
+ /* Initialize MUIC register by using platform data or default data */
+ if (pdata->muic_data) {
+ init_data = pdata->muic_data->init_data;
+ num_init_data = pdata->muic_data->num_init_data;
+ } else {
+ init_data = default_init_data;
+ num_init_data = ARRAY_SIZE(default_init_data);
+ }
+
+ for (i = 0; i < num_init_data; i++) {
+ enum max77693_irq_source irq_src
+ = MAX77693_IRQ_GROUP_NR;
max77693_write_reg(info->max77693->regmap_muic,
- muic_pdata->init_data[i].addr,
- muic_pdata->init_data[i].data);
+ init_data[i].addr,
+ init_data[i].data);
- switch (muic_pdata->init_data[i].addr) {
+ switch (init_data[i].addr) {
case MAX77693_MUIC_REG_INTMASK1:
irq_src = MUIC_INT1;
break;
if (irq_src < MAX77693_IRQ_GROUP_NR)
info->max77693->irq_masks_cur[irq_src]
- = muic_pdata->init_data[i].data;
+ = init_data[i].data;
}
- /*
- * Default usb/uart path whether UART/USB or AUX_UART/AUX_USB
- * h/w path of COMP2/COMN1 on CONTROL1 register.
- */
- if (muic_pdata->path_uart)
- info->path_uart = muic_pdata->path_uart;
- else
- info->path_uart = CONTROL1_SW_UART;
+ if (pdata->muic_data) {
+ struct max77693_muic_platform_data *muic_pdata = pdata->muic_data;
- if (muic_pdata->path_usb)
- info->path_usb = muic_pdata->path_usb;
- else
+ /*
+ * Default usb/uart path whether UART/USB or AUX_UART/AUX_USB
+ * h/w path of COMP2/COMN1 on CONTROL1 register.
+ */
+ if (muic_pdata->path_uart)
+ info->path_uart = muic_pdata->path_uart;
+ else
+ info->path_uart = CONTROL1_SW_UART;
+
+ if (muic_pdata->path_usb)
+ info->path_usb = muic_pdata->path_usb;
+ else
+ info->path_usb = CONTROL1_SW_USB;
+
+ /*
+ * Delay before detecting the cable state: use the platform
+ * value if provided, otherwise the default.
+ */
+ if (muic_pdata->detcable_delay_ms)
+ delay_jiffies =
+ msecs_to_jiffies(muic_pdata->detcable_delay_ms);
+ else
+ delay_jiffies = msecs_to_jiffies(DELAY_MS_DEFAULT);
+ } else {
info->path_usb = CONTROL1_SW_USB;
+ info->path_uart = CONTROL1_SW_UART;
+ delay_jiffies = msecs_to_jiffies(DELAY_MS_DEFAULT);
+ }
/* Set initial path for UART */
max77693_muic_set_path(info, info->path_uart, true);
* driver should notify cable state to upper layer.
*/
INIT_DELAYED_WORK(&info->wq_detcable, max77693_muic_detect_cable_wq);
- if (muic_pdata->detcable_delay_ms)
- delay_jiffies = msecs_to_jiffies(muic_pdata->detcable_delay_ms);
- else
- delay_jiffies = msecs_to_jiffies(DELAY_MS_DEFAULT);
schedule_delayed_work(&info->wq_detcable, delay_jiffies);
return ret;
goto err_irq;
}
- /* Initialize registers according to platform data */
if (pdata->muic_pdata) {
- struct max8997_muic_platform_data *mdata = info->muic_pdata;
-
- for (i = 0; i < mdata->num_init_data; i++) {
- max8997_write_reg(info->muic, mdata->init_data[i].addr,
- mdata->init_data[i].data);
+ struct max8997_muic_platform_data *muic_pdata
+ = pdata->muic_pdata;
+
+ /* Initialize registers according to platform data */
+ for (i = 0; i < muic_pdata->num_init_data; i++) {
+ max8997_write_reg(info->muic,
+ muic_pdata->init_data[i].addr,
+ muic_pdata->init_data[i].data);
}
- }
- /*
- * Default usb/uart path whether UART/USB or AUX_UART/AUX_USB
- * h/w path of COMP2/COMN1 on CONTROL1 register.
- */
- if (pdata->muic_pdata->path_uart)
- info->path_uart = pdata->muic_pdata->path_uart;
- else
- info->path_uart = CONTROL1_SW_UART;
+ /*
+ * Default usb/uart path whether UART/USB or AUX_UART/AUX_USB
+ * h/w path of COMP2/COMN1 on CONTROL1 register.
+ */
+ if (muic_pdata->path_uart)
+ info->path_uart = muic_pdata->path_uart;
+ else
+ info->path_uart = CONTROL1_SW_UART;
- if (pdata->muic_pdata->path_usb)
- info->path_usb = pdata->muic_pdata->path_usb;
- else
+ if (muic_pdata->path_usb)
+ info->path_usb = muic_pdata->path_usb;
+ else
+ info->path_usb = CONTROL1_SW_USB;
+
+ /*
+ * Delay before detecting the cable state: use the platform
+ * value if provided, otherwise the default.
+ */
+ if (muic_pdata->detcable_delay_ms)
+ delay_jiffies =
+ msecs_to_jiffies(muic_pdata->detcable_delay_ms);
+ else
+ delay_jiffies = msecs_to_jiffies(DELAY_MS_DEFAULT);
+ } else {
+ info->path_uart = CONTROL1_SW_UART;
info->path_usb = CONTROL1_SW_USB;
+ delay_jiffies = msecs_to_jiffies(DELAY_MS_DEFAULT);
+ }
/* Set initial path for UART */
max8997_muic_set_path(info, info->path_uart, true);
* driver should notify cable state to upper layer.
*/
INIT_DELAYED_WORK(&info->wq_detcable, max8997_muic_detect_cable_wq);
- if (pdata->muic_pdata->detcable_delay_ms)
- delay_jiffies = msecs_to_jiffies(pdata->muic_pdata->detcable_delay_ms);
- else
- delay_jiffies = msecs_to_jiffies(DELAY_MS_DEFAULT);
schedule_delayed_work(&info->wq_detcable, delay_jiffies);
return 0;
config EFI_VARS
tristate "EFI Variable Support via sysfs"
depends on EFI
+ select UCS2_STRING
default n
help
If you say Y here, you are able to get EFI (Extensible Firmware
Subsequent efibootmgr releases may be found at:
<http://linux.dell.com/efibootmgr>
+config EFI_VARS_PSTORE
+ bool "Register efivars backend for pstore"
+ depends on EFI_VARS && PSTORE
+ default y
+ help
+ Say Y here to enable the use of efivars as a backend for pstore. This
+ will allow writing console messages, crash dumps, or anything
+ else supported by pstore to EFI variables.
+
+config EFI_VARS_PSTORE_DEFAULT_DISABLE
+ bool "Disable using efivars as a pstore backend by default"
+ depends on EFI_VARS_PSTORE
+ default n
+ help
+ Saying Y here will disable the use of efivars as a storage
+ backend for pstore by default. This setting can be overridden
+ using the efivars module's pstore_disable parameter.
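+
+ For example, when efivars is built as a module, the default can be
+ overridden at load time with "modprobe efivars pstore_disable=0",
+ or with "efivars.pstore_disable=0" on the kernel command line when
+ built in.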
+
config EFI_PCDP
bool "Console device selection via EFI PCDP or HCDP table"
depends on ACPI && EFI && IA64
#include <linux/slab.h>
#include <linux/pstore.h>
#include <linux/ctype.h>
+#include <linux/ucs2_string.h>
#include <linux/fs.h>
#include <linux/ramfs.h>
*/
#define GUID_LEN 36
+static bool efivars_pstore_disable =
+ IS_ENABLED(CONFIG_EFI_VARS_PSTORE_DEFAULT_DISABLE);
+
+module_param_named(pstore_disable, efivars_pstore_disable, bool, 0644);
+
/*
* The maximum size of VariableName + Data = 1024
* Therefore, it's reasonable to save that much
static void efivar_update_sysfs_entries(struct work_struct *);
static DECLARE_WORK(efivar_work, efivar_update_sysfs_entries);
-
-/* Return the number of unicode characters in data */
-static unsigned long
-utf16_strnlen(efi_char16_t *s, size_t maxlength)
-{
- unsigned long length = 0;
-
- while (*s++ != 0 && length < maxlength)
- length++;
- return length;
-}
-
-static inline unsigned long
-utf16_strlen(efi_char16_t *s)
-{
- return utf16_strnlen(s, ~0UL);
-}
-
-/*
- * Return the number of bytes is the length of this string
- * Note: this is NOT the same as the number of unicode characters
- */
-static inline unsigned long
-utf16_strsize(efi_char16_t *data, unsigned long maxlength)
-{
- return utf16_strnlen(data, maxlength/sizeof(efi_char16_t)) * sizeof(efi_char16_t);
-}
-
-static inline int
-utf16_strncmp(const efi_char16_t *a, const efi_char16_t *b, size_t len)
-{
- while (1) {
- if (len == 0)
- return 0;
- if (*a < *b)
- return -1;
- if (*a > *b)
- return 1;
- if (*a == 0) /* implies *b == 0 */
- return 0;
- a++;
- b++;
- len--;
- }
-}
+static bool efivar_wq_enabled = true;
static bool
validate_device_path(struct efi_variable *var, int match, u8 *buffer,
u16 filepathlength;
int i, desclength = 0, namelen;
- namelen = utf16_strnlen(var->VariableName, sizeof(var->VariableName));
+ namelen = ucs2_strnlen(var->VariableName, sizeof(var->VariableName));
/* Either "Boot" or "Driver" followed by four digits of hex */
for (i = match; i < match+4; i++) {
* There's no stored length for the description, so it has to be
* found by hand
*/
- desclength = utf16_strsize((efi_char16_t *)(buffer + 6), len - 6) + 2;
+ desclength = ucs2_strsize((efi_char16_t *)(buffer + 6), len - 6) + 2;
/* Each boot entry must have a descriptor */
if (!desclength)
check_var_size_locked(struct efivars *efivars, u32 attributes,
unsigned long size)
{
- u64 storage_size, remaining_size, max_size;
- efi_status_t status;
const struct efivar_operations *fops = efivars->ops;
- if (!efivars->ops->query_variable_info)
+ if (!efivars->ops->query_variable_store)
return EFI_UNSUPPORTED;
- status = fops->query_variable_info(attributes, &storage_size,
- &remaining_size, &max_size);
-
- if (status != EFI_SUCCESS)
- return status;
-
- if (!storage_size || size > remaining_size || size > max_size ||
- (remaining_size - size) < (storage_size / 2))
- return EFI_OUT_OF_RESOURCES;
-
- return status;
+ return fops->query_variable_store(attributes, size);
}
spin_lock_irq(&efivars->lock);
status = check_var_size_locked(efivars, new_var->Attributes,
- new_var->DataSize + utf16_strsize(new_var->VariableName, 1024));
+ new_var->DataSize + ucs2_strsize(new_var->VariableName, 1024));
if (status == EFI_SUCCESS || status == EFI_UNSUPPORTED)
status = efivars->ops->set_variable(new_var->VariableName,
* QueryVariableInfo() isn't supported by the firmware.
*/
- varsize = datasize + utf16_strsize(var->var.VariableName, 1024);
+ varsize = datasize + ucs2_strsize(var->var.VariableName, 1024);
status = check_var_size(efivars, attributes, varsize);
if (status != EFI_SUCCESS) {
inode = NULL;
- len = utf16_strlen(entry->var.VariableName);
+ len = ucs2_strlen(entry->var.VariableName);
/* name, plus '-', plus GUID, plus NUL*/
name = kmalloc(len + 1 + GUID_LEN + 1, GFP_ATOMIC);
.create = efivarfs_create,
};
-static struct pstore_info efi_pstore_info;
-
-#ifdef CONFIG_PSTORE
+#ifdef CONFIG_EFI_VARS_PSTORE
static int efi_pstore_open(struct pstore_info *psi)
{
spin_unlock_irqrestore(&efivars->lock, flags);
- if (reason == KMSG_DUMP_OOPS)
+ if (reason == KMSG_DUMP_OOPS && efivar_wq_enabled)
schedule_work(&efivar_work);
*id = part;
if (efi_guidcmp(entry->var.VendorGuid, vendor))
continue;
- if (utf16_strncmp(entry->var.VariableName, efi_name,
- utf16_strlen(efi_name))) {
+ if (ucs2_strncmp(entry->var.VariableName, efi_name,
+ ucs2_strlen(efi_name))) {
/*
* Check if an old format,
* which doesn't support holding
for (i = 0; i < DUMP_NAME_LEN; i++)
efi_name_old[i] = name_old[i];
- if (utf16_strncmp(entry->var.VariableName, efi_name_old,
- utf16_strlen(efi_name_old)))
+ if (ucs2_strncmp(entry->var.VariableName, efi_name_old,
+ ucs2_strlen(efi_name_old)))
continue;
}
return 0;
}
-#else
-static int efi_pstore_open(struct pstore_info *psi)
-{
- return 0;
-}
-
-static int efi_pstore_close(struct pstore_info *psi)
-{
- return 0;
-}
-
-static ssize_t efi_pstore_read(u64 *id, enum pstore_type_id *type, int *count,
- struct timespec *timespec,
- char **buf, struct pstore_info *psi)
-{
- return -1;
-}
-
-static int efi_pstore_write(enum pstore_type_id type,
- enum kmsg_dump_reason reason, u64 *id,
- unsigned int part, int count, size_t size,
- struct pstore_info *psi)
-{
- return 0;
-}
-
-static int efi_pstore_erase(enum pstore_type_id type, u64 id, int count,
- struct timespec time, struct pstore_info *psi)
-{
- return 0;
-}
-#endif
static struct pstore_info efi_pstore_info = {
.owner = THIS_MODULE,
.erase = efi_pstore_erase,
};
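+/*
+ * Register efivars as a pstore backend with a 1KB record buffer; if the
+ * buffer cannot be allocated, pstore registration is silently skipped.
+ */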
+static void efivar_pstore_register(struct efivars *efivars)
+{
+ efivars->efi_pstore_info = efi_pstore_info;
+ efivars->efi_pstore_info.buf = kmalloc(4096, GFP_KERNEL);
+ if (efivars->efi_pstore_info.buf) {
+ efivars->efi_pstore_info.bufsize = 1024;
+ efivars->efi_pstore_info.data = efivars;
+ spin_lock_init(&efivars->efi_pstore_info.buf_lock);
+ pstore_register(&efivars->efi_pstore_info);
+ }
+}
+#else
+static void efivar_pstore_register(struct efivars *efivars)
+{
+ return;
+}
+#endif
+
static ssize_t efivar_create(struct file *filp, struct kobject *kobj,
struct bin_attribute *bin_attr,
char *buf, loff_t pos, size_t count)
* Does this variable already exist?
*/
list_for_each_entry_safe(search_efivar, n, &efivars->list, list) {
- strsize1 = utf16_strsize(search_efivar->var.VariableName, 1024);
- strsize2 = utf16_strsize(new_var->VariableName, 1024);
+ strsize1 = ucs2_strsize(search_efivar->var.VariableName, 1024);
+ strsize2 = ucs2_strsize(new_var->VariableName, 1024);
if (strsize1 == strsize2 &&
!memcmp(&(search_efivar->var.VariableName),
new_var->VariableName, strsize1) &&
}
status = check_var_size_locked(efivars, new_var->Attributes,
- new_var->DataSize + utf16_strsize(new_var->VariableName, 1024));
+ new_var->DataSize + ucs2_strsize(new_var->VariableName, 1024));
if (status && status != EFI_UNSUPPORTED) {
spin_unlock_irq(&efivars->lock);
/* Create the entry in sysfs. Locking is not required here */
status = efivar_create_sysfs_entry(efivars,
- utf16_strsize(new_var->VariableName,
+ ucs2_strsize(new_var->VariableName,
1024),
new_var->VariableName,
&new_var->VendorGuid);
* Does this variable already exist?
*/
list_for_each_entry_safe(search_efivar, n, &efivars->list, list) {
- strsize1 = utf16_strsize(search_efivar->var.VariableName, 1024);
- strsize2 = utf16_strsize(del_var->VariableName, 1024);
+ strsize1 = ucs2_strsize(search_efivar->var.VariableName, 1024);
+ strsize2 = ucs2_strsize(del_var->VariableName, 1024);
if (strsize1 == strsize2 &&
!memcmp(&(search_efivar->var.VariableName),
del_var->VariableName, strsize1) &&
unsigned long strsize1, strsize2;
bool found = false;
- strsize1 = utf16_strsize(variable_name, 1024);
+ strsize1 = ucs2_strsize(variable_name, 1024);
list_for_each_entry_safe(entry, n, &efivars->list, list) {
- strsize2 = utf16_strsize(entry->var.VariableName, 1024);
+ strsize2 = ucs2_strsize(entry->var.VariableName, 1024);
if (strsize1 == strsize2 &&
!memcmp(variable_name, &(entry->var.VariableName),
strsize2) &&
return found;
}
+/*
+ * Returns the size of variable_name, in bytes, including the
+ * terminating NULL character, or variable_name_size if no NULL
+ * character is found among the first variable_name_size bytes.
+ */
+static unsigned long var_name_strnsize(efi_char16_t *variable_name,
+ unsigned long variable_name_size)
+{
+ unsigned long len;
+ efi_char16_t c;
+
+ /*
+ * The variable name is, by definition, a NULL-terminated
+ * string, so make absolutely sure that variable_name_size is
+ * the value we expect it to be. If not, return the real size.
+ */
+ for (len = 2; len <= variable_name_size; len += sizeof(c)) {
+ c = variable_name[(len / sizeof(c)) - 1];
+ if (!c)
+ break;
+ }
+
+ return min(len, variable_name_size);
+}
+
static void efivar_update_sysfs_entries(struct work_struct *work)
{
struct efivars *efivars = &__efivars;
if (!found) {
kfree(variable_name);
break;
- } else
+ } else {
+ variable_name_size = var_name_strnsize(variable_name,
+ variable_name_size);
efivar_create_sysfs_entry(efivars,
variable_name_size,
variable_name, &vendor);
+ }
}
}
}
EXPORT_SYMBOL_GPL(unregister_efivars);
+/*
+ * Print a warning when duplicate EFI variables are encountered and
+ * disable the sysfs workqueue since the firmware is buggy.
+ */
+static void dup_variable_bug(efi_char16_t *s16, efi_guid_t *vendor_guid,
+ unsigned long len16)
+{
+ size_t i, len8 = len16 / sizeof(efi_char16_t);
+ char *s8;
+
+ /*
+ * Disable the workqueue since the algorithm it uses for
+ * detecting new variables won't work with this buggy
+ * implementation of GetNextVariableName().
+ */
+ efivar_wq_enabled = false;
+
+ s8 = kzalloc(len8, GFP_KERNEL);
+ if (!s8)
+ return;
+
+ for (i = 0; i < len8; i++)
+ s8[i] = s16[i];
+
+ printk(KERN_WARNING "efivars: duplicate variable: %s-%pUl\n",
+ s8, vendor_guid);
+ kfree(s8);
+}
+
int register_efivars(struct efivars *efivars,
const struct efivar_operations *ops,
struct kobject *parent_kobj)
&vendor_guid);
switch (status) {
case EFI_SUCCESS:
+ variable_name_size = var_name_strnsize(variable_name,
+ variable_name_size);
+
+ /*
+ * Some firmware implementations return the
+ * same variable name on multiple calls to
+ * get_next_variable(). Terminate the loop
+ * immediately as there is no guarantee that
+ * we'll ever see a different variable name,
+ * and may end up looping here forever.
+ */
+ if (variable_is_present(variable_name, &vendor_guid)) {
+ dup_variable_bug(variable_name, &vendor_guid,
+ variable_name_size);
+ status = EFI_NOT_FOUND;
+ break;
+ }
+
efivar_create_sysfs_entry(efivars,
variable_name_size,
variable_name,
if (error)
unregister_efivars(efivars);
- efivars->efi_pstore_info = efi_pstore_info;
-
- efivars->efi_pstore_info.buf = kmalloc(4096, GFP_KERNEL);
- if (efivars->efi_pstore_info.buf) {
- efivars->efi_pstore_info.bufsize = 1024;
- efivars->efi_pstore_info.data = efivars;
- spin_lock_init(&efivars->efi_pstore_info.buf_lock);
- pstore_register(&efivars->efi_pstore_info);
- }
+ if (!efivars_pstore_disable)
+ efivar_pstore_register(efivars);
register_filesystem(&efivarfs_type);
ops.get_variable = efi.get_variable;
ops.set_variable = efi.set_variable;
ops.get_next_variable = efi.get_next_variable;
- ops.query_variable_info = efi.query_variable_info;
+ ops.query_variable_store = efi_query_variable_store;
error = register_efivars(&__efivars, &ops, efi_kobj);
if (error)
* If it can't be trusted, assume that the pin can be used as a GPIO.
*/
if (ichx_priv.desc->use_sel_ignore[nr / 32] & (1 << (nr & 0x1f)))
- return 1;
+ return 0;
return ichx_read_bit(GPIO_USE_SEL, nr) ? 0 : -ENODEV;
}
chip->gpio_chip.ngpio,
irq_base,
&pca953x_irq_simple_ops,
- NULL);
+ chip);
if (!chip->domain)
return -ENODEV;
.xlate = irq_domain_xlate_twocell,
};
-static int stmpe_gpio_irq_init(struct stmpe_gpio *stmpe_gpio)
+static int stmpe_gpio_irq_init(struct stmpe_gpio *stmpe_gpio,
+ struct device_node *np)
{
- int base = stmpe_gpio->irq_base;
+ int base = 0;
- stmpe_gpio->domain = irq_domain_add_simple(NULL,
+ if (!np)
+ base = stmpe_gpio->irq_base;
+
+ stmpe_gpio->domain = irq_domain_add_simple(np,
stmpe_gpio->chip.ngpio, base,
&stmpe_gpio_irq_simple_ops, stmpe_gpio);
if (!stmpe_gpio->domain) {
stmpe_gpio->chip = template_chip;
stmpe_gpio->chip.ngpio = stmpe->num_gpios;
stmpe_gpio->chip.dev = &pdev->dev;
+#ifdef CONFIG_OF
+ stmpe_gpio->chip.of_node = np;
+#endif
stmpe_gpio->chip.base = pdata ? pdata->gpio_base : -1;
if (pdata)
goto out_free;
if (irq >= 0) {
- ret = stmpe_gpio_irq_init(stmpe_gpio);
+ ret = stmpe_gpio_irq_init(stmpe_gpio, np);
if (ret)
goto out_disable;
if (!np)
return;
- do {
+ for (;; index++) {
ret = of_parse_phandle_with_args(np, "gpio-ranges",
"#gpio-range-cells", index, &pinspec);
if (ret)
if (ret)
break;
-
- } while (index++);
+ }
}
#else
fb = dev->mode_config.funcs->fb_create(dev, file_priv, &r);
if (IS_ERR(fb)) {
DRM_DEBUG_KMS("could not create framebuffer\n");
- drm_modeset_unlock_all(dev);
return PTR_ERR(fb);
}
fb = dev->mode_config.funcs->fb_create(dev, file_priv, r);
if (IS_ERR(fb)) {
DRM_DEBUG_KMS("could not create framebuffer\n");
- drm_modeset_unlock_all(dev);
return PTR_ERR(fb);
}
unsigned vblank = (pt->vactive_vblank_hi & 0xf) << 8 | pt->vblank_lo;
unsigned hsync_offset = (pt->hsync_vsync_offset_pulse_width_hi & 0xc0) << 2 | pt->hsync_offset_lo;
unsigned hsync_pulse_width = (pt->hsync_vsync_offset_pulse_width_hi & 0x30) << 4 | pt->hsync_pulse_width_lo;
- unsigned vsync_offset = (pt->hsync_vsync_offset_pulse_width_hi & 0xc) >> 2 | pt->vsync_offset_pulse_width_lo >> 4;
+ unsigned vsync_offset = (pt->hsync_vsync_offset_pulse_width_hi & 0xc) << 2 | pt->vsync_offset_pulse_width_lo >> 4;
unsigned vsync_pulse_width = (pt->hsync_vsync_offset_pulse_width_hi & 0x3) << 4 | (pt->vsync_offset_pulse_width_lo & 0xf);
/* ignore tiny modes */
}
mode->type = DRM_MODE_TYPE_DRIVER;
+ mode->vrefresh = drm_mode_vrefresh(mode);
drm_mode_set_name(mode);
return mode;
if (!fb_helper->fb)
return 0;
- drm_modeset_lock_all(dev);
+ mutex_lock(&fb_helper->dev->mode_config.mutex);
if (!drm_fb_helper_is_bound(fb_helper)) {
fb_helper->delayed_hotplug = true;
- drm_modeset_unlock_all(dev);
+ mutex_unlock(&fb_helper->dev->mode_config.mutex);
return 0;
}
DRM_DEBUG_KMS("\n");
count = drm_fb_helper_probe_connector_modes(fb_helper, max_width,
max_height);
+ mutex_unlock(&fb_helper->dev->mode_config.mutex);
+
+ drm_modeset_lock_all(dev);
drm_setup_crtcs(fb_helper);
drm_modeset_unlock_all(dev);
-
drm_fb_helper_set_par(fb_helper->fbdev);
return 0;
int retcode = 0;
int need_setup = 0;
struct address_space *old_mapping;
+ struct address_space *old_imapping;
minor = idr_find(&drm_minors_idr, minor_id);
if (!minor)
if (!dev->open_count++)
need_setup = 1;
mutex_lock(&dev->struct_mutex);
+ old_imapping = inode->i_mapping;
old_mapping = dev->dev_mapping;
if (old_mapping == NULL)
dev->dev_mapping = &inode->i_data;
err_undo:
mutex_lock(&dev->struct_mutex);
- filp->f_mapping = old_mapping;
- inode->i_mapping = old_mapping;
+ filp->f_mapping = old_imapping;
+ inode->i_mapping = old_imapping;
iput(container_of(dev->dev_mapping, struct inode, i_data));
dev->dev_mapping = old_mapping;
mutex_unlock(&dev->struct_mutex);
/* position control register for hardware window 0, 2 ~ 4.*/
#define VIDOSD_A(win) (VIDOSD_BASE + 0x00 + (win) * 16)
#define VIDOSD_B(win) (VIDOSD_BASE + 0x04 + (win) * 16)
-/* size control register for hardware window 0. */
-#define VIDOSD_C_SIZE_W0 (VIDOSD_BASE + 0x08)
-/* alpha control register for hardware window 1 ~ 4. */
-#define VIDOSD_C(win) (VIDOSD_BASE + 0x18 + (win) * 16)
-/* size control register for hardware window 1 ~ 4. */
+/*
+ * size control register for hardware window 0 and alpha control register
+ * for hardware windows 1 ~ 4
+ */
+#define VIDOSD_C(win) (VIDOSD_BASE + 0x08 + (win) * 16)
+/* size control register for hardware windows 1 ~ 2. */
#define VIDOSD_D(win) (VIDOSD_BASE + 0x0C + (win) * 16)
#define VIDWx_BUF_START(win, buf) (VIDW_BUF_START(buf) + (win) * 8)
#define VIDWx_BUF_SIZE(win, buf) (VIDW_BUF_SIZE(buf) + (win) * 4)
/* color key control register for hardware window 1 ~ 4. */
-#define WKEYCON0_BASE(x) ((WKEYCON0 + 0x140) + (x * 8))
+#define WKEYCON0_BASE(x) ((WKEYCON0 + 0x140) + ((x - 1) * 8))
/* color key value register for hardware window 1 ~ 4. */
-#define WKEYCON1_BASE(x) ((WKEYCON1 + 0x140) + (x * 8))
+#define WKEYCON1_BASE(x) ((WKEYCON1 + 0x140) + ((x - 1) * 8))
/* FIMD has totally five hardware windows. */
#define WINDOWS_NR 5
#ifdef CONFIG_OF
static const struct of_device_id fimd_driver_dt_match[] = {
- { .compatible = "samsung,exynos4-fimd",
+ { .compatible = "samsung,exynos4210-fimd",
.data = &exynos4_fimd_driver_data },
- { .compatible = "samsung,exynos5-fimd",
+ { .compatible = "samsung,exynos5250-fimd",
.data = &exynos5_fimd_driver_data },
{},
};
if (win != 3 && win != 4) {
u32 offset = VIDOSD_D(win);
if (win == 0)
- offset = VIDOSD_C_SIZE_W0;
+ offset = VIDOSD_C(win);
val = win_data->ovl_width * win_data->ovl_height;
writel(val, ctx->regs + offset);
/* registers for base address */
#define G2D_SRC_BASE_ADDR 0x0304
+#define G2D_SRC_COLOR_MODE 0x030C
+#define G2D_SRC_LEFT_TOP 0x0310
+#define G2D_SRC_RIGHT_BOTTOM 0x0314
#define G2D_SRC_PLANE2_BASE_ADDR 0x0318
#define G2D_DST_BASE_ADDR 0x0404
+#define G2D_DST_COLOR_MODE 0x040C
+#define G2D_DST_LEFT_TOP 0x0410
+#define G2D_DST_RIGHT_BOTTOM 0x0414
#define G2D_DST_PLANE2_BASE_ADDR 0x0418
#define G2D_PAT_BASE_ADDR 0x0500
#define G2D_MSK_BASE_ADDR 0x0520
#define G2D_DMA_LIST_DONE_COUNT_OFFSET 17
/* G2D_DMA_HOLD_CMD */
-#define G2D_USET_HOLD (1 << 2)
+#define G2D_USER_HOLD (1 << 2)
#define G2D_LIST_HOLD (1 << 1)
#define G2D_BITBLT_HOLD (1 << 0)
#define G2D_START_NHOLT (1 << 1)
#define G2D_START_BITBLT (1 << 0)
+/* buffer color format */
+#define G2D_FMT_XRGB8888 0
+#define G2D_FMT_ARGB8888 1
+#define G2D_FMT_RGB565 2
+#define G2D_FMT_XRGB1555 3
+#define G2D_FMT_ARGB1555 4
+#define G2D_FMT_XRGB4444 5
+#define G2D_FMT_ARGB4444 6
+#define G2D_FMT_PACKED_RGB888 7
+#define G2D_FMT_A8 11
+#define G2D_FMT_L8 12
+
+/* buffer valid length */
+#define G2D_LEN_MIN 1
+#define G2D_LEN_MAX 8000
+
#define G2D_CMDLIST_SIZE (PAGE_SIZE / 4)
#define G2D_CMDLIST_NUM 64
#define G2D_CMDLIST_POOL_SIZE (G2D_CMDLIST_SIZE * G2D_CMDLIST_NUM)
#define G2D_CMDLIST_DATA_NUM (G2D_CMDLIST_SIZE / sizeof(u32) - 2)
-#define MAX_BUF_ADDR_NR 6
-
/* maximum buffer pool size of userptr is 64MB as default */
#define MAX_POOL (64 * 1024 * 1024)
BUF_TYPE_USERPTR,
};
+enum g2d_reg_type {
+ REG_TYPE_NONE = -1,
+ REG_TYPE_SRC,
+ REG_TYPE_SRC_PLANE2,
+ REG_TYPE_DST,
+ REG_TYPE_DST_PLANE2,
+ REG_TYPE_PAT,
+ REG_TYPE_MSK,
+ MAX_REG_TYPE_NR
+};
+
/* cmdlist data structure */
struct g2d_cmdlist {
u32 head;
u32 last; /* last data offset */
};
+/*
+ * A structure describing a buffer
+ *
+ * @format: color format
+ * @left_x: the x coordinate of the top-left corner
+ * @top_y: the y coordinate of the top-left corner
+ * @right_x: the x coordinate of the bottom-right corner
+ * @bottom_y: the y coordinate of the bottom-right corner
+ *
+ */
+struct g2d_buf_desc {
+ unsigned int format;
+ unsigned int left_x;
+ unsigned int top_y;
+ unsigned int right_x;
+ unsigned int bottom_y;
+};
+
+/*
+ * A structure holding buffer information
+ *
+ * @map_nr: the number of mapped buffers
+ * @reg_types: stores the register type in the order of the requested commands
+ * @handles: stores the buffer handle in its reg_type position
+ * @types: stores the buffer type in its reg_type position
+ * @descs: stores the buffer description in its reg_type position
+ *
+ */
+struct g2d_buf_info {
+ unsigned int map_nr;
+ enum g2d_reg_type reg_types[MAX_REG_TYPE_NR];
+ unsigned long handles[MAX_REG_TYPE_NR];
+ unsigned int types[MAX_REG_TYPE_NR];
+ struct g2d_buf_desc descs[MAX_REG_TYPE_NR];
+};
+
struct drm_exynos_pending_g2d_event {
struct drm_pending_event base;
struct drm_exynos_g2d_event event;
bool in_pool;
bool out_of_list;
};
-
struct g2d_cmdlist_node {
struct list_head list;
struct g2d_cmdlist *cmdlist;
- unsigned int map_nr;
- unsigned long handles[MAX_BUF_ADDR_NR];
- unsigned int obj_type[MAX_BUF_ADDR_NR];
dma_addr_t dma_addr;
+ struct g2d_buf_info buf_info;
struct drm_exynos_pending_g2d_event *event;
};
struct exynos_drm_subdrv *subdrv = &g2d->subdrv;
int nr;
int ret;
+ struct g2d_buf_info *buf_info;
init_dma_attrs(&g2d->cmdlist_dma_attrs);
dma_set_attr(DMA_ATTR_WRITE_COMBINE, &g2d->cmdlist_dma_attrs);
}
for (nr = 0; nr < G2D_CMDLIST_NUM; nr++) {
+ unsigned int i;
+
node[nr].cmdlist =
g2d->cmdlist_pool_virt + nr * G2D_CMDLIST_SIZE;
node[nr].dma_addr =
g2d->cmdlist_pool + nr * G2D_CMDLIST_SIZE;
+ buf_info = &node[nr].buf_info;
+ for (i = 0; i < MAX_REG_TYPE_NR; i++)
+ buf_info->reg_types[i] = REG_TYPE_NONE;
+
list_add_tail(&node[nr].list, &g2d->free_cmdlist);
}
DMA_BIDIRECTIONAL);
if (ret < 0) {
DRM_ERROR("failed to map sgt with dma region.\n");
- goto err_free_sgt;
+ goto err_sg_free_table;
}
g2d_userptr->dma_addr = sgt->sgl[0].dma_address;
return &g2d_userptr->dma_addr;
-err_free_sgt:
+err_sg_free_table:
sg_free_table(sgt);
+
+err_free_sgt:
kfree(sgt);
sgt = NULL;
g2d->current_pool = 0;
}
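+/* Map a G2D command register offset to the buffer slot it configures. */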
+static enum g2d_reg_type g2d_get_reg_type(int reg_offset)
+{
+ enum g2d_reg_type reg_type;
+
+ switch (reg_offset) {
+ case G2D_SRC_BASE_ADDR:
+ case G2D_SRC_COLOR_MODE:
+ case G2D_SRC_LEFT_TOP:
+ case G2D_SRC_RIGHT_BOTTOM:
+ reg_type = REG_TYPE_SRC;
+ break;
+ case G2D_SRC_PLANE2_BASE_ADDR:
+ reg_type = REG_TYPE_SRC_PLANE2;
+ break;
+ case G2D_DST_BASE_ADDR:
+ case G2D_DST_COLOR_MODE:
+ case G2D_DST_LEFT_TOP:
+ case G2D_DST_RIGHT_BOTTOM:
+ reg_type = REG_TYPE_DST;
+ break;
+ case G2D_DST_PLANE2_BASE_ADDR:
+ reg_type = REG_TYPE_DST_PLANE2;
+ break;
+ case G2D_PAT_BASE_ADDR:
+ reg_type = REG_TYPE_PAT;
+ break;
+ case G2D_MSK_BASE_ADDR:
+ reg_type = REG_TYPE_MSK;
+ break;
+ default:
+ reg_type = REG_TYPE_NONE;
+ DRM_ERROR("Unknown register offset![%d]\n", reg_offset);
+ break;
+ }
+
+ return reg_type;
+}
+
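+/* Bytes per pixel for each G2D color format; A8/L8 default to 1. */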
+static unsigned long g2d_get_buf_bpp(unsigned int format)
+{
+ unsigned long bpp;
+
+ switch (format) {
+ case G2D_FMT_XRGB8888:
+ case G2D_FMT_ARGB8888:
+ bpp = 4;
+ break;
+ case G2D_FMT_RGB565:
+ case G2D_FMT_XRGB1555:
+ case G2D_FMT_ARGB1555:
+ case G2D_FMT_XRGB4444:
+ case G2D_FMT_ARGB4444:
+ bpp = 2;
+ break;
+ case G2D_FMT_PACKED_RGB888:
+ bpp = 3;
+ break;
+ default:
+ bpp = 1;
+ break;
+ }
+
+ return bpp;
+}
+
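+/*
+ * Validate a source/destination rectangle: width and height must lie in
+ * [G2D_LEN_MIN, G2D_LEN_MAX] and the resulting byte size must fit within
+ * the backing buffer.
+ */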
+static bool g2d_check_buf_desc_is_valid(struct g2d_buf_desc *buf_desc,
+ enum g2d_reg_type reg_type,
+ unsigned long size)
+{
+ unsigned int width, height;
+ unsigned long area;
+
+ /*
+ * Check source and destination buffers only;
+ * all other buffer types are always considered valid.
+ */
+ if (reg_type != REG_TYPE_SRC && reg_type != REG_TYPE_DST)
+ return true;
+
+ width = buf_desc->right_x - buf_desc->left_x;
+ if (width < G2D_LEN_MIN || width > G2D_LEN_MAX) {
+ DRM_ERROR("width[%u] is out of range!\n", width);
+ return false;
+ }
+
+ height = buf_desc->bottom_y - buf_desc->top_y;
+ if (height < G2D_LEN_MIN || height > G2D_LEN_MAX) {
+ DRM_ERROR("height[%u] is out of range!\n", height);
+ return false;
+ }
+
+ area = (unsigned long)width * (unsigned long)height *
+ g2d_get_buf_bpp(buf_desc->format);
+ if (area > size) {
+ DRM_ERROR("area[%lu] is out of range[%lu]!\n", area, size);
+ return false;
+ }
+
+ return true;
+}
+
static int g2d_map_cmdlist_gem(struct g2d_data *g2d,
struct g2d_cmdlist_node *node,
struct drm_device *drm_dev,
struct drm_file *file)
{
struct g2d_cmdlist *cmdlist = node->cmdlist;
+ struct g2d_buf_info *buf_info = &node->buf_info;
int offset;
+ int ret;
int i;
- for (i = 0; i < node->map_nr; i++) {
+ for (i = 0; i < buf_info->map_nr; i++) {
+ struct g2d_buf_desc *buf_desc;
+ enum g2d_reg_type reg_type;
+ int reg_pos;
unsigned long handle;
dma_addr_t *addr;
- offset = cmdlist->last - (i * 2 + 1);
- handle = cmdlist->data[offset];
+ reg_pos = cmdlist->last - 2 * (i + 1);
+
+ offset = cmdlist->data[reg_pos];
+ handle = cmdlist->data[reg_pos + 1];
+
+ reg_type = g2d_get_reg_type(offset);
+ if (reg_type == REG_TYPE_NONE) {
+ ret = -EFAULT;
+ goto err;
+ }
+
+ buf_desc = &buf_info->descs[reg_type];
+
+ if (buf_info->types[reg_type] == BUF_TYPE_GEM) {
+ unsigned long size;
+
+ size = exynos_drm_gem_get_size(drm_dev, handle, file);
+ if (!size) {
+ ret = -EFAULT;
+ goto err;
+ }
+
+ if (!g2d_check_buf_desc_is_valid(buf_desc, reg_type,
+ size)) {
+ ret = -EFAULT;
+ goto err;
+ }
- if (node->obj_type[i] == BUF_TYPE_GEM) {
addr = exynos_drm_gem_get_dma_addr(drm_dev, handle,
file);
if (IS_ERR(addr)) {
- node->map_nr = i;
- return -EFAULT;
+ ret = -EFAULT;
+ goto err;
}
} else {
struct drm_exynos_g2d_userptr g2d_userptr;
if (copy_from_user(&g2d_userptr, (void __user *)handle,
sizeof(struct drm_exynos_g2d_userptr))) {
- node->map_nr = i;
- return -EFAULT;
+ ret = -EFAULT;
+ goto err;
+ }
+
+ if (!g2d_check_buf_desc_is_valid(buf_desc, reg_type,
+ g2d_userptr.size)) {
+ ret = -EFAULT;
+ goto err;
}
addr = g2d_userptr_get_dma_addr(drm_dev,
file,
&handle);
if (IS_ERR(addr)) {
- node->map_nr = i;
- return -EFAULT;
+ ret = -EFAULT;
+ goto err;
}
}
- cmdlist->data[offset] = *addr;
- node->handles[i] = handle;
+ cmdlist->data[reg_pos + 1] = *addr;
+ buf_info->reg_types[i] = reg_type;
+ buf_info->handles[reg_type] = handle;
}
return 0;
+
+err:
+ buf_info->map_nr = i;
+ return ret;
}
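
The mapping loop above relies on the command list storing one (register, value) pair per mapped buffer at the tail of the buffer, so pair i sits at last - 2 * (i + 1), with the register offset in the first word and the handle or userptr in the second. A small standalone sketch of that addressing, with illustrative data rather than a real cmdlist:

    #include <stdio.h>

    int main(void)
    {
	unsigned long data[] = {
		0x1000, 0xaaaa,	/* pair 2 (oldest of the mapped pairs) */
		0x2000, 0xbbbb,	/* pair 1 */
		0x3000, 0xcccc,	/* pair 0 (newest) */
	};
	int last = 6;		/* index one past the final word */
	int map_nr = 3;

	for (int i = 0; i < map_nr; i++) {
		int reg_pos = last - 2 * (i + 1);

		printf("pair %d: reg=0x%lx handle=0x%lx\n",
		       i, data[reg_pos], data[reg_pos + 1]);
	}
	return 0;
    }
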
static void g2d_unmap_cmdlist_gem(struct g2d_data *g2d,
struct drm_file *filp)
{
struct exynos_drm_subdrv *subdrv = &g2d->subdrv;
+ struct g2d_buf_info *buf_info = &node->buf_info;
int i;
- for (i = 0; i < node->map_nr; i++) {
- unsigned long handle = node->handles[i];
+ for (i = 0; i < buf_info->map_nr; i++) {
+ struct g2d_buf_desc *buf_desc;
+ enum g2d_reg_type reg_type;
+ unsigned long handle;
+
+ reg_type = buf_info->reg_types[i];
+
+ buf_desc = &buf_info->descs[reg_type];
+ handle = buf_info->handles[reg_type];
- if (node->obj_type[i] == BUF_TYPE_GEM)
+ if (buf_info->types[reg_type] == BUF_TYPE_GEM)
exynos_drm_gem_put_dma_addr(subdrv->drm_dev, handle,
filp);
else
g2d_userptr_put_dma_addr(subdrv->drm_dev, handle,
false);
- node->handles[i] = 0;
+ buf_info->reg_types[i] = REG_TYPE_NONE;
+ buf_info->handles[reg_type] = 0;
+ buf_info->types[reg_type] = 0;
+ memset(buf_desc, 0x00, sizeof(*buf_desc));
}
- node->map_nr = 0;
+ buf_info->map_nr = 0;
}
static void g2d_dma_start(struct g2d_data *g2d,
pm_runtime_get_sync(g2d->dev);
clk_enable(g2d->gate_clk);
- /* interrupt enable */
- writel_relaxed(G2D_INTEN_ACF | G2D_INTEN_UCF | G2D_INTEN_GCF,
- g2d->regs + G2D_INTEN);
-
writel_relaxed(node->dma_addr, g2d->regs + G2D_DMA_SFR_BASE_ADDR);
writel_relaxed(G2D_DMA_START, g2d->regs + G2D_DMA_COMMAND);
}
struct g2d_data *g2d = container_of(work, struct g2d_data,
runqueue_work);
-
mutex_lock(&g2d->runqueue_mutex);
clk_disable(g2d->gate_clk);
pm_runtime_put_sync(g2d->dev);
int i;
for (i = 0; i < nr; i++) {
- index = cmdlist->last - 2 * (i + 1);
+ struct g2d_buf_info *buf_info = &node->buf_info;
+ struct g2d_buf_desc *buf_desc;
+ enum g2d_reg_type reg_type;
+ unsigned long value;
- if (for_addr) {
- /* check userptr buffer type. */
- reg_offset = (cmdlist->data[index] &
- ~0x7fffffff) >> 31;
- if (reg_offset) {
- node->obj_type[i] = BUF_TYPE_USERPTR;
- cmdlist->data[index] &= ~G2D_BUF_USERPTR;
- }
- }
+ index = cmdlist->last - 2 * (i + 1);
reg_offset = cmdlist->data[index] & ~0xfffff000;
-
if (reg_offset < G2D_VALID_START || reg_offset > G2D_VALID_END)
goto err;
if (reg_offset % 4)
if (!for_addr)
goto err;
- if (node->obj_type[i] != BUF_TYPE_USERPTR)
- node->obj_type[i] = BUF_TYPE_GEM;
+ reg_type = g2d_get_reg_type(reg_offset);
+ if (reg_type == REG_TYPE_NONE)
+ goto err;
+
+ /* check userptr buffer type. */
+ if ((cmdlist->data[index] & ~0x7fffffff) >> 31) {
+ buf_info->types[reg_type] = BUF_TYPE_USERPTR;
+ cmdlist->data[index] &= ~G2D_BUF_USERPTR;
+ } else
+ buf_info->types[reg_type] = BUF_TYPE_GEM;
+ break;
+ case G2D_SRC_COLOR_MODE:
+ case G2D_DST_COLOR_MODE:
+ if (for_addr)
+ goto err;
+
+ reg_type = g2d_get_reg_type(reg_offset);
+ if (reg_type == REG_TYPE_NONE)
+ goto err;
+
+ buf_desc = &buf_info->descs[reg_type];
+ value = cmdlist->data[index + 1];
+
+ buf_desc->format = value & 0xf;
+ break;
+ case G2D_SRC_LEFT_TOP:
+ case G2D_DST_LEFT_TOP:
+ if (for_addr)
+ goto err;
+
+ reg_type = g2d_get_reg_type(reg_offset);
+ if (reg_type == REG_TYPE_NONE)
+ goto err;
+
+ buf_desc = &buf_info->descs[reg_type];
+ value = cmdlist->data[index + 1];
+
+ buf_desc->left_x = value & 0x1fff;
+ buf_desc->top_y = (value & 0x1fff0000) >> 16;
+ break;
+ case G2D_SRC_RIGHT_BOTTOM:
+ case G2D_DST_RIGHT_BOTTOM:
+ if (for_addr)
+ goto err;
+
+ reg_type = g2d_get_reg_type(reg_offset);
+ if (reg_type == REG_TYPE_NONE)
+ goto err;
+
+ buf_desc = &buf_info->descs[reg_type];
+ value = cmdlist->data[index + 1];
+
+ buf_desc->right_x = value & 0x1fff;
+ buf_desc->bottom_y = (value & 0x1fff0000) >> 16;
break;
default:
if (for_addr)
cmdlist->data[cmdlist->last++] = G2D_SRC_BASE_ADDR;
cmdlist->data[cmdlist->last++] = 0;
+ /*
+	 * If the user wants a G2D interrupt event once the current
+	 * command list finishes, the 'LIST_HOLD' command should be
+	 * written to DMA_HOLD_CMD_REG and the GCF bit set in the
+	 * INTEN register.
+	 * Otherwise only the ACF bit should be set in INTEN, so that
+	 * a single interrupt occurs after all command lists have
+	 * completed.
+	 */
if (node->event) {
+ cmdlist->data[cmdlist->last++] = G2D_INTEN;
+ cmdlist->data[cmdlist->last++] = G2D_INTEN_ACF | G2D_INTEN_GCF;
cmdlist->data[cmdlist->last++] = G2D_DMA_HOLD_CMD;
cmdlist->data[cmdlist->last++] = G2D_LIST_HOLD;
+ } else {
+ cmdlist->data[cmdlist->last++] = G2D_INTEN;
+ cmdlist->data[cmdlist->last++] = G2D_INTEN_ACF;
}
/* Check size of cmdlist: last 2 is about G2D_BITBLT_START */
if (ret < 0)
goto err_free_event;
- node->map_nr = req->cmd_buf_nr;
+ node->buf_info.map_nr = req->cmd_buf_nr;
if (req->cmd_buf_nr) {
struct drm_exynos_g2d_cmd *cmd_buf;
exynos_gem_obj = NULL;
}
+unsigned long exynos_drm_gem_get_size(struct drm_device *dev,
+ unsigned int gem_handle,
+ struct drm_file *file_priv)
+{
+ struct exynos_drm_gem_obj *exynos_gem_obj;
+ struct drm_gem_object *obj;
+
+ obj = drm_gem_object_lookup(dev, file_priv, gem_handle);
+ if (!obj) {
+ DRM_ERROR("failed to lookup gem object.\n");
+ return 0;
+ }
+
+ exynos_gem_obj = to_exynos_gem_obj(obj);
+
+ drm_gem_object_unreference_unlocked(obj);
+
+ return exynos_gem_obj->buffer->size;
+}
+
struct exynos_drm_gem_obj *exynos_drm_gem_init(struct drm_device *dev,
unsigned long size)
{
int exynos_drm_gem_get_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
+/* get the buffer size for a given gem handle. */
+unsigned long exynos_drm_gem_get_size(struct drm_device *dev,
+ unsigned int gem_handle,
+ struct drm_file *file_priv);
+
/* initialize gem object. */
int exynos_drm_gem_init_object(struct drm_gem_object *obj);
}
edid_len = (1 + ctx->raw_edid->extensions) * EDID_LENGTH;
- edid = kzalloc(edid_len, GFP_KERNEL);
+ edid = kmemdup(ctx->raw_edid, edid_len, GFP_KERNEL);
if (!edid) {
DRM_DEBUG_KMS("failed to allocate edid\n");
return ERR_PTR(-ENOMEM);
}
- memcpy(edid, ctx->raw_edid, edid_len);
return edid;
}
return -EINVAL;
}
edid_len = (1 + raw_edid->extensions) * EDID_LENGTH;
- ctx->raw_edid = kzalloc(edid_len, GFP_KERNEL);
+ ctx->raw_edid = kmemdup(raw_edid, edid_len, GFP_KERNEL);
if (!ctx->raw_edid) {
DRM_DEBUG_KMS("failed to allocate raw_edid.\n");
return -ENOMEM;
}
- memcpy(ctx->raw_edid, raw_edid, edid_len);
} else {
/*
* with connection = 0, free raw_edid
mixer_ctx->win_data[win].enabled = false;
}
-int mixer_check_timing(void *ctx, struct fb_videomode *timing)
+static int mixer_check_timing(void *ctx, struct fb_videomode *timing)
{
struct mixer_context *mixer_ctx = ctx;
u32 w, h;
"Enable Haswell and ValleyView Support. "
"(default: false)");
+int i915_disable_power_well __read_mostly = 0;
+module_param_named(disable_power_well, i915_disable_power_well, int, 0600);
+MODULE_PARM_DESC(disable_power_well,
+ "Disable the power well when possible (default: false)");
+
static struct drm_driver driver;
extern int intel_agp_enabled;
extern bool i915_enable_hangcheck __read_mostly;
extern int i915_enable_ppgtt __read_mostly;
extern unsigned int i915_preliminary_hw_support __read_mostly;
+extern int i915_disable_power_well __read_mostly;
extern int i915_suspend(struct drm_device *dev, pm_message_t state);
extern int i915_resume(struct drm_device *dev);
if (eb == NULL) {
int size = args->buffer_count;
int count = PAGE_SIZE / sizeof(struct hlist_head) / 2;
- BUILD_BUG_ON(!is_power_of_2(PAGE_SIZE / sizeof(struct hlist_head)));
+ BUILD_BUG_ON_NOT_POWER_OF_2(PAGE_SIZE / sizeof(struct hlist_head));
while (count > 2*size)
count >>= 1;
eb = kzalloc(count*sizeof(struct hlist_head) +
struct intel_crt {
struct intel_encoder base;
+ /* DPMS state is stored in the connector, which we need in the
+ * encoder's enable/disable callbacks */
+ struct intel_connector *connector;
bool force_hotplug_required;
u32 adpa_reg;
};
return true;
}
-static void intel_disable_crt(struct intel_encoder *encoder)
-{
- struct drm_i915_private *dev_priv = encoder->base.dev->dev_private;
- struct intel_crt *crt = intel_encoder_to_crt(encoder);
- u32 temp;
-
- temp = I915_READ(crt->adpa_reg);
- temp |= ADPA_HSYNC_CNTL_DISABLE | ADPA_VSYNC_CNTL_DISABLE;
- temp &= ~ADPA_DAC_ENABLE;
- I915_WRITE(crt->adpa_reg, temp);
-}
-
-static void intel_enable_crt(struct intel_encoder *encoder)
-{
- struct drm_i915_private *dev_priv = encoder->base.dev->dev_private;
- struct intel_crt *crt = intel_encoder_to_crt(encoder);
- u32 temp;
-
- temp = I915_READ(crt->adpa_reg);
- temp |= ADPA_DAC_ENABLE;
- I915_WRITE(crt->adpa_reg, temp);
-}
-
/* Note: The caller is required to filter out dpms modes not supported by the
* platform. */
static void intel_crt_set_dpms(struct intel_encoder *encoder, int mode)
I915_WRITE(crt->adpa_reg, temp);
}
+static void intel_disable_crt(struct intel_encoder *encoder)
+{
+ intel_crt_set_dpms(encoder, DRM_MODE_DPMS_OFF);
+}
+
+static void intel_enable_crt(struct intel_encoder *encoder)
+{
+ struct intel_crt *crt = intel_encoder_to_crt(encoder);
+
+ intel_crt_set_dpms(encoder, crt->connector->base.dpms);
+}
+
static void intel_crt_dpms(struct drm_connector *connector, int mode)
{
struct drm_device *dev = connector->dev;
}
connector = &intel_connector->base;
+ crt->connector = intel_connector;
drm_connector_init(dev, &intel_connector->base,
&intel_crt_connector_funcs, DRM_MODE_CONNECTOR_VGA);
num_connectors++;
}
+ if (is_cpu_edp)
+ intel_crtc->cpu_transcoder = TRANSCODER_EDP;
+ else
+ intel_crtc->cpu_transcoder = pipe;
+
/* We are not sure yet this won't happen. */
WARN(!HAS_PCH_LPT(dev), "Unexpected PCH type %d\n",
INTEL_PCH_TYPE(dev));
int pipe = intel_crtc->pipe;
int ret;
- if (IS_HASWELL(dev) && intel_pipe_has_type(crtc, INTEL_OUTPUT_EDP))
- intel_crtc->cpu_transcoder = TRANSCODER_EDP;
- else
- intel_crtc->cpu_transcoder = pipe;
-
drm_vblank_pre_modeset(dev, pipe);
ret = dev_priv->display.crtc_mode_set(crtc, mode, adjusted_mode,
{
struct intel_digital_port *intel_dig_port = enc_to_dig_port(encoder);
struct intel_dp *intel_dp = &intel_dig_port->dp;
+ struct drm_device *dev = intel_dp_to_dev(intel_dp);
i2c_del_adapter(&intel_dp->adapter);
drm_encoder_cleanup(encoder);
if (is_edp(intel_dp)) {
cancel_delayed_work_sync(&intel_dp->panel_vdd_work);
+ mutex_lock(&dev->mode_config.mutex);
ironlake_panel_vdd_off_sync(intel_dp);
+ mutex_unlock(&dev->mode_config.mutex);
}
kfree(intel_dig_port);
}
if (dev_priv->backlight_level == 0)
dev_priv->backlight_level = intel_panel_get_max_backlight(dev);
- dev_priv->backlight_enabled = true;
- intel_panel_actually_set_backlight(dev, dev_priv->backlight_level);
-
if (INTEL_INFO(dev)->gen >= 4) {
uint32_t reg, tmp;
}
set_level:
- /* Check the current backlight level and try to set again if it's zero.
- * On some machines, BLC_PWM_CPU_CTL is cleared to zero automatically
- * when BLC_PWM_CPU_CTL2 and BLC_PWM_PCH_CTL1 are written.
+	/* Set the backlight level only after BLC_PWM_CPU_CTL2 and
+	 * BLC_PWM_PCH_CTL1 have been written: BLC_PWM_CPU_CTL may be
+	 * cleared to zero automatically when those registers are set.
*/
- if (!intel_panel_get_backlight(dev))
- intel_panel_actually_set_backlight(dev, dev_priv->backlight_level);
+ dev_priv->backlight_enabled = true;
+ intel_panel_actually_set_backlight(dev, dev_priv->backlight_level);
}
static void intel_panel_init_backlight(struct drm_device *dev)
if (!IS_HASWELL(dev))
return;
+ if (!i915_disable_power_well && !enable)
+ return;
+
tmp = I915_READ(HSW_PWR_WELL_DRIVER);
is_enabled = tmp & HSW_PWR_WELL_STATE;
enable_requested = tmp & HSW_PWR_WELL_ENABLE;
int i;
unsigned char misc = 0;
unsigned char ext_vga[6];
- unsigned char ext_vga_index24;
- unsigned char dac_index90 = 0;
u8 bppshift;
static unsigned char dacvalue[] = {
option2 = 0x0000b000;
break;
case G200_ER:
- dac_index90 = 0;
break;
}
WREG_DAC(i, dacvalue[i]);
}
- if (mdev->type == G200_ER) {
- WREG_DAC(0x90, dac_index90);
- }
-
+ if (mdev->type == G200_ER)
+ WREG_DAC(0x90, 0);
if (option)
pci_write_config_dword(dev->pdev, PCI_MGA_OPTION, option);
if (mdev->type == G200_WB)
ext_vga[1] |= 0x88;
- ext_vga_index24 = 0x05;
-
/* Set pixel clocks */
misc = 0x2d;
WREG8(MGA_MISC_OUT, misc);
}
if (mdev->type == G200_ER)
- WREG_ECRT(24, ext_vga_index24);
+ WREG_ECRT(0x24, 0x5);
if (mdev->type == G200_EV) {
WREG_ECRT(6, 0);
}
}
+static void
+nouveau_bios_shadow_platform(struct nouveau_bios *bios)
+{
+ struct pci_dev *pdev = nv_device(bios)->pdev;
+ size_t size;
+
+ void __iomem *rom = pci_platform_rom(pdev, &size);
+ if (rom && size) {
+ bios->data = kmalloc(size, GFP_KERNEL);
+ if (bios->data) {
+ memcpy_fromio(bios->data, rom, size);
+ bios->size = size;
+ }
+ }
+}
+
static int
nouveau_bios_score(struct nouveau_bios *bios, const bool writeable)
{
{ "PROM", nouveau_bios_shadow_prom, false, 0, 0, NULL },
{ "ACPI", nouveau_bios_shadow_acpi, true, 0, 0, NULL },
{ "PCIROM", nouveau_bios_shadow_pci, true, 0, 0, NULL },
+ { "PLATFORM", nouveau_bios_shadow_platform, true, 0, 0, NULL },
{}
};
struct methods *mthd, *best;
struct nouveau_drm *drm = nouveau_drm(dev);
struct nouveau_device *device = nv_device(drm->device);
struct nouveau_abi16 *abi16 = nouveau_abi16_get(file_priv, dev);
- struct nouveau_abi16_chan *chan, *temp;
+ struct nouveau_abi16_chan *chan = NULL, *temp;
struct nouveau_abi16_ntfy *ntfy;
struct nouveau_object *object;
struct nv_dma_class args = {};
if (unlikely(nv_device(abi16->device)->card_type >= NV_C0))
return nouveau_abi16_put(abi16, -EINVAL);
- list_for_each_entry_safe(chan, temp, &abi16->channels, head) {
- if (chan->chan->handle == (NVDRM_CHAN | info->channel))
+ list_for_each_entry(temp, &abi16->channels, head) {
+ if (temp->chan->handle == (NVDRM_CHAN | info->channel)) {
+ chan = temp;
break;
- chan = NULL;
+ }
}
if (!chan)
{
struct drm_nouveau_gpuobj_free *fini = data;
struct nouveau_abi16 *abi16 = nouveau_abi16_get(file_priv, dev);
- struct nouveau_abi16_chan *chan, *temp;
+ struct nouveau_abi16_chan *chan = NULL, *temp;
struct nouveau_abi16_ntfy *ntfy;
int ret;
if (unlikely(!abi16))
return -ENOMEM;
- list_for_each_entry_safe(chan, temp, &abi16->channels, head) {
- if (chan->chan->handle == (NVDRM_CHAN | fini->channel))
+ list_for_each_entry(temp, &abi16->channels, head) {
+ if (temp->chan->handle == (NVDRM_CHAN | fini->channel)) {
+ chan = temp;
break;
- chan = NULL;
+ }
}
if (!chan)
static struct drm_driver driver;
+static int
+nouveau_drm_vblank_handler(struct nouveau_eventh *event, int head)
+{
+ struct nouveau_drm *drm =
+ container_of(event, struct nouveau_drm, vblank[head]);
+ drm_handle_vblank(drm->dev, head);
+ return NVKM_EVENT_KEEP;
+}
+
static int
nouveau_drm_vblank_enable(struct drm_device *dev, int head)
{
struct nouveau_drm *drm = nouveau_drm(dev);
struct nouveau_disp *pdisp = nouveau_disp(drm->device);
- nouveau_event_get(pdisp->vblank, head, &drm->vblank);
+
+	if (WARN_ON_ONCE(head >= ARRAY_SIZE(drm->vblank)))
+ return -EIO;
+ WARN_ON_ONCE(drm->vblank[head].func);
+ drm->vblank[head].func = nouveau_drm_vblank_handler;
+ nouveau_event_get(pdisp->vblank, head, &drm->vblank[head]);
return 0;
}
{
struct nouveau_drm *drm = nouveau_drm(dev);
struct nouveau_disp *pdisp = nouveau_disp(drm->device);
- nouveau_event_put(pdisp->vblank, head, &drm->vblank);
-}
-
-static int
-nouveau_drm_vblank_handler(struct nouveau_eventh *event, int head)
-{
- struct nouveau_drm *drm =
- container_of(event, struct nouveau_drm, vblank);
- drm_handle_vblank(drm->dev, head);
- return NVKM_EVENT_KEEP;
+ if (drm->vblank[head].func)
+ nouveau_event_put(pdisp->vblank, head, &drm->vblank[head]);
+ else
+ WARN_ON_ONCE(1);
+ drm->vblank[head].func = NULL;
}
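
With the vblank event embedded per head, the handler recovers the nouveau_drm pointer via container_of() on an array element (vblank[head]). A standalone sketch of that idiom; note that a variable array index inside offsetof() relies on the __builtin_offsetof extension of GCC/Clang, which kernel code assumes:

    #include <stddef.h>
    #include <stdio.h>

    #define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

    struct eventh { int head; };
    struct drm_like { const char *name; struct eventh vblank[4]; };

    static void handler(struct eventh *ev, int head)
    {
	/* subtracting offsetof(..., vblank[head]) recovers the container */
	struct drm_like *drm =
		container_of(ev, struct drm_like, vblank[head]);

	printf("%s: vblank on head %d\n", drm->name, head);
    }

    int main(void)
    {
	struct drm_like drm = { .name = "card0" };

	handler(&drm.vblank[2], 2);
	return 0;
    }
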
static u64
dev->dev_private = drm;
drm->dev = dev;
- drm->vblank.func = nouveau_drm_vblank_handler;
INIT_LIST_HEAD(&drm->clients);
spin_lock_init(&drm->tile.lock);
struct nvbios vbios;
struct nouveau_display *display;
struct backlight_device *backlight;
- struct nouveau_eventh vblank;
+ struct nouveau_eventh vblank[4];
/* power management */
struct nouveau_pm *pm;
{
struct nv50_display_flip *flip = data;
if (nouveau_bo_rd32(flip->disp->sync, flip->chan->addr / 4) ==
- flip->chan->data);
+ flip->chan->data)
return true;
usleep_range(1, 2);
return false;
return true;
}
+static bool radeon_read_platform_bios(struct radeon_device *rdev)
+{
+ uint8_t __iomem *bios;
+ size_t size;
+
+ rdev->bios = NULL;
+
+ bios = pci_platform_rom(rdev->pdev, &size);
+ if (!bios) {
+ return false;
+ }
+
+ if (size == 0 || bios[0] != 0x55 || bios[1] != 0xaa) {
+ return false;
+ }
+ rdev->bios = kmemdup(bios, size, GFP_KERNEL);
+ if (rdev->bios == NULL) {
+ return false;
+ }
+
+ return true;
+}
+
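
The fallback above only accepts an image that carries the standard PCI expansion-ROM signature, 0x55 followed by 0xAA, in its first two bytes. A trivial standalone sketch of that test:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    static bool rom_signature_ok(const uint8_t *rom, size_t size)
    {
	/* a valid PCI option ROM image begins with 0x55 0xaa */
	return size >= 2 && rom[0] == 0x55 && rom[1] == 0xaa;
    }

    int main(void)
    {
	const uint8_t good[] = { 0x55, 0xaa, 0x00 };
	const uint8_t bad[]  = { 0xff, 0xff };

	printf("%d %d\n", rom_signature_ok(good, sizeof(good)),
	       rom_signature_ok(bad, sizeof(bad)));
	return 0;
    }
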
#ifdef CONFIG_ACPI
/* ATRM is used to get the BIOS on the discrete cards in
* dual-gpu systems.
if (r == false) {
r = radeon_read_disabled_bios(rdev);
}
+ if (r == false) {
+ r = radeon_read_platform_bios(rdev);
+ }
if (r == false || rdev->bios == NULL) {
DRM_ERROR("Unable to locate a BIOS ROM\n");
rdev->bios = NULL;
int ret;
edid = (struct edid *)udl_get_edid(udl);
+ if (!edid) {
+ drm_mode_connector_update_edid_property(connector, NULL);
+ return 0;
+ }
/*
* We only read the main block, but if the monitor reports extension
{ HID_USB_DEVICE(USB_VENDOR_ID_LD, USB_DEVICE_ID_LD_HYBRID) },
{ HID_USB_DEVICE(USB_VENDOR_ID_LD, USB_DEVICE_ID_LD_HEATCONTROL) },
{ HID_USB_DEVICE(USB_VENDOR_ID_MADCATZ, USB_DEVICE_ID_MADCATZ_BEATPAD) },
- { HID_USB_DEVICE(USB_VENDOR_ID_MASTERKIT, USB_DEVICE_ID_MASTERKIT_MA901RADIO) },
{ HID_USB_DEVICE(USB_VENDOR_ID_MCC, USB_DEVICE_ID_MCC_PMD1024LS) },
{ HID_USB_DEVICE(USB_VENDOR_ID_MCC, USB_DEVICE_ID_MCC_PMD1208LS) },
{ HID_USB_DEVICE(USB_VENDOR_ID_MICROCHIP, USB_DEVICE_ID_PICKIT1) },
hdev->product <= USB_DEVICE_ID_VELLEMAN_K8061_LAST))
return true;
break;
+ case USB_VENDOR_ID_ATMEL_V_USB:
+		/* The Masterkit MA901 usb radio is based on the Atmel
+		 * tiny85 chip and shares its USB ID with many Atmel V-USB
+		 * devices. That radio is handled by the radio-ma901.c
+		 * driver, so we want to ignore the hid device here: check
+		 * the name, bus and product, and ignore it if this is the
+		 * MA901 usb radio.
+		 */
+ if (hdev->product == USB_DEVICE_ID_ATMEL_V_USB &&
+ hdev->bus == BUS_USB &&
+ strncmp(hdev->name, "www.masterkit.ru MA901", 22) == 0)
+ return true;
+ break;
}
if (hdev->type == HID_TYPE_USBMOUSE &&
#define USB_VENDOR_ID_ATMEL 0x03eb
#define USB_DEVICE_ID_ATMEL_MULTITOUCH 0x211c
#define USB_DEVICE_ID_ATMEL_MXT_DIGITIZER 0x2118
+#define USB_VENDOR_ID_ATMEL_V_USB 0x16c0
+#define USB_DEVICE_ID_ATMEL_V_USB 0x05df
#define USB_VENDOR_ID_AUREAL 0x0755
#define USB_DEVICE_ID_AUREAL_W01RN 0x2626
#define USB_VENDOR_ID_MADCATZ 0x0738
#define USB_DEVICE_ID_MADCATZ_BEATPAD 0x4540
-#define USB_VENDOR_ID_MASTERKIT 0x16c0
-#define USB_DEVICE_ID_MASTERKIT_MA901RADIO 0x05df
-
#define USB_VENDOR_ID_MCC 0x09db
#define USB_DEVICE_ID_MCC_PMD1024LS 0x0076
#define USB_DEVICE_ID_MCC_PMD1208LS 0x007a
#define USB_VENDOR_ID_MONTEREY 0x0566
#define USB_DEVICE_ID_GENIUS_KB29E 0x3004
+#define USB_VENDOR_ID_MSI 0x1770
+#define USB_DEVICE_ID_MSI_GX680R_LED_PANEL 0xff00
+
#define USB_VENDOR_ID_NATIONAL_SEMICONDUCTOR 0x0400
#define USB_DEVICE_ID_N_S_HARMONY 0xc359
#define USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3001 0x3001
#define USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3008 0x3008
+#define USB_VENDOR_ID_REALTEK 0x0bda
+#define USB_DEVICE_ID_REALTEK_READER 0x0152
+
#define USB_VENDOR_ID_ROCCAT 0x1e7d
#define USB_DEVICE_ID_ROCCAT_ARVO 0x30d4
#define USB_DEVICE_ID_ROCCAT_ISKU 0x319c
return 0;
}
+static void magicmouse_input_configured(struct hid_device *hdev,
+ struct hid_input *hi)
+{
+ struct magicmouse_sc *msc = hid_get_drvdata(hdev);
+
+ int ret = magicmouse_setup_input(msc->input, hdev);
+ if (ret) {
+ hid_err(hdev, "magicmouse setup input failed (%d)\n", ret);
+ /* clean msc->input to notify probe() of the failure */
+ msc->input = NULL;
+ }
+}
+
static int magicmouse_probe(struct hid_device *hdev,
const struct hid_device_id *id)
{
goto err_free;
}
- /* We do this after hid-input is done parsing reports so that
- * hid-input uses the most natural button and axis IDs.
- */
- if (msc->input) {
- ret = magicmouse_setup_input(msc->input, hdev);
- if (ret) {
- hid_err(hdev, "magicmouse setup input failed (%d)\n", ret);
- goto err_stop_hw;
- }
+ if (!msc->input) {
+ hid_err(hdev, "magicmouse input not registered\n");
+ ret = -ENOMEM;
+ goto err_stop_hw;
}
if (id->product == USB_DEVICE_ID_APPLE_MAGICMOUSE)
.remove = magicmouse_remove,
.raw_event = magicmouse_raw_event,
.input_mapping = magicmouse_input_mapping,
+ .input_configured = magicmouse_input_configured,
};
module_hid_driver(magicmouse_driver);
{
struct mt_device *td = hid_get_drvdata(hid);
__s32 quirks = td->mtclass.quirks;
+ struct input_dev *input = field->hidinput->input;
if (hid->claimed & HID_CLAIMED_INPUT) {
switch (usage->hid) {
break;
default:
+ if (usage->type)
+ input_event(input, usage->type, usage->code,
+ value);
return;
}
if (usage->usage_index + 1 == field->report_count) {
/* we only take into account the last report. */
if (usage->hid == td->last_slot_field)
- mt_complete_slot(td, field->hidinput->input);
+ mt_complete_slot(td, input);
if (field->index == td->last_field_index
&& td->num_received >= td->num_expected)
{ USB_VENDOR_ID_FORMOSA, USB_DEVICE_ID_FORMOSA_IR_RECEIVER, HID_QUIRK_NO_INIT_REPORTS },
{ USB_VENDOR_ID_FREESCALE, USB_DEVICE_ID_FREESCALE_MX28, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_MGE, USB_DEVICE_ID_MGE_UPS, HID_QUIRK_NOGET },
+ { USB_VENDOR_ID_MSI, USB_DEVICE_ID_MSI_GX680R_LED_PANEL, HID_QUIRK_NO_INIT_REPORTS },
{ USB_VENDOR_ID_NOVATEK, USB_DEVICE_ID_NOVATEK_MOUSE, HID_QUIRK_NO_INIT_REPORTS },
{ USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN, HID_QUIRK_NO_INIT_REPORTS },
{ USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN1, HID_QUIRK_NO_INIT_REPORTS },
{ USB_VENDOR_ID_PRODIGE, USB_DEVICE_ID_PRODIGE_CORDLESS, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3001, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3008, HID_QUIRK_NOGET },
+ { USB_VENDOR_ID_REALTEK, USB_DEVICE_ID_REALTEK_READER, HID_QUIRK_NO_INIT_REPORTS },
{ USB_VENDOR_ID_SENNHEISER, USB_DEVICE_ID_SENNHEISER_BTD500USB, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_SIGMATEL, USB_DEVICE_ID_SIGMATEL_STMP3780, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_SUN, USB_DEVICE_ID_RARITAN_KVM_DONGLE, HID_QUIRK_NOGET },
ret = pm_runtime_get_sync(dev);
if (ret < 0) {
dev_err(dev, "%s: can't power on device\n", __func__);
+ pm_runtime_put_noidle(dev);
+ module_put(dev->driver->owner);
return ret;
}
adap->algo = &i2c_dw_algo;
adap->dev.parent = &pdev->dev;
adap->dev.of_node = pdev->dev.of_node;
- ACPI_HANDLE_SET(&adap->dev, ACPI_HANDLE(&pdev->dev));
r = i2c_add_numbered_adapter(adap);
if (r) {
/* PCI DIDs for the Intel SMBus Message Transport (SMT) Devices */
#define PCI_DEVICE_ID_INTEL_S1200_SMT0 0x0c59
#define PCI_DEVICE_ID_INTEL_S1200_SMT1 0x0c5a
+#define PCI_DEVICE_ID_INTEL_AVOTON_SMT 0x1f15
#define ISMT_DESC_ENTRIES 32 /* number of descriptor entries */
#define ISMT_MAX_RETRIES 3 /* number of SMBus retries to attempt */
static const DEFINE_PCI_DEVICE_TABLE(ismt_ids) = {
{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_S1200_SMT0) },
{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_S1200_SMT1) },
+ { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_AVOTON_SMT) },
{ 0, }
};
int clk_multiplier = I2C_CLK_MULTIPLIER_STD_FAST_MODE;
u32 clk_divisor;
- tegra_i2c_clock_enable(i2c_dev);
+ err = tegra_i2c_clock_enable(i2c_dev);
+ if (err < 0) {
+ dev_err(i2c_dev->dev, "Clock enable failed %d\n", err);
+ return err;
+ }
tegra_periph_reset_assert(i2c_dev->div_clk);
udelay(2);
if (i2c_dev->is_suspended)
return -EBUSY;
- tegra_i2c_clock_enable(i2c_dev);
+ ret = tegra_i2c_clock_enable(i2c_dev);
+ if (ret < 0) {
+ dev_err(i2c_dev->dev, "Clock enable failed %d\n", ret);
+ return ret;
+ }
+
for (i = 0; i < num; i++) {
enum msg_end_type end_type = MSG_END_STOP;
if (i < (num - 1)) {
*
* Copyright (c) 2010 Ericsson AB.
*
- * Author: Guenter Roeck <guenter.roeck@ericsson.com>
+ * Author: Guenter Roeck <linux@roeck-us.net>
*
* Derived from:
* pca954x.c
ICPU(0x3c, idle_cpu_hsw),
ICPU(0x3f, idle_cpu_hsw),
ICPU(0x45, idle_cpu_hsw),
+ ICPU(0x46, idle_cpu_hsw),
{}
};
MODULE_DEVICE_TABLE(x86cpu, intel_idle_ids);
wq->rq.queue = dma_alloc_coherent(&(rdev->lldi.pdev->dev),
wq->rq.memsize, &(wq->rq.dma_addr),
GFP_KERNEL);
- if (!wq->rq.queue)
+ if (!wq->rq.queue) {
+ ret = -ENOMEM;
goto free_sq;
+ }
PDBG("%s sq base va 0x%p pa 0x%llx rq base va 0x%p pa 0x%llx\n",
__func__, wq->sq.queue,
(unsigned long long)virt_to_phys(wq->sq.queue),
goto bail;
}
- opcode = be32_to_cpu(ohdr->bth[0]) >> 24;
+ opcode = (be32_to_cpu(ohdr->bth[0]) >> 24) & 0x7f;
dev->opstats[opcode].n_bytes += tlen;
dev->opstats[opcode].n_packets++;
config INFINIBAND_QIB
- tristate "QLogic PCIe HCA support"
+ tristate "Intel PCIe HCA support"
depends on 64BIT
---help---
- This is a low-level driver for QLogic PCIe QLE InfiniBand host
- channel adapters. This driver does not support the QLogic
+ This is a low-level driver for Intel PCIe QLE InfiniBand host
+ channel adapters. This driver does not support the Intel
HyperTransport card (model QHT7140).
/*
+ * Copyright (c) 2013 Intel Corporation. All rights reserved.
* Copyright (c) 2006, 2007, 2008, 2009 QLogic Corporation. All rights reserved.
* Copyright (c) 2003, 2004, 2005, 2006 PathScale, Inc. All rights reserved.
*
"Attempt pre-IBTA 1.2 DDR speed negotiation");
MODULE_LICENSE("Dual BSD/GPL");
-MODULE_AUTHOR("QLogic <support@qlogic.com>");
-MODULE_DESCRIPTION("QLogic IB driver");
+MODULE_AUTHOR("Intel <ibsupport@intel.com>");
+MODULE_DESCRIPTION("Intel IB driver");
MODULE_VERSION(QIB_DRIVER_VERSION);
/*
/*
+ * Copyright (c) 2013 Intel Corporation. All rights reserved.
* Copyright (c) 2006, 2007, 2008, 2009, 2010 QLogic Corporation.
* All rights reserved.
* Copyright (c) 2003, 2004, 2005, 2006 PathScale, Inc. All rights reserved.
/*
* This file contains all the chip-specific register information and
- * access functions for the QLogic QLogic_IB PCI-Express chip.
+ * access functions for the Intel IB PCI-Express chip.
*
*/
/*
- * Copyright (c) 2012 Intel Corporation. All rights reserved.
+ * Copyright (c) 2012, 2013 Intel Corporation. All rights reserved.
* Copyright (c) 2006 - 2012 QLogic Corporation. All rights reserved.
* Copyright (c) 2003, 2004, 2005, 2006 PathScale, Inc. All rights reserved.
*
static void qib_remove_one(struct pci_dev *);
static int qib_init_one(struct pci_dev *, const struct pci_device_id *);
-#define DRIVER_LOAD_MSG "QLogic " QIB_DRV_NAME " loaded: "
+#define DRIVER_LOAD_MSG "Intel " QIB_DRV_NAME " loaded: "
#define PFX QIB_DRV_NAME ": "
static DEFINE_PCI_DEVICE_TABLE(qib_pci_tbl) = {
dd = qib_init_iba6120_funcs(pdev, ent);
#else
qib_early_err(&pdev->dev,
- "QLogic PCIE device 0x%x cannot work if CONFIG_PCI_MSI is not enabled\n",
+ "Intel PCIE device 0x%x cannot work if CONFIG_PCI_MSI is not enabled\n",
ent->device);
dd = ERR_PTR(-ENODEV);
#endif
default:
qib_early_err(&pdev->dev,
- "Failing on unknown QLogic deviceid 0x%x\n",
+ "Failing on unknown Intel deviceid 0x%x\n",
ent->device);
ret = -ENODEV;
}
/*
- * Copyright (c) 2012 Intel Corporation. All rights reserved.
+ * Copyright (c) 2013 Intel Corporation. All rights reserved.
* Copyright (c) 2006 - 2012 QLogic Corporation. All rights reserved.
* Copyright (c) 2003, 2004, 2005, 2006 PathScale, Inc. All rights reserved.
*
/*
- * Copyright (c) 2012 Intel Corporation. All rights reserved.
+ * Copyright (c) 2012, 2013 Intel Corporation. All rights reserved.
* Copyright (c) 2006 - 2012 QLogic Corporation. All rights reserved.
* Copyright (c) 2005, 2006 PathScale, Inc. All rights reserved.
*
ibdev->dma_ops = &qib_dma_mapping_ops;
snprintf(ibdev->node_desc, sizeof(ibdev->node_desc),
- "QLogic Infiniband HCA %s", init_utsname()->nodename);
+ "Intel Infiniband HCA %s", init_utsname()->nodename);
ret = ib_register_device(ibdev, qib_create_port_files);
if (ret)
if (++priv->tx_outstanding == ipoib_sendq_size) {
ipoib_dbg(priv, "TX ring 0x%x full, stopping kernel net queue\n",
tx->qp->qp_num);
- if (ib_req_notify_cq(priv->send_cq, IB_CQ_NEXT_COMP))
- ipoib_warn(priv, "request notify on send CQ failed\n");
netif_stop_queue(dev);
+ rc = ib_req_notify_cq(priv->send_cq,
+ IB_CQ_NEXT_COMP | IB_CQ_REPORT_MISSED_EVENTS);
+ if (rc < 0)
+ ipoib_warn(priv, "request notify on send CQ failed\n");
+ else if (rc)
+ ipoib_send_comp_handler(priv->send_cq, dev);
}
}
}
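
The reordering above stops the queue first and then arms the completion queue with IB_CQ_REPORT_MISSED_EVENTS, whose return value is tri-state: negative on error, zero when armed, and positive when completions may already be pending with no interrupt coming for them, in which case the caller runs the completion handler itself. A hedged standalone sketch of that pattern; the stub below merely stands in for ib_req_notify_cq() and is not the real verbs API:

    #include <stdio.h>

    /* stand-in for the request-notify verb: <0 error, 0 armed, >0 missed */
    static int fake_req_notify(int scenario) { return scenario; }

    static void drain_completions(void) { printf("polling CQ ourselves\n"); }

    static void stop_queue_and_arm(int scenario)
    {
	int rc;

	/* stop the queue first so a racing completion can restart it */
	printf("queue stopped\n");

	rc = fake_req_notify(scenario);
	if (rc < 0)
		printf("request notify failed\n");
	else if (rc > 0)
		/* missed events: no interrupt will arrive for them */
		drain_completions();
    }

    int main(void)
    {
	stop_queue_and_arm(1);
	return 0;
    }
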
case 0x802: /* Intuos4 General Pen */
case 0x804: /* Intuos4 Marker Pen */
case 0x40802: /* Intuos4 Classic Pen */
- case 0x18803: /* DTH2242 Grip Pen */
+ case 0x18802: /* DTH2242 Grip Pen */
case 0x022:
wacom->tool[idx] = BTN_TOOL_PEN;
break;
{ "Wacom Intuos4 12x19", WACOM_PKGLEN_INTUOS, 97536, 60960, 2047,
63, INTUOS4L, WACOM_INTUOS3_RES, WACOM_INTUOS3_RES };
static const struct wacom_features wacom_features_0xBC =
- { "Wacom Intuos4 WL", WACOM_PKGLEN_INTUOS, 40840, 25400, 2047,
+ { "Wacom Intuos4 WL", WACOM_PKGLEN_INTUOS, 40640, 25400, 2047,
63, INTUOS4, WACOM_INTUOS3_RES, WACOM_INTUOS3_RES };
static const struct wacom_features wacom_features_0x26 =
{ "Wacom Intuos5 touch S", WACOM_PKGLEN_INTUOS, 31496, 19685, 2047,
{ USB_DEVICE_WACOM(0x44) },
{ USB_DEVICE_WACOM(0x45) },
{ USB_DEVICE_WACOM(0x59) },
- { USB_DEVICE_WACOM(0x5D) },
+ { USB_DEVICE_DETAILED(0x5D, USB_CLASS_HID, 0, 0) },
{ USB_DEVICE_WACOM(0xB0) },
{ USB_DEVICE_WACOM(0xB1) },
{ USB_DEVICE_WACOM(0xB2) },
{ USB_DEVICE_WACOM(0x47) },
{ USB_DEVICE_WACOM(0xF4) },
{ USB_DEVICE_WACOM(0xF8) },
- { USB_DEVICE_WACOM(0xF6) },
+ { USB_DEVICE_DETAILED(0xF6, USB_CLASS_HID, 0, 0) },
{ USB_DEVICE_WACOM(0xFA) },
{ USB_DEVICE_LENOVO(0x6004) },
{ }
# OMAP IOMMU support
config OMAP_IOMMU
bool "OMAP IOMMU Support"
- depends on ARCH_OMAP
+ depends on ARCH_OMAP2PLUS
select IOMMU_API
config OMAP_IOVMM
/* allocate a protection domain if a device is added */
dma_domain = find_protection_domain(devid);
- if (dma_domain)
- goto out;
- dma_domain = dma_ops_domain_alloc();
- if (!dma_domain)
- goto out;
- dma_domain->target_dev = devid;
-
- spin_lock_irqsave(&iommu_pd_list_lock, flags);
- list_add_tail(&dma_domain->list, &iommu_pd_list);
- spin_unlock_irqrestore(&iommu_pd_list_lock, flags);
-
- dev_data = get_dev_data(dev);
+ if (!dma_domain) {
+ dma_domain = dma_ops_domain_alloc();
+ if (!dma_domain)
+ goto out;
+ dma_domain->target_dev = devid;
+
+ spin_lock_irqsave(&iommu_pd_list_lock, flags);
+ list_add_tail(&dma_domain->list, &iommu_pd_list);
+ spin_unlock_irqrestore(&iommu_pd_list_lock, flags);
+ }
dev->archdata.dma_ops = &amd_iommu_dma_ops;
* BIOS should disable L2B micellaneous clock gating by setting
* L2_L2B_CK_GATE_CONTROL[CKGateL2BMiscDisable](D0F2xF4_x90[2]) = 1b
*/
-static void __init amd_iommu_erratum_746_workaround(struct amd_iommu *iommu)
+static void amd_iommu_erratum_746_workaround(struct amd_iommu *iommu)
{
u32 value;
#include <linux/cpumask.h>
#include <linux/kernel.h>
#include <linux/string.h>
-#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/msi.h>
#include <linux/irq.h>
if (gic_arch_extn.irq_retrigger)
return gic_arch_extn.irq_retrigger(d);
- return -ENXIO;
+ /* the genirq layer expects 0 if we can't retrigger in hardware */
+ return 0;
}
#ifdef CONFIG_SMP
#include "dm.h"
#include "dm-bio-prison.h"
+#include "dm-bio-record.h"
#include "dm-cache-metadata.h"
#include <linux/dm-io.h>
unsigned req_nr:2;
struct dm_deferred_entry *all_io_entry;
- /* writethrough fields */
+ /*
+	 * Writethrough fields. These MUST remain at the end of this
+	 * structure, and the 'cache' member must come first, as it
+	 * is used (via offsetof()) to determine the size without the
+	 * writethrough fields.
+ */
struct cache *cache;
dm_cblock_t cblock;
bio_end_io_t *saved_bi_end_io;
+ struct dm_bio_details bio_details;
};
struct dm_cache_migration {
/*----------------------------------------------------------------
* Per bio data
*--------------------------------------------------------------*/
-static struct per_bio_data *get_per_bio_data(struct bio *bio)
+
+/*
+ * If using writeback, leave out struct per_bio_data's writethrough fields.
+ */
+#define PB_DATA_SIZE_WB (offsetof(struct per_bio_data, cache))
+#define PB_DATA_SIZE_WT (sizeof(struct per_bio_data))
+
+static size_t get_per_bio_data_size(struct cache *cache)
+{
+ return cache->features.write_through ? PB_DATA_SIZE_WT : PB_DATA_SIZE_WB;
+}
+
+static struct per_bio_data *get_per_bio_data(struct bio *bio, size_t data_size)
{
- struct per_bio_data *pb = dm_per_bio_data(bio, sizeof(struct per_bio_data));
+ struct per_bio_data *pb = dm_per_bio_data(bio, data_size);
BUG_ON(!pb);
return pb;
}
-static struct per_bio_data *init_per_bio_data(struct bio *bio)
+static struct per_bio_data *init_per_bio_data(struct bio *bio, size_t data_size)
{
- struct per_bio_data *pb = get_per_bio_data(bio);
+ struct per_bio_data *pb = get_per_bio_data(bio, data_size);
pb->tick = false;
pb->req_nr = dm_bio_get_target_bio_nr(bio);
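
The two size macros above implement a small trick: in writeback mode the writethrough-only members are simply never allocated, because the per-bio data is sized with offsetof() up to the first writethrough field ('cache'), which is why those fields must stay at the end of the structure. A standalone sketch of the same layout arithmetic, with an illustrative structure:

    #include <stddef.h>
    #include <stdio.h>

    struct per_bio_like {
	unsigned tick:1;	/* common fields first */
	unsigned req_nr:2;
	/* writethrough-only tail; 'cache' must come first */
	void *cache;
	int cblock;
    };

    #define SIZE_WB (offsetof(struct per_bio_like, cache))
    #define SIZE_WT (sizeof(struct per_bio_like))

    int main(void)
    {
	printf("writeback per-bio size:    %zu\n", SIZE_WB);
	printf("writethrough per-bio size: %zu\n", SIZE_WT);
	return 0;
    }
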
static void check_if_tick_bio_needed(struct cache *cache, struct bio *bio)
{
unsigned long flags;
- struct per_bio_data *pb = get_per_bio_data(bio);
+ size_t pb_data_size = get_per_bio_data_size(cache);
+ struct per_bio_data *pb = get_per_bio_data(bio, pb_data_size);
spin_lock_irqsave(&cache->lock, flags);
if (cache->need_tick_bio &&
static void writethrough_endio(struct bio *bio, int err)
{
- struct per_bio_data *pb = get_per_bio_data(bio);
+ struct per_bio_data *pb = get_per_bio_data(bio, PB_DATA_SIZE_WT);
bio->bi_end_io = pb->saved_bi_end_io;
if (err) {
return;
}
+ dm_bio_restore(&pb->bio_details, bio);
remap_to_cache(pb->cache, bio, pb->cblock);
/*
static void remap_to_origin_then_cache(struct cache *cache, struct bio *bio,
dm_oblock_t oblock, dm_cblock_t cblock)
{
- struct per_bio_data *pb = get_per_bio_data(bio);
+ struct per_bio_data *pb = get_per_bio_data(bio, PB_DATA_SIZE_WT);
pb->cache = cache;
pb->cblock = cblock;
pb->saved_bi_end_io = bio->bi_end_io;
+ dm_bio_record(&pb->bio_details, bio);
bio->bi_end_io = writethrough_endio;
remap_to_origin_clear_discard(pb->cache, bio, oblock);
static void process_flush_bio(struct cache *cache, struct bio *bio)
{
- struct per_bio_data *pb = get_per_bio_data(bio);
+ size_t pb_data_size = get_per_bio_data_size(cache);
+ struct per_bio_data *pb = get_per_bio_data(bio, pb_data_size);
BUG_ON(bio->bi_size);
if (!pb->req_nr)
dm_oblock_t block = get_bio_block(cache, bio);
struct dm_bio_prison_cell *cell_prealloc, *old_ocell, *new_ocell;
struct policy_result lookup_result;
- struct per_bio_data *pb = get_per_bio_data(bio);
+ size_t pb_data_size = get_per_bio_data_size(cache);
+ struct per_bio_data *pb = get_per_bio_data(bio, pb_data_size);
bool discarded_block = is_discarded_oblock(cache, block);
bool can_migrate = discarded_block || spare_migration_bandwidth(cache);
cache->ti = ca->ti;
ti->private = cache;
- ti->per_bio_data_size = sizeof(struct per_bio_data);
ti->num_flush_bios = 2;
ti->flush_supported = true;
ti->discard_zeroes_data_unsupported = true;
memcpy(&cache->features, &ca->features, sizeof(cache->features));
+ ti->per_bio_data_size = get_per_bio_data_size(cache);
cache->callbacks.congested_fn = cache_is_congested;
dm_table_add_target_callbacks(ti->table, &cache->callbacks);
int r;
dm_oblock_t block = get_bio_block(cache, bio);
+ size_t pb_data_size = get_per_bio_data_size(cache);
bool can_migrate = false;
bool discarded_block;
struct dm_bio_prison_cell *cell;
return DM_MAPIO_REMAPPED;
}
- pb = init_per_bio_data(bio);
+ pb = init_per_bio_data(bio, pb_data_size);
if (bio->bi_rw & (REQ_FLUSH | REQ_FUA | REQ_DISCARD)) {
defer_bio(cache, bio);
{
struct cache *cache = ti->private;
unsigned long flags;
- struct per_bio_data *pb = get_per_bio_data(bio);
+ size_t pb_data_size = get_per_bio_data_size(cache);
+ struct per_bio_data *pb = get_per_bio_data(bio, pb_data_size);
if (pb->tick) {
policy_tick(cache->policy);
queue_io(md, bio);
} else {
/* done with normal IO or empty flush */
+ trace_block_bio_complete(md->queue, bio, io_error);
bio_endio(bio, io_error);
}
}
removed++;
}
}
- if (removed)
- sysfs_notify(&mddev->kobj, NULL,
- "degraded");
-
+ if (removed && mddev->kobj.sd)
+ sysfs_notify(&mddev->kobj, NULL, "degraded");
rdev_for_each(rdev, mddev) {
if (rdev->raid_disk >= 0 &&
static inline int sysfs_link_rdev(struct mddev *mddev, struct md_rdev *rdev)
{
char nm[20];
- if (!test_bit(Replacement, &rdev->flags)) {
+ if (!test_bit(Replacement, &rdev->flags) && mddev->kobj.sd) {
sprintf(nm, "rd%d", rdev->raid_disk);
return sysfs_create_link(&mddev->kobj, &rdev->kobj, nm);
} else
static inline void sysfs_unlink_rdev(struct mddev *mddev, struct md_rdev *rdev)
{
char nm[20];
- if (!test_bit(Replacement, &rdev->flags)) {
+ if (!test_bit(Replacement, &rdev->flags) && mddev->kobj.sd) {
sprintf(nm, "rd%d", rdev->raid_disk);
sysfs_remove_link(&mddev->kobj, nm);
}
return_bi = bi->bi_next;
bi->bi_next = NULL;
bi->bi_size = 0;
+ trace_block_bio_complete(bdev_get_queue(bi->bi_bdev),
+ bi, 0);
bio_endio(bi, 0);
bi = return_bi;
}
bi->bi_next = NULL;
if (rrdev)
set_bit(R5_DOUBLE_LOCKED, &sh->dev[i].flags);
- trace_block_bio_remap(bdev_get_queue(bi->bi_bdev),
- bi, disk_devt(conf->mddev->gendisk),
- sh->dev[i].sector);
+
+ if (conf->mddev->gendisk)
+ trace_block_bio_remap(bdev_get_queue(bi->bi_bdev),
+ bi, disk_devt(conf->mddev->gendisk),
+ sh->dev[i].sector);
generic_make_request(bi);
}
if (rrdev) {
rbi->bi_io_vec[0].bv_offset = 0;
rbi->bi_size = STRIPE_SIZE;
rbi->bi_next = NULL;
- trace_block_bio_remap(bdev_get_queue(rbi->bi_bdev),
- rbi, disk_devt(conf->mddev->gendisk),
- sh->dev[i].sector);
+ if (conf->mddev->gendisk)
+ trace_block_bio_remap(bdev_get_queue(rbi->bi_bdev),
+ rbi, disk_devt(conf->mddev->gendisk),
+ sh->dev[i].sector);
generic_make_request(rbi);
}
if (!rdev && !rrdev) {
int level = conf->level;
if (rcw) {
- /* if we are not expanding this is a proper write request, and
- * there will be bios with new data to be drained into the
- * stripe cache
- */
- if (!expand) {
- sh->reconstruct_state = reconstruct_state_drain_run;
- set_bit(STRIPE_OP_BIODRAIN, &s->ops_request);
- } else
- sh->reconstruct_state = reconstruct_state_run;
-
- set_bit(STRIPE_OP_RECONSTRUCT, &s->ops_request);
for (i = disks; i--; ) {
struct r5dev *dev = &sh->dev[i];
s->locked++;
}
}
+ /* if we are not expanding this is a proper write request, and
+ * there will be bios with new data to be drained into the
+ * stripe cache
+ */
+ if (!expand) {
+ if (!s->locked)
+ /* False alarm, nothing to do */
+ return;
+ sh->reconstruct_state = reconstruct_state_drain_run;
+ set_bit(STRIPE_OP_BIODRAIN, &s->ops_request);
+ } else
+ sh->reconstruct_state = reconstruct_state_run;
+
+ set_bit(STRIPE_OP_RECONSTRUCT, &s->ops_request);
+
if (s->locked + conf->max_degraded == disks)
if (!test_and_set_bit(STRIPE_FULL_WRITE, &sh->state))
atomic_inc(&conf->pending_full_writes);
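
The hunk above reorders the reconstruct setup so the state transition is published only after the scan has actually locked some work; otherwise a "false alarm" would leave the stripe committed to a drain that never happens. A standalone sketch of that scan-first, commit-late pattern, with illustrative types:

    #include <stdio.h>

    struct dev_like { int towrite; int locked; };

    static int schedule_writes(struct dev_like *devs, int n, int *state)
    {
	int locked = 0;

	for (int i = 0; i < n; i++)
		if (devs[i].towrite) {
			devs[i].locked = 1;
			locked++;
		}

	if (!locked)		/* false alarm: nothing to drain */
		return 0;

	*state = 1;		/* only now commit to the drain state */
	return locked;
    }

    int main(void)
    {
	struct dev_like devs[3] = { {0, 0}, {1, 0}, {0, 0} };
	int state = 0;

	printf("locked=%d state=%d\n",
	       schedule_writes(devs, 3, &state), state);
	return 0;
    }
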
BUG_ON(!(test_bit(R5_UPTODATE, &sh->dev[pd_idx].flags) ||
test_bit(R5_Wantcompute, &sh->dev[pd_idx].flags)));
- sh->reconstruct_state = reconstruct_state_prexor_drain_run;
- set_bit(STRIPE_OP_PREXOR, &s->ops_request);
- set_bit(STRIPE_OP_BIODRAIN, &s->ops_request);
- set_bit(STRIPE_OP_RECONSTRUCT, &s->ops_request);
-
for (i = disks; i--; ) {
struct r5dev *dev = &sh->dev[i];
if (i == pd_idx)
s->locked++;
}
}
+ if (!s->locked)
+ /* False alarm - nothing to do */
+ return;
+ sh->reconstruct_state = reconstruct_state_prexor_drain_run;
+ set_bit(STRIPE_OP_PREXOR, &s->ops_request);
+ set_bit(STRIPE_OP_BIODRAIN, &s->ops_request);
+ set_bit(STRIPE_OP_RECONSTRUCT, &s->ops_request);
}
/* keep the parity disk(s) locked while asynchronous operations
int i;
clear_bit(STRIPE_SYNCING, &sh->state);
+ if (test_and_clear_bit(R5_Overlap, &sh->dev[sh->pd_idx].flags))
+ wake_up(&conf->wait_for_overlap);
s->syncing = 0;
s->replacing = 0;
/* There is nothing more to do for sync/check/repair.
{
int i;
struct r5dev *dev;
+ int discard_pending = 0;
for (i = disks; i--; )
if (sh->dev[i].written) {
STRIPE_SECTORS,
!test_bit(STRIPE_DEGRADED, &sh->state),
0);
- }
- } else if (test_bit(R5_Discard, &sh->dev[i].flags))
- clear_bit(R5_Discard, &sh->dev[i].flags);
+ } else if (test_bit(R5_Discard, &dev->flags))
+ discard_pending = 1;
+ }
+ if (!discard_pending &&
+ test_bit(R5_Discard, &sh->dev[sh->pd_idx].flags)) {
+ clear_bit(R5_Discard, &sh->dev[sh->pd_idx].flags);
+ clear_bit(R5_UPTODATE, &sh->dev[sh->pd_idx].flags);
+ if (sh->qd_idx >= 0) {
+ clear_bit(R5_Discard, &sh->dev[sh->qd_idx].flags);
+ clear_bit(R5_UPTODATE, &sh->dev[sh->qd_idx].flags);
+ }
+ /* now that discard is done we can proceed with any sync */
+ clear_bit(STRIPE_DISCARD, &sh->state);
+ if (test_bit(STRIPE_SYNC_REQUESTED, &sh->state))
+ set_bit(STRIPE_HANDLE, &sh->state);
+
+ }
if (test_and_clear_bit(STRIPE_FULL_WRITE, &sh->state))
if (atomic_dec_and_test(&conf->pending_full_writes))
set_bit(STRIPE_HANDLE, &sh->state);
if (rmw < rcw && rmw > 0) {
/* prefer read-modify-write, but need to get some data */
- blk_add_trace_msg(conf->mddev->queue, "raid5 rmw %llu %d",
- (unsigned long long)sh->sector, rmw);
+ if (conf->mddev->queue)
+ blk_add_trace_msg(conf->mddev->queue,
+ "raid5 rmw %llu %d",
+ (unsigned long long)sh->sector, rmw);
for (i = disks; i--; ) {
struct r5dev *dev = &sh->dev[i];
if ((dev->towrite || i == sh->pd_idx) &&
}
}
}
- if (rcw)
+ if (rcw && conf->mddev->queue)
blk_add_trace_msg(conf->mddev->queue, "raid5 rcw %llu %d %d %d",
(unsigned long long)sh->sector,
rcw, qread, test_bit(STRIPE_DELAYED, &sh->state));
return;
}
- if (test_and_clear_bit(STRIPE_SYNC_REQUESTED, &sh->state)) {
- set_bit(STRIPE_SYNCING, &sh->state);
- clear_bit(STRIPE_INSYNC, &sh->state);
+ if (test_bit(STRIPE_SYNC_REQUESTED, &sh->state)) {
+ spin_lock(&sh->stripe_lock);
+ /* Cannot process 'sync' concurrently with 'discard' */
+ if (!test_bit(STRIPE_DISCARD, &sh->state) &&
+ test_and_clear_bit(STRIPE_SYNC_REQUESTED, &sh->state)) {
+ set_bit(STRIPE_SYNCING, &sh->state);
+ clear_bit(STRIPE_INSYNC, &sh->state);
+ }
+ spin_unlock(&sh->stripe_lock);
}
clear_bit(STRIPE_DELAYED, &sh->state);
test_bit(STRIPE_INSYNC, &sh->state)) {
md_done_sync(conf->mddev, STRIPE_SECTORS, 1);
clear_bit(STRIPE_SYNCING, &sh->state);
+ if (test_and_clear_bit(R5_Overlap, &sh->dev[sh->pd_idx].flags))
+ wake_up(&conf->wait_for_overlap);
}
/* If the failed drives are just a ReadError, then we might need
rdev_dec_pending(rdev, conf->mddev);
if (!error && uptodate) {
+ trace_block_bio_complete(bdev_get_queue(raid_bi->bi_bdev),
+ raid_bi, 0);
bio_endio(raid_bi, 0);
if (atomic_dec_and_test(&conf->active_aligned_reads))
wake_up(&conf->wait_for_stripe);
atomic_inc(&conf->active_aligned_reads);
spin_unlock_irq(&conf->device_lock);
- trace_block_bio_remap(bdev_get_queue(align_bi->bi_bdev),
- align_bi, disk_devt(mddev->gendisk),
- raid_bio->bi_sector);
+ if (mddev->gendisk)
+ trace_block_bio_remap(bdev_get_queue(align_bi->bi_bdev),
+ align_bi, disk_devt(mddev->gendisk),
+ raid_bio->bi_sector);
generic_make_request(align_bi);
return 1;
} else {
}
spin_unlock_irq(&conf->device_lock);
}
- trace_block_unplug(mddev->queue, cnt, !from_schedule);
+ if (mddev->queue)
+ trace_block_unplug(mddev->queue, cnt, !from_schedule);
kfree(cb);
}
sh = get_active_stripe(conf, logical_sector, 0, 0, 0);
prepare_to_wait(&conf->wait_for_overlap, &w,
TASK_UNINTERRUPTIBLE);
+ set_bit(R5_Overlap, &sh->dev[sh->pd_idx].flags);
+ if (test_bit(STRIPE_SYNCING, &sh->state)) {
+ release_stripe(sh);
+ schedule();
+ goto again;
+ }
+ clear_bit(R5_Overlap, &sh->dev[sh->pd_idx].flags);
spin_lock_irq(&sh->stripe_lock);
for (d = 0; d < conf->raid_disks; d++) {
if (d == sh->pd_idx || d == sh->qd_idx)
goto again;
}
}
+ set_bit(STRIPE_DISCARD, &sh->state);
finish_wait(&conf->wait_for_overlap, &w);
for (d = 0; d < conf->raid_disks; d++) {
if (d == sh->pd_idx || d == sh->qd_idx)
if ( rw == WRITE )
md_write_end(mddev);
+ trace_block_bio_complete(bdev_get_queue(bi->bi_bdev),
+ bi, 0);
bio_endio(bi, 0);
}
}
handled++;
}
remaining = raid5_dec_bi_active_stripes(raid_bio);
- if (remaining == 0)
+ if (remaining == 0) {
+ trace_block_bio_complete(bdev_get_queue(raid_bio->bi_bdev),
+ raid_bio, 0);
bio_endio(raid_bio, 0);
+ }
if (atomic_dec_and_test(&conf->active_aligned_reads))
wake_up(&conf->wait_for_stripe);
return handled;
struct stripe_operations {
int target, target2;
enum sum_check_flags zero_sum_result;
- #ifdef CONFIG_MULTICORE_RAID456
- unsigned long request;
- wait_queue_head_t wait_for_ops;
- #endif
} ops;
struct r5dev {
/* rreq and rvec are used for the replacement device when
STRIPE_COMPUTE_RUN,
STRIPE_OPS_REQ_PENDING,
STRIPE_ON_UNPLUG_LIST,
+ STRIPE_DISCARD,
};
/*
if (enable) {
if (is_code(code, M5MOLS_RESTYPE_MONITOR))
ret = m5mols_start_monitor(info);
- if (is_code(code, M5MOLS_RESTYPE_CAPTURE))
+ else if (is_code(code, M5MOLS_RESTYPE_CAPTURE))
ret = m5mols_start_capture(info);
else
ret = -EINVAL;
vdelay start of active video in 2 * field lines relative to
trailing edge of /VRESET pulse (VDELAY register).
sheight height of active video in 2 * field lines.
+extraheight	added to sheight for cropcap.bounds.height only.
videostart0 ITU-R frame line number of the line corresponding
to vdelay in the first field. */
#define CROPCAP(minhdelayx1, hdelayx1, swidth, totalwidth, sqwidth, \
- vdelay, sheight, videostart0) \
+ vdelay, sheight, extraheight, videostart0) \
.cropcap.bounds.left = minhdelayx1, \
/* * 2 because vertically we count field lines times two, */ \
/* e.g. 23 * 2 to 23 * 2 + 576 in PAL-BGHI defrect. */ \
.cropcap.bounds.top = (videostart0) * 2 - (vdelay) + MIN_VDELAY, \
/* 4 is a safety margin at the end of the line. */ \
.cropcap.bounds.width = (totalwidth) - (minhdelayx1) - 4, \
- .cropcap.bounds.height = (sheight) + (vdelay) - MIN_VDELAY, \
+ .cropcap.bounds.height = (sheight) + (extraheight) + (vdelay) - \
+ MIN_VDELAY, \
.cropcap.defrect.left = hdelayx1, \
.cropcap.defrect.top = (videostart0) * 2, \
.cropcap.defrect.width = swidth, \
/* totalwidth */ 1135,
/* sqwidth */ 944,
/* vdelay */ 0x20,
- /* bt878 (and bt848?) can capture another
- line below active video. */
- /* sheight */ (576 + 2) + 0x20 - 2,
+ /* sheight */ 576,
+ /* bt878 (and bt848?) can capture another
+ line below active video. */
+ /* extraheight */ 2,
/* videostart0 */ 23)
},{
.v4l2_id = V4L2_STD_NTSC_M | V4L2_STD_NTSC_M_KR,
/* sqwidth */ 780,
/* vdelay */ 0x1a,
/* sheight */ 480,
+ /* extraheight */ 0,
/* videostart0 */ 23)
},{
.v4l2_id = V4L2_STD_SECAM,
/* sqwidth */ 944,
/* vdelay */ 0x20,
/* sheight */ 576,
+ /* extraheight */ 0,
/* videostart0 */ 23)
},{
.v4l2_id = V4L2_STD_PAL_Nc,
/* sqwidth */ 780,
/* vdelay */ 0x1a,
/* sheight */ 576,
+ /* extraheight */ 0,
/* videostart0 */ 23)
},{
.v4l2_id = V4L2_STD_PAL_M,
/* sqwidth */ 780,
/* vdelay */ 0x1a,
/* sheight */ 480,
+ /* extraheight */ 0,
/* videostart0 */ 23)
},{
.v4l2_id = V4L2_STD_PAL_N,
/* sqwidth */ 944,
/* vdelay */ 0x20,
/* sheight */ 576,
+ /* extraheight */ 0,
/* videostart0 */ 23)
},{
.v4l2_id = V4L2_STD_NTSC_M_JP,
/* sqwidth */ 780,
/* vdelay */ 0x16,
/* sheight */ 480,
+ /* extraheight */ 0,
/* videostart0 */ 23)
},{
/* that one hopefully works with the strange timing
/* sqwidth */ 944,
/* vdelay */ 0x1a,
/* sheight */ 480,
+ /* extraheight */ 0,
/* videostart0 */ 23)
}
};
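
With the extraheight parameter split out, cropcap.bounds.height becomes sheight + extraheight + vdelay - MIN_VDELAY, so the two extra lines capturable on bt878 no longer distort sheight itself. A worked example for the PAL-BGHI entry; the MIN_VDELAY value of 2 below is an assumption for illustration only:

    #include <stdio.h>

    #define MIN_VDELAY 2	/* assumed value, not taken from the driver */

    int main(void)
    {
	int sheight = 576, extraheight = 2, vdelay = 0x20;

	/* 576 + 2 + 32 - 2 = 608 field lines of bounds height */
	printf("bounds.height = %d\n",
	       sheight + extraheight + vdelay - MIN_VDELAY);
	return 0;
    }
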
config VIDEO_SH_VEU
tristate "SuperH VEU mem2mem video processing driver"
- depends on VIDEO_DEV && VIDEO_V4L2
+ depends on VIDEO_DEV && VIDEO_V4L2 && GENERIC_HARDIRQS
select VIDEOBUF2_DMA_CONTIG
select V4L2_MEM2MEM_DEV
help
static int gsc_m2m_resume(struct gsc_dev *gsc)
{
+ struct gsc_ctx *ctx;
unsigned long flags;
spin_lock_irqsave(&gsc->slock, flags);
/* Clear for full H/W setup in first run after resume */
+ ctx = gsc->m2m.ctx;
gsc->m2m.ctx = NULL;
spin_unlock_irqrestore(&gsc->slock, flags);
if (test_and_clear_bit(ST_M2M_SUSPENDED, &gsc->state))
- gsc_m2m_job_finish(gsc->m2m.ctx,
- VB2_BUF_STATE_ERROR);
+ gsc_m2m_job_finish(ctx, VB2_BUF_STATE_ERROR);
+
return 0;
}
/* Do not resume if the device was idle before system suspend */
spin_lock_irqsave(&gsc->slock, flags);
if (!test_and_clear_bit(ST_SUSPEND, &gsc->state) ||
- !gsc_m2m_active(gsc)) {
+ !gsc_m2m_opened(gsc)) {
spin_unlock_irqrestore(&gsc->slock, flags);
return 0;
}
static int fimc_m2m_resume(struct fimc_dev *fimc)
{
+ struct fimc_ctx *ctx;
unsigned long flags;
spin_lock_irqsave(&fimc->slock, flags);
/* Clear for full H/W setup in first run after resume */
+ ctx = fimc->m2m.ctx;
fimc->m2m.ctx = NULL;
spin_unlock_irqrestore(&fimc->slock, flags);
if (test_and_clear_bit(ST_M2M_SUSPENDED, &fimc->state))
- fimc_m2m_job_finish(fimc->m2m.ctx,
- VB2_BUF_STATE_ERROR);
+ fimc_m2m_job_finish(ctx, VB2_BUF_STATE_ERROR);
+
return 0;
}
void flite_hw_set_source_format(struct fimc_lite *dev, struct flite_frame *f)
{
enum v4l2_mbus_pixelcode pixelcode = dev->fmt->mbus_code;
- unsigned int i = ARRAY_SIZE(src_pixfmt_map);
+ int i = ARRAY_SIZE(src_pixfmt_map);
u32 cfg;
- while (i-- >= 0) {
+ while (--i >= 0) {
if (src_pixfmt_map[i][0] == pixelcode)
break;
}
{ V4L2_MBUS_FMT_VYUY8_2X8, FLITE_REG_CIODMAFMT_CRYCBY },
};
u32 cfg = readl(dev->regs + FLITE_REG_CIODMAFMT);
- unsigned int i = ARRAY_SIZE(pixcode);
+ int i = ARRAY_SIZE(pixcode);
- while (i-- >= 0)
+ while (--i >= 0)
if (pixcode[i][0] == dev->fmt->mbus_code)
break;
cfg &= ~FLITE_REG_CIODMAFMT_YCBCR_ORDER_MASK;
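
Both loops above are fixed the same way: the counter is made signed and predecremented, because with an unsigned counter the old "i-- >= 0" condition is always true and the index wraps past zero. A standalone demonstration of the corrected form:

    #include <stdio.h>

    int main(void)
    {
	int table[3] = { 10, 20, 30 };
	int i = 3;	/* must be signed for the >= 0 test to work */

	while (--i >= 0)
		printf("table[%d] = %d\n", i, table[i]);
	/* had i been unsigned with "i-- >= 0", it would wrap past 0 */
	return 0;
    }
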
.id = V4L2_CTRL_CLASS_USER | 0x1001,
.type = V4L2_CTRL_TYPE_BOOLEAN,
.name = "Test Pattern 640x480",
+ .step = 1,
};
static int fimc_lite_create_capture_subdev(struct fimc_lite *fimc)
struct fimc_pipeline *pipeline;
struct v4l2_subdev *sd;
struct mutex *lock;
- int ret = 0;
+ int i, ret = 0;
int ref_count;
if (media_entity_type(sink->entity) != MEDIA_ENT_T_V4L2_SUBDEV)
return 0;
}
+ mutex_lock(lock);
+ ref_count = fimc ? fimc->vid_cap.refcnt : fimc_lite->ref_count;
+
if (!(flags & MEDIA_LNK_FL_ENABLED)) {
- int i;
- mutex_lock(lock);
- ret = __fimc_pipeline_close(pipeline);
+ if (ref_count > 0) {
+ ret = __fimc_pipeline_close(pipeline);
+ if (!ret && fimc)
+ fimc_ctrls_delete(fimc->vid_cap.ctx);
+ }
for (i = 0; i < IDX_MAX; i++)
pipeline->subdevs[i] = NULL;
- if (fimc)
- fimc_ctrls_delete(fimc->vid_cap.ctx);
- mutex_unlock(lock);
- return ret;
+ } else if (ref_count > 0) {
+ /*
+ * Link activation. Enable power of pipeline elements only if
+ * the pipeline is already in use, i.e. its video node is open.
+ * Recreate the controls destroyed during the link deactivation.
+ */
+ ret = __fimc_pipeline_open(pipeline,
+ source->entity, true);
+ if (!ret && fimc)
+ ret = fimc_capture_ctrls_create(fimc);
}
- /*
- * Link activation. Enable power of pipeline elements only if the
- * pipeline is already in use, i.e. its video node is opened.
- * Recreate the controls destroyed during the link deactivation.
- */
- mutex_lock(lock);
-
- ref_count = fimc ? fimc->vid_cap.refcnt : fimc_lite->ref_count;
- if (ref_count > 0)
- ret = __fimc_pipeline_open(pipeline, source->entity, true);
- if (!ret && fimc)
- ret = fimc_capture_ctrls_create(fimc);
mutex_unlock(lock);
return ret ? -EPIPE : ret;
unsigned int frame_type;
dspl_y_addr = s5p_mfc_hw_call(dev->mfc_ops, get_dspl_y_adr, dev);
- frame_type = s5p_mfc_hw_call(dev->mfc_ops, get_dec_frame_type, dev);
+ frame_type = s5p_mfc_hw_call(dev->mfc_ops, get_disp_frame_type, ctx);
/* If frame is same as previous then skip and do not dequeue */
if (frame_type == S5P_FIMV_DECODE_FRAME_SKIPPED) {
.minimum = 0,
.maximum = 1,
.default_value = 0,
+ .step = 1,
.menu_skip_mask = 0,
},
{
static int usb_ma901radio_probe(struct usb_interface *intf,
const struct usb_device_id *id)
{
+ struct usb_device *dev = interface_to_usbdev(intf);
struct ma901radio_device *radio;
int retval = 0;
+	/* The Masterkit MA901 usb radio has the same USB ID as many
+	 * other Atmel V-USB devices, so make additional checks to be
+	 * sure that this really is our device.
+	 */
+
+ if (dev->product && dev->manufacturer &&
+ (strncmp(dev->product, "MA901", 5) != 0
+ || strncmp(dev->manufacturer, "www.masterkit.ru", 16) != 0))
+ return -ENODEV;
+
radio = kzalloc(sizeof(struct ma901radio_device), GFP_KERNEL);
if (!radio) {
dev_err(&intf->dev, "kzalloc for ma901radio_device failed\n");
config IR_RX51
tristate "Nokia N900 IR transmitter diode"
- depends on OMAP_DM_TIMER && LIRC && !ARCH_MULTIPLATFORM
+ depends on OMAP_DM_TIMER && ARCH_OMAP2PLUS && LIRC && !ARCH_MULTIPLATFORM
---help---
Say Y or M here if you want to enable support for the IR
transmitter diode built in the Nokia N900 (RX51) device.
videodev-objs += v4l2-compat-ioctl32.o
endif
-obj-$(CONFIG_VIDEO_DEV) += videodev.o
+obj-$(CONFIG_VIDEO_V4L2) += videodev.o
obj-$(CONFIG_VIDEO_V4L2_INT_DEVICE) += v4l2-int-device.o
obj-$(CONFIG_VIDEO_V4L2) += v4l2-common.o
mei_hcsr_set(hw, hcsr);
}
+/**
+ * mei_me_hw_reset_release - release device from the reset
+ *
+ * @dev: the device structure
+ */
+static void mei_me_hw_reset_release(struct mei_device *dev)
+{
+ struct mei_me_hw *hw = to_me_hw(dev);
+ u32 hcsr = mei_hcsr_read(hw);
+
+ hcsr |= H_IG;
+ hcsr &= ~H_RST;
+ mei_hcsr_set(hw, hcsr);
+}
/**
* mei_me_hw_reset - resets fw via mei csr register.
*
if (intr_enable)
hcsr |= H_IE;
else
- hcsr &= ~H_IE;
-
- mei_hcsr_set(hw, hcsr);
-
- hcsr = mei_hcsr_read(hw) | H_IG;
- hcsr &= ~H_RST;
+		hcsr &= ~H_IE;
mei_hcsr_set(hw, hcsr);
- hcsr = mei_hcsr_read(hw);
+ if (dev->dev_state == MEI_DEV_POWER_DOWN)
+ mei_me_hw_reset_release(dev);
- dev_dbg(&dev->pdev->dev, "current HCSR = 0x%08x.\n", hcsr);
+ dev_dbg(&dev->pdev->dev, "current HCSR = 0x%08x.\n", mei_hcsr_read(hw));
}
/**
mutex_unlock(&dev->device_lock);
return IRQ_HANDLED;
} else {
- dev_dbg(&dev->pdev->dev, "FW not ready.\n");
+ dev_dbg(&dev->pdev->dev, "Reset Completed.\n");
+ mei_me_hw_reset_release(dev);
mutex_unlock(&dev->device_lock);
return IRQ_HANDLED;
}
mei_cl_all_write_clear(dev);
}
+void mei_stop(struct mei_device *dev)
+{
+ dev_dbg(&dev->pdev->dev, "stopping the device.\n");
+
+ mutex_lock(&dev->device_lock);
+
+ cancel_delayed_work(&dev->timer_work);
+
+ mei_wd_stop(dev);
+
+ dev->dev_state = MEI_DEV_POWER_DOWN;
+ mei_reset(dev, 0);
+
+ mutex_unlock(&dev->device_lock);
+
+ flush_scheduled_work();
+}
+
void mei_device_init(struct mei_device *dev);
void mei_reset(struct mei_device *dev, int interrupts);
int mei_hw_init(struct mei_device *dev);
+void mei_stop(struct mei_device *dev);
/*
* MEI interrupt functions prototype
hw = to_me_hw(dev);
- mutex_lock(&dev->device_lock);
-
- cancel_delayed_work(&dev->timer_work);
- mei_wd_stop(dev);
+	dev_dbg(&pdev->dev, "stop\n");
+ mei_stop(dev);
mei_pdev = NULL;
- if (dev->iamthif_cl.state == MEI_FILE_CONNECTED) {
- dev->iamthif_cl.state = MEI_FILE_DISCONNECTING;
- mei_cl_disconnect(&dev->iamthif_cl);
- }
- if (dev->wd_cl.state == MEI_FILE_CONNECTED) {
- dev->wd_cl.state = MEI_FILE_DISCONNECTING;
- mei_cl_disconnect(&dev->wd_cl);
- }
-
- /* Unregistering watchdog device */
mei_watchdog_unregister(dev);
- /* remove entry if already in list */
- dev_dbg(&pdev->dev, "list del iamthif and wd file list.\n");
-
- if (dev->open_handle_count > 0)
- dev->open_handle_count--;
- mei_cl_unlink(&dev->wd_cl);
-
- if (dev->open_handle_count > 0)
- dev->open_handle_count--;
- mei_cl_unlink(&dev->iamthif_cl);
-
- dev->iamthif_current_cb = NULL;
- dev->me_clients_num = 0;
-
- mutex_unlock(&dev->device_lock);
-
- flush_scheduled_work();
-
/* disable interrupts */
mei_disable_interrupts(dev);
{
struct pci_dev *pdev = to_pci_dev(device);
struct mei_device *dev = pci_get_drvdata(pdev);
- int err;
if (!dev)
return -ENODEV;
- mutex_lock(&dev->device_lock);
- cancel_delayed_work(&dev->timer_work);
+	dev_dbg(&pdev->dev, "suspend\n");
- /* Stop watchdog if exists */
- err = mei_wd_stop(dev);
- /* Set new mei state */
- if (dev->dev_state == MEI_DEV_ENABLED ||
- dev->dev_state == MEI_DEV_RECOVERING_FROM_RESET) {
- dev->dev_state = MEI_DEV_POWER_DOWN;
- mei_reset(dev, 0);
- }
- mutex_unlock(&dev->device_lock);
+ mei_stop(dev);
+
+ mei_disable_interrupts(dev);
free_irq(pdev->irq, dev);
pci_disable_msi(pdev);
- return err;
+ return 0;
}
static int mei_pci_resume(struct device *device)
config VMWARE_VMCI
tristate "VMware VMCI Driver"
- depends on X86 && PCI
+ depends on X86 && PCI && NET
help
This is VMware's Virtual Machine Communication Interface. It enables
high-speed communication between host and guest in a virtual
struct delayed_datagram_info {
struct datagram_entry *entry;
- struct vmci_datagram msg;
struct work_struct work;
bool in_dg_host_queue;
+ /* msg and msg_payload must be together. */
+ struct vmci_datagram msg;
+ u8 msg_payload[];
};
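The reordering matters because msg_payload[] is a C99 flexible array
member: it must be the last field, and keeping it directly after msg lets
a single allocation hold the datagram header and its payload contiguously.
A minimal sketch of the technique (the struct and helper names here are
illustrative, not the driver's actual ones):

    /* One allocation covers the fixed header plus a variable payload. */
    struct example_dgram {
            struct work_struct work;
            struct vmci_datagram msg;       /* fixed header */
            u8 msg_payload[];               /* flexible array: must be last */
    };

    static struct example_dgram *example_dgram_alloc(size_t payload_size)
    {
            /* sizeof() excludes the flexible array, so add the payload. */
            return kmalloc(sizeof(struct example_dgram) + payload_size,
                           GFP_ATOMIC);
    }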
/* Number of in-flight host->host datagrams */
}
#endif
-static inline unsigned long get_vm_size(struct vm_area_struct *vma)
-{
- return vma->vm_end - vma->vm_start;
-}
-
-static inline resource_size_t get_vm_offset(struct vm_area_struct *vma)
-{
- return (resource_size_t) vma->vm_pgoff << PAGE_SHIFT;
-}
-
-/*
- * Set a new vm offset.
- *
- * Verify that the incoming offset really works as a page offset,
- * and that the offset and size fit in a resource_size_t.
- */
-static inline int set_vm_offset(struct vm_area_struct *vma, resource_size_t off)
-{
- pgoff_t pgoff = off >> PAGE_SHIFT;
- if (off != (resource_size_t) pgoff << PAGE_SHIFT)
- return -EINVAL;
- if (off + get_vm_size(vma) - 1 < off)
- return -EINVAL;
- vma->vm_pgoff = pgoff;
- return 0;
-}
-
/*
* set up a mapping for shared memory segments
*/
struct mtd_file_info *mfi = file->private_data;
struct mtd_info *mtd = mfi->mtd;
struct map_info *map = mtd->priv;
- resource_size_t start, off;
- unsigned long len, vma_len;
/* This is broken because it assumes the MTD device is map-based
and that mtd->priv is a valid struct map_info. It should be
replaced with something that uses the mtd_get_unmapped_area()
operation properly. */
if (0 /*mtd->type == MTD_RAM || mtd->type == MTD_ROM*/) {
- off = get_vm_offset(vma);
- start = map->phys;
- len = PAGE_ALIGN((start & ~PAGE_MASK) + map->size);
- start &= PAGE_MASK;
- vma_len = get_vm_size(vma);
-
- /* Overflow in off+len? */
- if (vma_len + off < off)
- return -EINVAL;
- /* Does it fit in the mapping? */
- if (vma_len + off > len)
- return -EINVAL;
-
- off += start;
- /* Did that overflow? */
- if (off < start)
- return -EINVAL;
- if (set_vm_offset(vma, off) < 0)
- return -EINVAL;
- vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
-
#ifdef pgprot_noncached
- if (file->f_flags & O_DSYNC || off >= __pa(high_memory))
+ if (file->f_flags & O_DSYNC || map->phys >= __pa(high_memory))
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
#endif
- if (io_remap_pfn_range(vma, vma->vm_start, off >> PAGE_SHIFT,
- vma->vm_end - vma->vm_start,
- vma->vm_page_prot))
- return -EAGAIN;
-
- return 0;
+ return vm_iomap_memory(vma, map->phys, map->size);
}
return -ENOSYS;
#else
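The replacement works because vm_iomap_memory() performs the page-offset,
size and overflow validation that the removed helpers open-coded, and then
calls io_remap_pfn_range() itself. A hedged sketch of an mmap handler
built on it (the device structure and its fields are hypothetical):

    static int example_mmap(struct file *file, struct vm_area_struct *vma)
    {
            struct example_dev *edev = file->private_data;

            /* Checks the vma offset/size against the region, then remaps. */
            return vm_iomap_memory(vma, edev->phys_base, edev->region_size);
    }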
if (bond->dev->flags & IFF_ALLMULTI)
dev_set_allmulti(old_active->dev, -1);
+ netif_addr_lock_bh(bond->dev);
netdev_for_each_mc_addr(ha, bond->dev)
dev_mc_del(old_active->dev, ha->addr);
+ netif_addr_unlock_bh(bond->dev);
}
if (new_active) {
if (bond->dev->flags & IFF_ALLMULTI)
dev_set_allmulti(new_active->dev, 1);
+ netif_addr_lock_bh(bond->dev);
netdev_for_each_mc_addr(ha, bond->dev)
dev_mc_add(new_active->dev, ha->addr);
+ netif_addr_unlock_bh(bond->dev);
}
}
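The new locking is needed because the multicast list can be modified
concurrently (e.g. via dev_mc_add()/dev_mc_del() from another context);
netif_addr_lock_bh() takes the device's address-list lock with bottom
halves disabled, making the walk safe. The guarded-walk pattern in
outline (slave_dev stands in for whichever slave is being updated):

    netif_addr_lock_bh(bond->dev);
    netdev_for_each_mc_addr(ha, bond->dev)
            dev_mc_add(slave_dev, ha->addr);  /* propagate to the slave */
    netif_addr_unlock_bh(bond->dev);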
bond_destroy_slave_symlinks(bond_dev, slave_dev);
err_detach:
+ if (!USES_PRIMARY(bond->params.mode)) {
+ netif_addr_lock_bh(bond_dev);
+ bond_mc_list_flush(bond_dev, slave_dev);
+ netif_addr_unlock_bh(bond_dev);
+ }
+ bond_del_vlans_from_slave(bond, slave_dev);
write_lock_bh(&bond->lock);
bond_detach_slave(bond, new_slave);
+ if (bond->primary_slave == new_slave)
+ bond->primary_slave = NULL;
write_unlock_bh(&bond->lock);
+ if (bond->curr_active_slave == new_slave) {
+ read_lock(&bond->lock);
+ write_lock_bh(&bond->curr_slave_lock);
+ bond_change_active_slave(bond, NULL);
+ bond_select_active_slave(bond);
+ write_unlock_bh(&bond->curr_slave_lock);
+ read_unlock(&bond->lock);
+ }
+ slave_disable_netpoll(new_slave);
err_close:
+ slave_dev->priv_flags &= ~IFF_BONDING;
dev_close(slave_dev);
err_unset_master:
return -EINVAL;
}
+ write_unlock_bh(&bond->lock);
/* unregister rx_handler early so bond_handle_frame wouldn't be called
* for this slave anymore.
*/
netdev_rx_handler_unregister(slave_dev);
- write_unlock_bh(&bond->lock);
- synchronize_net();
write_lock_bh(&bond->lock);
if (!all && !bond->params.fail_over_mac) {
struct net_device *slave_dev)
{
struct slave *slave = bond_slave_get_rtnl(slave_dev);
- struct bonding *bond = slave->bond;
- struct net_device *bond_dev = slave->bond->dev;
+ struct bonding *bond;
+ struct net_device *bond_dev;
u32 old_speed;
u8 old_duplex;
+ /* A netdev event can be generated while enslaving a device
+ * before netdev_rx_handler_register is called in which case
+ * slave will be NULL
+ */
+ if (!slave)
+ return NOTIFY_DONE;
+ bond_dev = slave->bond->dev;
+ bond = slave->bond;
+
switch (event) {
case NETDEV_UNREGISTER:
if (bond->setup_by_slave)
*/
static int bond_xmit_hash_policy_l23(struct sk_buff *skb, int count)
{
- struct ethhdr *data = (struct ethhdr *)skb->data;
- struct iphdr *iph;
- struct ipv6hdr *ipv6h;
+ const struct ethhdr *data;
+ const struct iphdr *iph;
+ const struct ipv6hdr *ipv6h;
u32 v6hash;
- __be32 *s, *d;
+ const __be32 *s, *d;
if (skb->protocol == htons(ETH_P_IP) &&
- skb_network_header_len(skb) >= sizeof(*iph)) {
+ pskb_network_may_pull(skb, sizeof(*iph))) {
iph = ip_hdr(skb);
+ data = (struct ethhdr *)skb->data;
return ((ntohl(iph->saddr ^ iph->daddr) & 0xffff) ^
(data->h_dest[5] ^ data->h_source[5])) % count;
} else if (skb->protocol == htons(ETH_P_IPV6) &&
- skb_network_header_len(skb) >= sizeof(*ipv6h)) {
+ pskb_network_may_pull(skb, sizeof(*ipv6h))) {
ipv6h = ipv6_hdr(skb);
+ data = (struct ethhdr *)skb->data;
s = &ipv6h->saddr.s6_addr32[0];
d = &ipv6h->daddr.s6_addr32[0];
v6hash = (s[1] ^ d[1]) ^ (s[2] ^ d[2]) ^ (s[3] ^ d[3]);
static int bond_xmit_hash_policy_l34(struct sk_buff *skb, int count)
{
u32 layer4_xor = 0;
- struct iphdr *iph;
- struct ipv6hdr *ipv6h;
- __be32 *s, *d;
- __be16 *layer4hdr;
+ const struct iphdr *iph;
+ const struct ipv6hdr *ipv6h;
+ const __be32 *s, *d;
+ const __be16 *l4 = NULL;
+ __be16 _l4[2];
+ int noff = skb_network_offset(skb);
+ int poff;
if (skb->protocol == htons(ETH_P_IP) &&
- skb_network_header_len(skb) >= sizeof(*iph)) {
+ pskb_may_pull(skb, noff + sizeof(*iph))) {
iph = ip_hdr(skb);
- if (!ip_is_fragment(iph) &&
- (iph->protocol == IPPROTO_TCP ||
- iph->protocol == IPPROTO_UDP) &&
- (skb_headlen(skb) - skb_network_offset(skb) >=
- iph->ihl * sizeof(u32) + sizeof(*layer4hdr) * 2)) {
- layer4hdr = (__be16 *)((u32 *)iph + iph->ihl);
- layer4_xor = ntohs(*layer4hdr ^ *(layer4hdr + 1));
+ poff = proto_ports_offset(iph->protocol);
+
+ if (!ip_is_fragment(iph) && poff >= 0) {
+ l4 = skb_header_pointer(skb, noff + (iph->ihl << 2) + poff,
+ sizeof(_l4), &_l4);
+ if (l4)
+ layer4_xor = ntohs(l4[0] ^ l4[1]);
}
return (layer4_xor ^
((ntohl(iph->saddr ^ iph->daddr)) & 0xffff)) % count;
} else if (skb->protocol == htons(ETH_P_IPV6) &&
- skb_network_header_len(skb) >= sizeof(*ipv6h)) {
+ pskb_may_pull(skb, noff + sizeof(*ipv6h))) {
ipv6h = ipv6_hdr(skb);
- if ((ipv6h->nexthdr == IPPROTO_TCP ||
- ipv6h->nexthdr == IPPROTO_UDP) &&
- (skb_headlen(skb) - skb_network_offset(skb) >=
- sizeof(*ipv6h) + sizeof(*layer4hdr) * 2)) {
- layer4hdr = (__be16 *)(ipv6h + 1);
- layer4_xor = ntohs(*layer4hdr ^ *(layer4hdr + 1));
+ poff = proto_ports_offset(ipv6h->nexthdr);
+ if (poff >= 0) {
+ l4 = skb_header_pointer(skb, noff + sizeof(*ipv6h) + poff,
+ sizeof(_l4), &_l4);
+ if (l4)
+ layer4_xor = ntohs(l4[0] ^ l4[1]);
}
s = &ipv6h->saddr.s6_addr32[0];
d = &ipv6h->daddr.s6_addr32[0];
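Both branches now rely on skb_header_pointer(), which returns a pointer to
the requested bytes, copying them into the caller's buffer when the packet
data is non-linear and returning NULL when the packet is too short, so a
truncated header can no longer be read past the end of the skb. The shape
of the pattern (the offset and hash variables are assumed from context):

    __be16 _ports[2];
    const __be16 *ports;

    /* Fetch 4 bytes of L4 ports; NULL means the packet is too short. */
    ports = skb_header_pointer(skb, l4_offset, sizeof(_ports), &_ports);
    if (ports)
            hash = ntohs(ports[0] ^ ports[1]);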
static void __net_exit bond_net_exit(struct net *net)
{
struct bond_net *bn = net_generic(net, bond_net_id);
+ struct bonding *bond, *tmp_bond;
+ LIST_HEAD(list);
bond_destroy_sysfs(bn);
bond_destroy_proc_dir(bn);
+
+ /* Kill off any bonds created after unregistering bond rtnl ops */
+ rtnl_lock();
+ list_for_each_entry_safe(bond, tmp_bond, &bn->dev_list, bond_list)
+ unregister_netdevice_queue(bond->dev, &list);
+ unregister_netdevice_many(&list);
+ rtnl_unlock();
}
static struct pernet_operations bond_net_ops = {
sprintf(linkname, "slave_%s", slave->name);
ret = sysfs_create_link(&(master->dev.kobj), &(slave->dev.kobj),
linkname);
+
+ /* free the master link created earlier in case of error */
+ if (ret)
+ sysfs_remove_link(&(slave->dev.kobj), "master");
+
return ret;
}
goto out;
}
if (new_value < 0) {
- pr_err("%s: Invalid arp_interval value %d not in range 1-%d; rejected.\n",
+ pr_err("%s: Invalid arp_interval value %d not in range 0-%d; rejected.\n",
bond->dev->name, new_value, INT_MAX);
ret = -EINVAL;
goto out;
pr_info("%s: Setting ARP monitoring interval to %d.\n",
bond->dev->name, new_value);
bond->params.arp_interval = new_value;
- if (bond->params.miimon) {
- pr_info("%s: ARP monitoring cannot be used with MII monitoring. %s Disabling MII monitoring.\n",
- bond->dev->name, bond->dev->name);
- bond->params.miimon = 0;
- }
- if (!bond->params.arp_targets[0]) {
- pr_info("%s: ARP monitoring has been set up, but no ARP targets have been specified.\n",
- bond->dev->name);
+ if (new_value) {
+ if (bond->params.miimon) {
+ pr_info("%s: ARP monitoring cannot be used with MII monitoring. %s Disabling MII monitoring.\n",
+ bond->dev->name, bond->dev->name);
+ bond->params.miimon = 0;
+ }
+ if (!bond->params.arp_targets[0])
+ pr_info("%s: ARP monitoring has been set up, but no ARP targets have been specified.\n",
+ bond->dev->name);
}
if (bond->dev->flags & IFF_UP) {
/* If the interface is up, we may need to fire off
* timer will get fired off when the open function
* is called.
*/
- cancel_delayed_work_sync(&bond->mii_work);
- queue_delayed_work(bond->wq, &bond->arp_work, 0);
+ if (!new_value) {
+ cancel_delayed_work_sync(&bond->arp_work);
+ } else {
+ cancel_delayed_work_sync(&bond->mii_work);
+ queue_delayed_work(bond->wq, &bond->arp_work, 0);
+ }
}
-
out:
rtnl_unlock();
return ret;
}
if (new_value < 0) {
pr_err("%s: Invalid down delay value %d not in range %d-%d; rejected.\n",
- bond->dev->name, new_value, 1, INT_MAX);
+ bond->dev->name, new_value, 0, INT_MAX);
ret = -EINVAL;
goto out;
} else {
goto out;
}
if (new_value < 0) {
- pr_err("%s: Invalid down delay value %d not in range %d-%d; rejected.\n",
- bond->dev->name, new_value, 1, INT_MAX);
+ pr_err("%s: Invalid up delay value %d not in range %d-%d; rejected.\n",
+ bond->dev->name, new_value, 0, INT_MAX);
ret = -EINVAL;
goto out;
} else {
}
if (new_value < 0) {
pr_err("%s: Invalid miimon value %d not in range %d-%d; rejected.\n",
- bond->dev->name, new_value, 1, INT_MAX);
+ bond->dev->name, new_value, 0, INT_MAX);
ret = -EINVAL;
goto out;
- } else {
- pr_info("%s: Setting MII monitoring interval to %d.\n",
- bond->dev->name, new_value);
- bond->params.miimon = new_value;
- if (bond->params.updelay)
- pr_info("%s: Note: Updating updelay (to %d) since it is a multiple of the miimon value.\n",
- bond->dev->name,
- bond->params.updelay * bond->params.miimon);
- if (bond->params.downdelay)
- pr_info("%s: Note: Updating downdelay (to %d) since it is a multiple of the miimon value.\n",
- bond->dev->name,
- bond->params.downdelay * bond->params.miimon);
- if (bond->params.arp_interval) {
- pr_info("%s: MII monitoring cannot be used with ARP monitoring. Disabling ARP monitoring...\n",
- bond->dev->name);
- bond->params.arp_interval = 0;
- if (bond->params.arp_validate) {
- bond->params.arp_validate =
- BOND_ARP_VALIDATE_NONE;
- }
- }
-
- if (bond->dev->flags & IFF_UP) {
- /* If the interface is up, we may need to fire off
- * the MII timer. If the interface is down, the
- * timer will get fired off when the open function
- * is called.
- */
+ }
+ pr_info("%s: Setting MII monitoring interval to %d.\n",
+ bond->dev->name, new_value);
+ bond->params.miimon = new_value;
+ if (bond->params.updelay)
+ pr_info("%s: Note: Updating updelay (to %d) since it is a multiple of the miimon value.\n",
+ bond->dev->name,
+ bond->params.updelay * bond->params.miimon);
+ if (bond->params.downdelay)
+ pr_info("%s: Note: Updating downdelay (to %d) since it is a multiple of the miimon value.\n",
+ bond->dev->name,
+ bond->params.downdelay * bond->params.miimon);
+ if (new_value && bond->params.arp_interval) {
+ pr_info("%s: MII monitoring cannot be used with ARP monitoring. Disabling ARP monitoring...\n",
+ bond->dev->name);
+ bond->params.arp_interval = 0;
+ if (bond->params.arp_validate)
+ bond->params.arp_validate = BOND_ARP_VALIDATE_NONE;
+ }
+ if (bond->dev->flags & IFF_UP) {
+ /* If the interface is up, we may need to fire off
+ * the MII timer. If the interface is down, the
+ * timer will get fired off when the open function
+ * is called.
+ */
+ if (!new_value) {
+ cancel_delayed_work_sync(&bond->mii_work);
+ } else {
cancel_delayed_work_sync(&bond->arp_work);
queue_delayed_work(bond->wq, &bond->mii_work, 0);
}
struct mcp251x_priv *priv = netdev_priv(net);
struct spi_device *spi = priv->spi;
struct mcp251x_platform_data *pdata = spi->dev.platform_data;
+ unsigned long flags;
int ret;
ret = open_candev(net);
priv->tx_skb = NULL;
priv->tx_len = 0;
+ flags = IRQF_ONESHOT;
+ if (pdata->irq_flags)
+ flags |= pdata->irq_flags;
+ else
+ flags |= IRQF_TRIGGER_FALLING;
+
ret = request_threaded_irq(spi->irq, NULL, mcp251x_can_ist,
- pdata->irq_flags ? pdata->irq_flags : IRQF_TRIGGER_FALLING,
- DEVICE_NAME, priv);
+ flags, DEVICE_NAME, priv);
if (ret) {
dev_err(&spi->dev, "failed to acquire irq %d\n", spi->irq);
if (pdata->transceiver_enable)
config CAN_PEAK_PCMCIA
tristate "PEAK PCAN-PC Card"
depends on PCMCIA
+ depends on HAS_IOPORT
---help---
This driver is for the PCAN-PC Card PCMCIA adapter (1 or 2 channels)
from PEAK-System (http://www.peak-system.com). To compile this
*/
if ((priv->read_reg(priv, REG_CR) & REG_CR_BASICCAN_INITIAL_MASK) ==
REG_CR_BASICCAN_INITIAL &&
- (priv->read_reg(priv, REG_SR) == REG_SR_BASICCAN_INITIAL) &&
+ (priv->read_reg(priv, SJA1000_REG_SR) == REG_SR_BASICCAN_INITIAL) &&
(priv->read_reg(priv, REG_IR) == REG_IR_BASICCAN_INITIAL))
flag = 1;
* See states on p. 23 of the Datasheet.
*/
if (priv->read_reg(priv, REG_MOD) == REG_MOD_PELICAN_INITIAL &&
- priv->read_reg(priv, REG_SR) == REG_SR_PELICAN_INITIAL &&
+ priv->read_reg(priv, SJA1000_REG_SR) == REG_SR_PELICAN_INITIAL &&
priv->read_reg(priv, REG_IR) == REG_IR_PELICAN_INITIAL)
return flag;
*/
spin_lock_irqsave(&priv->cmdreg_lock, flags);
priv->write_reg(priv, REG_CMR, val);
- priv->read_reg(priv, REG_SR);
+ priv->read_reg(priv, SJA1000_REG_SR);
spin_unlock_irqrestore(&priv->cmdreg_lock, flags);
}
while ((isrc = priv->read_reg(priv, REG_IR)) && (n < SJA1000_MAX_IRQ)) {
n++;
- status = priv->read_reg(priv, REG_SR);
+ status = priv->read_reg(priv, SJA1000_REG_SR);
/* check for absent controller due to hw unplug */
if (status == 0xFF && sja1000_is_absent(priv))
return IRQ_NONE;
/* receive interrupt */
while (status & SR_RBS) {
sja1000_rx(dev);
- status = priv->read_reg(priv, REG_SR);
+ status = priv->read_reg(priv, SJA1000_REG_SR);
/* check for absent controller */
if (status == 0xFF && sja1000_is_absent(priv))
return IRQ_NONE;
/* SJA1000 registers - manual section 6.4 (Pelican Mode) */
#define REG_MOD 0x00
#define REG_CMR 0x01
-#define REG_SR 0x02
+#define SJA1000_REG_SR 0x02
#define REG_IR 0x03
#define REG_IER 0x04
#define REG_ALC 0x0B
struct net_device *dev;
struct sja1000_priv *priv;
struct resource res;
- const u32 *prop;
- int err, irq, res_size, prop_size;
+ u32 prop;
+ int err, irq, res_size;
void __iomem *base;
err = of_address_to_resource(np, 0, &res);
priv->read_reg = sja1000_ofp_read_reg;
priv->write_reg = sja1000_ofp_write_reg;
- prop = of_get_property(np, "nxp,external-clock-frequency", &prop_size);
- if (prop && (prop_size == sizeof(u32)))
- priv->can.clock.freq = *prop / 2;
+ err = of_property_read_u32(np, "nxp,external-clock-frequency", &prop);
+ if (!err)
+ priv->can.clock.freq = prop / 2;
else
priv->can.clock.freq = SJA1000_OFP_CAN_CLOCK; /* default */
- prop = of_get_property(np, "nxp,tx-output-mode", &prop_size);
- if (prop && (prop_size == sizeof(u32)))
- priv->ocr |= *prop & OCR_MODE_MASK;
+ err = of_property_read_u32(np, "nxp,tx-output-mode", &prop);
+ if (!err)
+ priv->ocr |= prop & OCR_MODE_MASK;
else
priv->ocr |= OCR_MODE_NORMAL; /* default */
- prop = of_get_property(np, "nxp,tx-output-config", &prop_size);
- if (prop && (prop_size == sizeof(u32)))
- priv->ocr |= (*prop << OCR_TX_SHIFT) & OCR_TX_MASK;
+ err = of_property_read_u32(np, "nxp,tx-output-config", &prop);
+ if (!err)
+ priv->ocr |= (prop << OCR_TX_SHIFT) & OCR_TX_MASK;
else
priv->ocr |= OCR_TX0_PULLDOWN; /* default */
- prop = of_get_property(np, "nxp,clock-out-frequency", &prop_size);
- if (prop && (prop_size == sizeof(u32)) && *prop) {
- u32 divider = priv->can.clock.freq * 2 / *prop;
+ err = of_property_read_u32(np, "nxp,clock-out-frequency", &prop);
+ if (!err && prop) {
+ u32 divider = priv->can.clock.freq * 2 / prop;
if (divider > 1)
priv->cdr |= divider / 2 - 1;
priv->cdr |= CDR_CLK_OFF; /* default */
}
- prop = of_get_property(np, "nxp,no-comparator-bypass", NULL);
- if (!prop)
+ if (!of_property_read_bool(np, "nxp,no-comparator-bypass"))
priv->cdr |= CDR_CBP; /* default */
priv->irq_flags = IRQF_SHARED;
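Each of these conversions leans on of_property_read_u32(), which folds the
of_get_property() lookup and the length check into a single helper that
returns 0 on success and leaves the output untouched on error, so every
fallback-to-default branch becomes a plain error test. The general shape
(the property name, field, and default are placeholders):

    u32 val;

    if (!of_property_read_u32(np, "vendor,example-property", &val))
            priv->setting = val;             /* value from the device tree */
    else
            priv->setting = EXAMPLE_DEFAULT; /* property absent or invalid */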
struct ei_device *ei_local;
struct ax_device *ax;
struct resource *irq, *mem, *mem2;
- resource_size_t mem_size, mem2_size = 0;
+ unsigned long mem_size, mem2_size = 0;
int ret = 0;
dev = ax__alloc_ei_netdev(sizeof(struct ax_device));
/* how about 0x2000 */
#define MAX_TX_BUF_LEN 0x2000
#define MAX_TX_BUF_SHIFT 13
-/*#define MAX_TX_BUF_LEN 0x3000 */
+#define MAX_TSO_SEG_SIZE 0x3c00
/* rrs word 1 bit 0:31 */
#define RRS_RX_CSUM_MASK 0xFFFF
struct atl1e_hw hw;
struct atl1e_hw_stats hw_stats;
- bool have_msi;
u32 wol;
u16 link_speed;
u16 link_duplex;
struct net_device *netdev = adapter->netdev;
free_irq(adapter->pdev->irq, netdev);
-
- if (adapter->have_msi)
- pci_disable_msi(adapter->pdev);
}
static int atl1e_request_irq(struct atl1e_adapter *adapter)
{
struct pci_dev *pdev = adapter->pdev;
struct net_device *netdev = adapter->netdev;
- int flags = 0;
int err = 0;
- adapter->have_msi = true;
- err = pci_enable_msi(pdev);
- if (err) {
- netdev_dbg(netdev,
- "Unable to allocate MSI interrupt Error: %d\n", err);
- adapter->have_msi = false;
- }
-
- if (!adapter->have_msi)
- flags |= IRQF_SHARED;
- err = request_irq(pdev->irq, atl1e_intr, flags, netdev->name, netdev);
+ err = request_irq(pdev->irq, atl1e_intr, IRQF_SHARED, netdev->name,
+ netdev);
if (err) {
netdev_dbg(adapter->netdev,
"Unable to allocate interrupt Error: %d\n", err);
- if (adapter->have_msi)
- pci_disable_msi(pdev);
return err;
}
netdev_dbg(netdev, "atl1e_request_irq OK\n");
INIT_WORK(&adapter->reset_task, atl1e_reset_task);
INIT_WORK(&adapter->link_chg_task, atl1e_link_chg_task);
+ netif_set_gso_max_size(netdev, MAX_TSO_SEG_SIZE);
err = register_netdev(netdev);
if (err) {
netdev_err(netdev, "register netdevice failed\n");
}
}
+ /* initialize FW coalescing state machines in RAM */
+ bnx2x_update_coalesce(bp);
+
/* setup the leading queue */
rc = bnx2x_setup_leading(bp);
if (rc) {
u32 enable_flag = disable ? 0 : (1 << HC_INDEX_DATA_HC_ENABLED_SHIFT);
u32 addr = BAR_CSTRORM_INTMEM +
CSTORM_STATUS_BLOCK_DATA_FLAGS_OFFSET(fw_sb_id, sb_index);
- u16 flags = REG_RD16(bp, addr);
+ u8 flags = REG_RD8(bp, addr);
/* clear and set */
flags &= ~HC_INDEX_DATA_HC_ENABLED;
flags |= enable_flag;
- REG_WR16(bp, addr, flags);
+ REG_WR8(bp, addr, flags);
DP(NETIF_MSG_IFUP,
"port %x fw_sb_id %d sb_index %d disable %d\n",
port, fw_sb_id, sb_index, disable);
break;
default:
BNX2X_ERR("Non valid capability ID\n");
- rval = -EINVAL;
+ rval = 1;
break;
}
} else {
DP(BNX2X_MSG_DCB, "DCB disabled\n");
- rval = -EINVAL;
+ rval = 1;
}
DP(BNX2X_MSG_DCB, "capid %d:%x\n", capid, *cap);
break;
default:
BNX2X_ERR("Non valid TC-ID\n");
- rval = -EINVAL;
+ rval = 1;
break;
}
} else {
DP(BNX2X_MSG_DCB, "DCB disabled\n");
- rval = -EINVAL;
+ rval = 1;
}
return rval;
return -EINVAL;
}
static u8 bnx2x_dcbnl_get_pfc_state(struct net_device *netdev)
{
struct bnx2x *bp = netdev_priv(netdev);
DP(BNX2X_MSG_DCB, "state = %d\n", bp->dcbx_local_feat.pfc.enabled);
break;
default:
BNX2X_ERR("Non valid featrue-ID\n");
- rval = -EINVAL;
+ rval = 1;
break;
}
} else {
DP(BNX2X_MSG_DCB, "DCB disabled\n");
- rval = -EINVAL;
+ rval = 1;
}
return rval;
break;
default:
BNX2X_ERR("Non valid featrue-ID\n");
- rval = -EINVAL;
+ rval = 1;
break;
}
} else {
DP(BNX2X_MSG_DCB, "dcbnl call not valid\n");
- rval = -EINVAL;
+ rval = 1;
}
return rval;
{
struct bnx2x *bp = params->bp;
u16 base_page, next_page, not_kr2_device, lane;
- int sigdet = bnx2x_warpcore_get_sigdet(phy, params);
-
- if (!sigdet) {
- if (!(vars->link_attr_sync & LINK_ATTR_SYNC_KR2_ENABLE))
- bnx2x_kr2_recovery(params, vars, phy);
- return;
- }
+ int sigdet;
/* Once KR2 was disabled, wait 5 seconds before checking KR2 recovery
* since some switches tend to reinit the AN process and clear the
vars->check_kr2_recovery_cnt--;
return;
}
+
+ sigdet = bnx2x_warpcore_get_sigdet(phy, params);
+ if (!sigdet) {
+ if (!(vars->link_attr_sync & LINK_ATTR_SYNC_KR2_ENABLE)) {
+ bnx2x_kr2_recovery(params, vars, phy);
+ DP(NETIF_MSG_LINK, "No sigdet\n");
+ }
+ return;
+ }
+
lane = bnx2x_get_warpcore_lane(phy, params);
CL22_WR_OVER_CL45(bp, phy, MDIO_REG_BANK_AER_BLOCK,
MDIO_AER_BLOCK_AER_REG, lane);
q);
}
- if (!NO_FCOE(bp)) {
+ if (!NO_FCOE(bp) && CNIC_ENABLED(bp)) {
fp = &bp->fp[FCOE_IDX(bp)];
queue_params.q_obj = &bnx2x_sp_obj(bp, fp).q_obj;
REG_RD(bp, NIG_REG_NIG_INT_STS_CLR_0);
}
}
+ if (!CHIP_IS_E1x(bp))
+ /* block FW from writing to host */
+ REG_WR(bp, PGLUE_B_REG_INTERNAL_PFID_ENABLE_MASTER, 0);
+
/* wait until BRB is empty */
tmp_reg = REG_RD(bp, BRB1_REG_NUM_OF_FULL_BLOCKS);
while (timer_count) {
RCU_INIT_POINTER(bp->cnic_ops, NULL);
mutex_unlock(&bp->cnic_mutex);
synchronize_rcu();
+ bp->cnic_enabled = false;
kfree(bp->cnic_kwq);
bp->cnic_kwq = NULL;
if (j + len > block_end)
goto partno;
- memcpy(tp->fw_ver, &vpd_data[j], len);
- strncat(tp->fw_ver, " bc ", vpdlen - len - 1);
+ if (len >= sizeof(tp->fw_ver))
+ len = sizeof(tp->fw_ver) - 1;
+ memset(tp->fw_ver, 0, sizeof(tp->fw_ver));
+ snprintf(tp->fw_ver, sizeof(tp->fw_ver), "%.*s bc ", len,
+ &vpd_data[j]);
}
partno:
#define XGMAC_FLOW_CTRL_FCB_BPA 0x00000001 /* Flow Control Busy ... */
/* XGMAC_INT_STAT reg */
+#define XGMAC_INT_STAT_PMTIM 0x00800000 /* PMT Interrupt Mask */
#define XGMAC_INT_STAT_PMT 0x0080 /* PMT Interrupt Status */
#define XGMAC_INT_STAT_LPI 0x0040 /* LPI Interrupt Status */
writel(DMA_INTR_DEFAULT_MASK, ioaddr + XGMAC_DMA_STATUS);
writel(DMA_INTR_DEFAULT_MASK, ioaddr + XGMAC_DMA_INTR_ENA);
+ /* Mask power mgt interrupt */
+ writel(XGMAC_INT_STAT_PMTIM, ioaddr + XGMAC_INT_STAT);
+
/* XGMAC requires AXI bus init. This is a 'magic number' for now */
writel(0x0077000E, ioaddr + XGMAC_DMA_AXI_BUS);
struct sk_buff *skb;
int frame_len;
+ if (!dma_ring_cnt(priv->rx_head, priv->rx_tail, DMA_RX_RING_SZ))
+ break;
+
entry = priv->rx_tail;
p = priv->dma_rx + entry;
if (desc_get_owner(p))
unsigned int pmt = 0;
if (mode & WAKE_MAGIC)
- pmt |= XGMAC_PMT_POWERDOWN | XGMAC_PMT_MAGIC_PKT;
+ pmt |= XGMAC_PMT_POWERDOWN | XGMAC_PMT_MAGIC_PKT_EN;
if (mode & WAKE_UCAST)
pmt |= XGMAC_PMT_POWERDOWN | XGMAC_PMT_GLBL_UNICAST;
tmp = readl(reg);
}
+/*
+ * Sleep using msleep(), or mdelay() when we are suspending, since
+ * msleep() cannot be used at that point.
+ */
+static void dm9000_msleep(board_info_t *db, unsigned int ms)
+{
+ if (db->in_suspend)
+ mdelay(ms);
+ else
+ msleep(ms);
+}
+
+/* Read a word from phyxcer */
+static int
+dm9000_phy_read(struct net_device *dev, int phy_reg_unused, int reg)
+{
+ board_info_t *db = netdev_priv(dev);
+ unsigned long flags;
+ unsigned int reg_save;
+ int ret;
+
+ mutex_lock(&db->addr_lock);
+
+ spin_lock_irqsave(&db->lock, flags);
+
+ /* Save previous register address */
+ reg_save = readb(db->io_addr);
+
+ /* Fill the phyxcer register into REG_0C */
+ iow(db, DM9000_EPAR, DM9000_PHY | reg);
+
+ /* Issue phyxcer read command */
+ iow(db, DM9000_EPCR, EPCR_ERPRR | EPCR_EPOS);
+
+ writeb(reg_save, db->io_addr);
+ spin_unlock_irqrestore(&db->lock, flags);
+
+ dm9000_msleep(db, 1); /* Wait read complete */
+
+ spin_lock_irqsave(&db->lock, flags);
+ reg_save = readb(db->io_addr);
+
+ iow(db, DM9000_EPCR, 0x0); /* Clear phyxcer read command */
+
+ /* The read data keeps on REG_0D & REG_0E */
+ ret = (ior(db, DM9000_EPDRH) << 8) | ior(db, DM9000_EPDRL);
+
+ /* restore the previous address */
+ writeb(reg_save, db->io_addr);
+ spin_unlock_irqrestore(&db->lock, flags);
+
+ mutex_unlock(&db->addr_lock);
+
+ dm9000_dbg(db, 5, "phy_read[%02x] -> %04x\n", reg, ret);
+ return ret;
+}
+
+/* Write a word to phyxcer */
+static void
+dm9000_phy_write(struct net_device *dev,
+ int phyaddr_unused, int reg, int value)
+{
+ board_info_t *db = netdev_priv(dev);
+ unsigned long flags;
+ unsigned long reg_save;
+
+ dm9000_dbg(db, 5, "phy_write[%02x] = %04x\n", reg, value);
+ mutex_lock(&db->addr_lock);
+
+ spin_lock_irqsave(&db->lock, flags);
+
+ /* Save previous register address */
+ reg_save = readb(db->io_addr);
+
+ /* Fill the phyxcer register into REG_0C */
+ iow(db, DM9000_EPAR, DM9000_PHY | reg);
+
+ /* Fill the written data into REG_0D & REG_0E */
+ iow(db, DM9000_EPDRL, value);
+ iow(db, DM9000_EPDRH, value >> 8);
+
+ /* Issue phyxcer write command */
+ iow(db, DM9000_EPCR, EPCR_EPOS | EPCR_ERPRW);
+
+ writeb(reg_save, db->io_addr);
+ spin_unlock_irqrestore(&db->lock, flags);
+
+ dm9000_msleep(db, 1); /* Wait write complete */
+
+ spin_lock_irqsave(&db->lock, flags);
+ reg_save = readb(db->io_addr);
+
+ iow(db, DM9000_EPCR, 0x0); /* Clear phyxcer write command */
+
+ /* restore the previous address */
+ writeb(reg_save, db->io_addr);
+
+ spin_unlock_irqrestore(&db->lock, flags);
+ mutex_unlock(&db->addr_lock);
+}
+
/* dm9000_set_io
*
* select the specified set of io routines to use with the
iow(db, DM9000_GPCR, GPCR_GEP_CNTL); /* Let GPIO0 output */
+ dm9000_phy_write(dev, 0, MII_BMCR, BMCR_RESET); /* PHY RESET */
+ dm9000_phy_write(dev, 0, MII_DM_DSPCR, DSPCR_INIT_PARAM); /* Init */
+
ncr = (db->flags & DM9000_PLATF_EXT_PHY) ? NCR_EXT_PHY : 0;
/* if wol is needed, then always set NCR_WAKEEN otherwise we end
return 0;
}
-/*
- * Sleep, either by using msleep() or if we are suspending, then
- * use mdelay() to sleep.
- */
-static void dm9000_msleep(board_info_t *db, unsigned int ms)
-{
- if (db->in_suspend)
- mdelay(ms);
- else
- msleep(ms);
-}
-
-/*
- * Read a word from phyxcer
- */
-static int
-dm9000_phy_read(struct net_device *dev, int phy_reg_unused, int reg)
-{
- board_info_t *db = netdev_priv(dev);
- unsigned long flags;
- unsigned int reg_save;
- int ret;
-
- mutex_lock(&db->addr_lock);
-
- spin_lock_irqsave(&db->lock,flags);
-
- /* Save previous register address */
- reg_save = readb(db->io_addr);
-
- /* Fill the phyxcer register into REG_0C */
- iow(db, DM9000_EPAR, DM9000_PHY | reg);
-
- iow(db, DM9000_EPCR, EPCR_ERPRR | EPCR_EPOS); /* Issue phyxcer read command */
-
- writeb(reg_save, db->io_addr);
- spin_unlock_irqrestore(&db->lock,flags);
-
- dm9000_msleep(db, 1); /* Wait read complete */
-
- spin_lock_irqsave(&db->lock,flags);
- reg_save = readb(db->io_addr);
-
- iow(db, DM9000_EPCR, 0x0); /* Clear phyxcer read command */
-
- /* The read data keeps on REG_0D & REG_0E */
- ret = (ior(db, DM9000_EPDRH) << 8) | ior(db, DM9000_EPDRL);
-
- /* restore the previous address */
- writeb(reg_save, db->io_addr);
- spin_unlock_irqrestore(&db->lock,flags);
-
- mutex_unlock(&db->addr_lock);
-
- dm9000_dbg(db, 5, "phy_read[%02x] -> %04x\n", reg, ret);
- return ret;
-}
-
-/*
- * Write a word to phyxcer
- */
-static void
-dm9000_phy_write(struct net_device *dev,
- int phyaddr_unused, int reg, int value)
-{
- board_info_t *db = netdev_priv(dev);
- unsigned long flags;
- unsigned long reg_save;
-
- dm9000_dbg(db, 5, "phy_write[%02x] = %04x\n", reg, value);
- mutex_lock(&db->addr_lock);
-
- spin_lock_irqsave(&db->lock,flags);
-
- /* Save previous register address */
- reg_save = readb(db->io_addr);
-
- /* Fill the phyxcer register into REG_0C */
- iow(db, DM9000_EPAR, DM9000_PHY | reg);
-
- /* Fill the written data into REG_0D & REG_0E */
- iow(db, DM9000_EPDRL, value);
- iow(db, DM9000_EPDRH, value >> 8);
-
- iow(db, DM9000_EPCR, EPCR_EPOS | EPCR_ERPRW); /* Issue phyxcer write command */
-
- writeb(reg_save, db->io_addr);
- spin_unlock_irqrestore(&db->lock, flags);
-
- dm9000_msleep(db, 1); /* Wait write complete */
-
- spin_lock_irqsave(&db->lock,flags);
- reg_save = readb(db->io_addr);
-
- iow(db, DM9000_EPCR, 0x0); /* Clear phyxcer write command */
-
- /* restore the previous address */
- writeb(reg_save, db->io_addr);
-
- spin_unlock_irqrestore(&db->lock, flags);
- mutex_unlock(&db->addr_lock);
-}
-
static void
dm9000_shutdown(struct net_device *dev)
{
db->flags |= DM9000_PLATF_SIMPLE_PHY;
#endif
- dm9000_reset(db);
+ /* Probe-stage reset: instead of calling dm9000_reset(db), assert
+ * NCR_MAC_LBK together with NCR_RST, which is needed to keep the
+ * DM9000 FIFO stable while probing.
+ */
+
+ iow(db, DM9000_NCR, NCR_MAC_LBK | NCR_RST);
/* try multiple times, DM9000 sometimes gets the read wrong */
for (i = 0; i < 8; i++) {
#define NCR_WAKEEN (1<<6)
#define NCR_FCOL (1<<4)
#define NCR_FDX (1<<3)
-#define NCR_LBK (3<<1)
+
+#define NCR_RESERVED (3<<1)
+#define NCR_MAC_LBK (1<<1)
#define NCR_RST (1<<0)
#define NSR_SPEED (1<<7)
#define ISR_LNKCHNG (1<<5)
#define ISR_UNDERRUN (1<<4)
+/* Davicom MII registers.
+ */
+
+#define MII_DM_DSPCR 0x1b /* DSP Control Register */
+
+#define DSPCR_INIT_PARAM 0xE100 /* DSP init parameter */
+
#endif /* _DM9000X_H_ */
if (vlan_tx_tag_present(skb)) {
vlan_tag = be_get_tx_vlan_tag(adapter, skb);
- __vlan_put_tag(skb, vlan_tag);
- skb->vlan_tci = 0;
+ skb = __vlan_put_tag(skb, vlan_tag);
+ if (skb)
+ skb->vlan_tci = 0;
}
return skb;
return NETDEV_TX_OK;
}
+/* Init RX & TX buffer descriptors
+ */
+static void fec_enet_bd_init(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ struct bufdesc *bdp;
+ unsigned int i;
+
+ /* Initialize the receive buffer descriptors. */
+ bdp = fep->rx_bd_base;
+ for (i = 0; i < RX_RING_SIZE; i++) {
+
+ /* Initialize the BD for every fragment in the page. */
+ if (bdp->cbd_bufaddr)
+ bdp->cbd_sc = BD_ENET_RX_EMPTY;
+ else
+ bdp->cbd_sc = 0;
+ bdp = fec_enet_get_nextdesc(bdp, fep->bufdesc_ex);
+ }
+
+ /* Set the last buffer to wrap */
+ bdp = fec_enet_get_prevdesc(bdp, fep->bufdesc_ex);
+ bdp->cbd_sc |= BD_SC_WRAP;
+
+ fep->cur_rx = fep->rx_bd_base;
+
+ /* ...and the same for transmit */
+ bdp = fep->tx_bd_base;
+ fep->cur_tx = bdp;
+ for (i = 0; i < TX_RING_SIZE; i++) {
+
+ /* Initialize the BD for every fragment in the page. */
+ bdp->cbd_sc = 0;
+ if (bdp->cbd_bufaddr && fep->tx_skbuff[i]) {
+ dev_kfree_skb_any(fep->tx_skbuff[i]);
+ fep->tx_skbuff[i] = NULL;
+ }
+ bdp->cbd_bufaddr = 0;
+ bdp = fec_enet_get_nextdesc(bdp, fep->bufdesc_ex);
+ }
+
+ /* Set the last buffer to wrap */
+ bdp = fec_enet_get_prevdesc(bdp, fep->bufdesc_ex);
+ bdp->cbd_sc |= BD_SC_WRAP;
+ fep->dirty_tx = bdp;
+}
+
/* This function is called to start or restart the FEC during a link
* change. This only happens when switching between half and full
* duplex.
/* Set maximum receive buffer size. */
writel(PKT_MAXBLR_SIZE, fep->hwp + FEC_R_BUFF_SIZE);
+ fec_enet_bd_init(ndev);
+
/* Set receive and transmit descriptor base. */
writel(fep->bd_dma, fep->hwp + FEC_R_DES_START);
if (fep->bufdesc_ex)
writel((unsigned long)fep->bd_dma + sizeof(struct bufdesc)
* RX_RING_SIZE, fep->hwp + FEC_X_DES_START);
- fep->cur_rx = fep->rx_bd_base;
for (i = 0; i <= TX_RING_MOD_MASK; i++) {
if (fep->tx_skbuff[i]) {
} else {
if (fep->link) {
fec_stop(ndev);
+ fep->link = phy_dev->link;
status_change = 1;
}
}
static void fec_enet_free_buffers(struct net_device *ndev)
{
struct fec_enet_private *fep = netdev_priv(ndev);
- int i;
+ unsigned int i;
struct sk_buff *skb;
struct bufdesc *bdp;
static int fec_enet_alloc_buffers(struct net_device *ndev)
{
struct fec_enet_private *fep = netdev_priv(ndev);
- int i;
+ unsigned int i;
struct sk_buff *skb;
struct bufdesc *bdp;
{
struct fec_enet_private *fep = netdev_priv(ndev);
struct bufdesc *cbd_base;
- struct bufdesc *bdp;
- int i;
/* Allocate memory for buffer descriptors. */
cbd_base = dma_alloc_coherent(NULL, PAGE_SIZE, &fep->bd_dma,
return -ENOMEM;
}
+ memset(cbd_base, 0, PAGE_SIZE);
spin_lock_init(&fep->hw_lock);
fep->netdev = ndev;
writel(FEC_RX_DISABLED_IMASK, fep->hwp + FEC_IMASK);
netif_napi_add(ndev, &fep->napi, fec_enet_rx_napi, FEC_NAPI_WEIGHT);
- /* Initialize the receive buffer descriptors. */
- bdp = fep->rx_bd_base;
- for (i = 0; i < RX_RING_SIZE; i++) {
-
- /* Initialize the BD for every fragment in the page. */
- bdp->cbd_sc = 0;
- bdp = fec_enet_get_nextdesc(bdp, fep->bufdesc_ex);
- }
-
- /* Set the last buffer to wrap */
- bdp = fec_enet_get_prevdesc(bdp, fep->bufdesc_ex);
- bdp->cbd_sc |= BD_SC_WRAP;
-
- /* ...and the same for transmit */
- bdp = fep->tx_bd_base;
- fep->cur_tx = bdp;
- for (i = 0; i < TX_RING_SIZE; i++) {
-
- /* Initialize the BD for every fragment in the page. */
- bdp->cbd_sc = 0;
- bdp->cbd_bufaddr = 0;
- bdp = fec_enet_get_nextdesc(bdp, fep->bufdesc_ex);
- }
-
- /* Set the last buffer to wrap */
- bdp = fec_enet_get_prevdesc(bdp, fep->bufdesc_ex);
- bdp->cbd_sc |= BD_SC_WRAP;
- fep->dirty_tx = bdp;
-
fec_restart(ndev, 0);
return 0;
spin_unlock_irqrestore(&fep->tmreg_lock, flags);
}
+EXPORT_SYMBOL(fec_ptp_start_cyclecounter);
/**
* fec_ptp_adjfreq - adjust ptp cycle frequency
return copy_to_user(ifr->ifr_data, &config, sizeof(config)) ?
-EFAULT : 0;
}
+EXPORT_SYMBOL(fec_ptp_ioctl);
/**
* fec_time_keep - call timecounter_read every second to avoid timer overrun
pr_info("registered PHC device on %s\n", ndev->name);
}
}
+EXPORT_SYMBOL(fec_ptp_init);
}
static int e100_exec_cb(struct nic *nic, struct sk_buff *skb,
- void (*cb_prepare)(struct nic *, struct cb *, struct sk_buff *))
+ int (*cb_prepare)(struct nic *, struct cb *, struct sk_buff *))
{
struct cb *cb;
unsigned long flags;
nic->cbs_avail--;
cb->skb = skb;
+ err = cb_prepare(nic, cb, skb);
+ if (err)
+ goto err_unlock;
+
if (unlikely(!nic->cbs_avail))
err = -ENOSPC;
- cb_prepare(nic, cb, skb);
/* Order is important otherwise we'll be in a race with h/w:
* set S-bit in current first, then clear S-bit in previous. */
nic->mii.mdio_write = mdio_write;
}
-static void e100_configure(struct nic *nic, struct cb *cb, struct sk_buff *skb)
+static int e100_configure(struct nic *nic, struct cb *cb, struct sk_buff *skb)
{
struct config *config = &cb->u.config;
u8 *c = (u8 *)config;
netif_printk(nic, hw, KERN_DEBUG, nic->netdev,
"[16-23]=%02X:%02X:%02X:%02X:%02X:%02X:%02X:%02X\n",
c[16], c[17], c[18], c[19], c[20], c[21], c[22], c[23]);
+ return 0;
}
/*************************************************************************
return fw;
}
-static void e100_setup_ucode(struct nic *nic, struct cb *cb,
+static int e100_setup_ucode(struct nic *nic, struct cb *cb,
struct sk_buff *skb)
{
const struct firmware *fw = (void *)skb;
cb->u.ucode[min_size] |= cpu_to_le32((BUNDLESMALL) ? 0xFFFF : 0xFF80);
cb->command = cpu_to_le16(cb_ucode | cb_el);
+ return 0;
}
static inline int e100_load_ucode_wait(struct nic *nic)
return err;
}
-static void e100_setup_iaaddr(struct nic *nic, struct cb *cb,
+static int e100_setup_iaaddr(struct nic *nic, struct cb *cb,
struct sk_buff *skb)
{
cb->command = cpu_to_le16(cb_iaaddr);
memcpy(cb->u.iaaddr, nic->netdev->dev_addr, ETH_ALEN);
+ return 0;
}
-static void e100_dump(struct nic *nic, struct cb *cb, struct sk_buff *skb)
+static int e100_dump(struct nic *nic, struct cb *cb, struct sk_buff *skb)
{
cb->command = cpu_to_le16(cb_dump);
cb->u.dump_buffer_addr = cpu_to_le32(nic->dma_addr +
offsetof(struct mem, dump_buf));
+ return 0;
}
static int e100_phy_check_without_mii(struct nic *nic)
return 0;
}
-static void e100_multi(struct nic *nic, struct cb *cb, struct sk_buff *skb)
+static int e100_multi(struct nic *nic, struct cb *cb, struct sk_buff *skb)
{
struct net_device *netdev = nic->netdev;
struct netdev_hw_addr *ha;
memcpy(&cb->u.multi.addr[i++ * ETH_ALEN], &ha->addr,
ETH_ALEN);
}
+ return 0;
}
static void e100_set_multicast_list(struct net_device *netdev)
round_jiffies(jiffies + E100_WATCHDOG_PERIOD));
}
-static void e100_xmit_prepare(struct nic *nic, struct cb *cb,
+static int e100_xmit_prepare(struct nic *nic, struct cb *cb,
struct sk_buff *skb)
{
+ dma_addr_t dma_addr;
cb->command = nic->tx_command;
+ dma_addr = pci_map_single(nic->pdev,
+ skb->data, skb->len, PCI_DMA_TODEVICE);
+ /* If we can't map the skb, have the upper layer try later */
+ if (pci_dma_mapping_error(nic->pdev, dma_addr))
+ return -ENOMEM;
+
/*
* Use the last 4 bytes of the SKB payload packet as the CRC, used for
* testing, ie sending frames with bad CRC.
cb->u.tcb.tcb_byte_count = 0;
cb->u.tcb.threshold = nic->tx_threshold;
cb->u.tcb.tbd_count = 1;
- cb->u.tcb.tbd.buf_addr = cpu_to_le32(pci_map_single(nic->pdev,
- skb->data, skb->len, PCI_DMA_TODEVICE));
- /* check for mapping failure? */
+ cb->u.tcb.tbd.buf_addr = cpu_to_le32(dma_addr);
cb->u.tcb.tbd.size = cpu_to_le16(skb->len);
skb_tx_timestamp(skb);
+ return 0;
}
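The switch to an int-returning cb_prepare lets a failed DMA mapping be
reported before the descriptor is handed to hardware rather than being
silently ignored. The mapping check itself follows the usual pattern
(sketch; pdev and skb are assumed from the surrounding code):

    dma_addr_t dma_addr;

    dma_addr = pci_map_single(pdev, skb->data, skb->len, PCI_DMA_TODEVICE);
    if (pci_dma_mapping_error(pdev, dma_addr))
            return -ENOMEM;         /* upper layer will retry the transmit */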
static netdev_tx_t e100_xmit_frame(struct sk_buff *skb,
txdr->buffer_info[i].dma =
dma_map_single(&pdev->dev, skb->data, skb->len,
DMA_TO_DEVICE);
+ if (dma_mapping_error(&pdev->dev, txdr->buffer_info[i].dma)) {
+ ret_val = 4;
+ goto err_nomem;
+ }
tx_desc->buffer_addr = cpu_to_le64(txdr->buffer_info[i].dma);
tx_desc->lower.data = cpu_to_le32(skb->len);
tx_desc->lower.data |= cpu_to_le32(E1000_TXD_CMD_EOP |
rxdr->buffer_info = kcalloc(rxdr->count, sizeof(struct e1000_buffer),
GFP_KERNEL);
if (!rxdr->buffer_info) {
- ret_val = 4;
+ ret_val = 5;
goto err_nomem;
}
rxdr->desc = dma_alloc_coherent(&pdev->dev, rxdr->size, &rxdr->dma,
GFP_KERNEL);
if (!rxdr->desc) {
- ret_val = 5;
+ ret_val = 6;
goto err_nomem;
}
memset(rxdr->desc, 0, rxdr->size);
skb = alloc_skb(E1000_RXBUFFER_2048 + NET_IP_ALIGN, GFP_KERNEL);
if (!skb) {
- ret_val = 6;
+ ret_val = 7;
goto err_nomem;
}
skb_reserve(skb, NET_IP_ALIGN);
rxdr->buffer_info[i].dma =
dma_map_single(&pdev->dev, skb->data,
E1000_RXBUFFER_2048, DMA_FROM_DEVICE);
+ if (dma_mapping_error(&pdev->dev, rxdr->buffer_info[i].dma)) {
+ ret_val = 8;
+ goto err_nomem;
+ }
rx_desc->buffer_addr = cpu_to_le64(rxdr->buffer_info[i].dma);
memset(skb->data, 0x00, skb->len);
}
}
}
- if (!buffer_info->dma)
+ if (!buffer_info->dma) {
buffer_info->dma = dma_map_page(&pdev->dev,
buffer_info->page, 0,
PAGE_SIZE,
DMA_FROM_DEVICE);
+ if (dma_mapping_error(&pdev->dev, buffer_info->dma)) {
+ adapter->alloc_rx_buff_failed++;
+ break;
+ }
+ }
rx_desc = E1000_RX_DESC_EXT(*rx_ring, i);
rx_desc->read.buffer_addr = cpu_to_le64(buffer_info->dma);
**/
void igb_vmdq_set_anti_spoofing_pf(struct e1000_hw *hw, bool enable, int pf)
{
- u32 dtxswc;
+ u32 reg_val, reg_offset;
switch (hw->mac.type) {
case e1000_82576:
+ reg_offset = E1000_DTXSWC;
+ break;
case e1000_i350:
- dtxswc = rd32(E1000_DTXSWC);
- if (enable) {
- dtxswc |= (E1000_DTXSWC_MAC_SPOOF_MASK |
- E1000_DTXSWC_VLAN_SPOOF_MASK);
- /* The PF can spoof - it has to in order to
- * support emulation mode NICs */
- dtxswc ^= (1 << pf | 1 << (pf + MAX_NUM_VFS));
- } else {
- dtxswc &= ~(E1000_DTXSWC_MAC_SPOOF_MASK |
- E1000_DTXSWC_VLAN_SPOOF_MASK);
- }
- wr32(E1000_DTXSWC, dtxswc);
+ reg_offset = E1000_TXSWC;
break;
default:
- break;
+ return;
+ }
+
+ reg_val = rd32(reg_offset);
+ if (enable) {
+ reg_val |= (E1000_DTXSWC_MAC_SPOOF_MASK |
+ E1000_DTXSWC_VLAN_SPOOF_MASK);
+ /* The PF can spoof - it has to in order to
+ * support emulation mode NICs
+ */
+ reg_val ^= (1 << pf | 1 << (pf + MAX_NUM_VFS));
+ } else {
+ reg_val &= ~(E1000_DTXSWC_MAC_SPOOF_MASK |
+ E1000_DTXSWC_VLAN_SPOOF_MASK);
}
+ wr32(reg_offset, reg_val);
}
/**
enum e1000_ring_flags_t {
IGB_RING_FLAG_RX_SCTP_CSUM,
IGB_RING_FLAG_RX_LB_VLAN_BSWAP,
- IGB_RING_FLAG_RX_BUILD_SKB_ENABLED,
IGB_RING_FLAG_TX_CTX_IDX,
IGB_RING_FLAG_TX_DETECT_HANG
};
-#define ring_uses_build_skb(ring) \
- test_bit(IGB_RING_FLAG_RX_BUILD_SKB_ENABLED, &(ring)->flags)
-#define set_ring_build_skb_enabled(ring) \
- set_bit(IGB_RING_FLAG_RX_BUILD_SKB_ENABLED, &(ring)->flags)
-#define clear_ring_build_skb_enabled(ring) \
- clear_bit(IGB_RING_FLAG_RX_BUILD_SKB_ENABLED, &(ring)->flags)
-
#define IGB_TXD_DCMD (E1000_ADVTXD_DCMD_EOP | E1000_ADVTXD_DCMD_RS)
#define IGB_RX_DESC(R, i) \
#include <linux/pci.h>
#ifdef CONFIG_IGB_HWMON
-struct i2c_board_info i350_sensor_info = {
+static struct i2c_board_info i350_sensor_info = {
I2C_BOARD_INFO("i350bb", (0Xf8 >> 1)),
};
if ((hw->mac.type == e1000_i210) || (hw->mac.type == e1000_i211))
return;
- igb_enable_sriov(pdev, max_vfs);
pci_sriov_set_totalvfs(pdev, 7);
+ igb_enable_sriov(pdev, max_vfs);
#endif /* CONFIG_PCI_IOV */
}
if (max_vfs > 7) {
dev_warn(&pdev->dev,
"Maximum of 7 VFs per PF, using max\n");
- adapter->vfs_allocated_count = 7;
+ max_vfs = adapter->vfs_allocated_count = 7;
} else
adapter->vfs_allocated_count = max_vfs;
if (adapter->vfs_allocated_count)
wr32(E1000_RXDCTL(reg_idx), rxdctl);
}
-static void igb_set_rx_buffer_len(struct igb_adapter *adapter,
- struct igb_ring *rx_ring)
-{
-#define IGB_MAX_BUILD_SKB_SIZE \
- (SKB_WITH_OVERHEAD(IGB_RX_BUFSZ) - \
- (NET_SKB_PAD + NET_IP_ALIGN + IGB_TS_HDR_LEN))
-
- /* set build_skb flag */
- if (adapter->max_frame_size <= IGB_MAX_BUILD_SKB_SIZE)
- set_ring_build_skb_enabled(rx_ring);
- else
- clear_ring_build_skb_enabled(rx_ring);
-}
-
/**
* igb_configure_rx - Configure receive Unit after Reset
* @adapter: board private structure
/* Setup the HW Rx Head and Tail Descriptor Pointers and
* the Base and Length of the Rx Descriptor Ring */
- for (i = 0; i < adapter->num_rx_queues; i++) {
- struct igb_ring *rx_ring = adapter->rx_ring[i];
- igb_set_rx_buffer_len(adapter, rx_ring);
- igb_configure_rx_ring(adapter, rx_ring);
- }
+ for (i = 0; i < adapter->num_rx_queues; i++)
+ igb_configure_rx_ring(adapter, adapter->rx_ring[i]);
}
/**
return igb_can_reuse_rx_page(rx_buffer, page, truesize);
}
-static struct sk_buff *igb_build_rx_buffer(struct igb_ring *rx_ring,
- union e1000_adv_rx_desc *rx_desc)
-{
- struct igb_rx_buffer *rx_buffer;
- struct sk_buff *skb;
- struct page *page;
- void *page_addr;
- unsigned int size = le16_to_cpu(rx_desc->wb.upper.length);
-#if (PAGE_SIZE < 8192)
- unsigned int truesize = IGB_RX_BUFSZ;
-#else
- unsigned int truesize = SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) +
- SKB_DATA_ALIGN(NET_SKB_PAD +
- NET_IP_ALIGN +
- size);
-#endif
-
- /* If we spanned a buffer we have a huge mess so test for it */
- BUG_ON(unlikely(!igb_test_staterr(rx_desc, E1000_RXD_STAT_EOP)));
-
- rx_buffer = &rx_ring->rx_buffer_info[rx_ring->next_to_clean];
- page = rx_buffer->page;
- prefetchw(page);
-
- page_addr = page_address(page) + rx_buffer->page_offset;
-
- /* prefetch first cache line of first page */
- prefetch(page_addr + NET_SKB_PAD + NET_IP_ALIGN);
-#if L1_CACHE_BYTES < 128
- prefetch(page_addr + L1_CACHE_BYTES + NET_SKB_PAD + NET_IP_ALIGN);
-#endif
-
- /* build an skb to around the page buffer */
- skb = build_skb(page_addr, truesize);
- if (unlikely(!skb)) {
- rx_ring->rx_stats.alloc_failed++;
- return NULL;
- }
-
- /* we are reusing so sync this buffer for CPU use */
- dma_sync_single_range_for_cpu(rx_ring->dev,
- rx_buffer->dma,
- rx_buffer->page_offset,
- IGB_RX_BUFSZ,
- DMA_FROM_DEVICE);
-
- /* update pointers within the skb to store the data */
- skb_reserve(skb, NET_IP_ALIGN + NET_SKB_PAD);
- __skb_put(skb, size);
-
- /* pull timestamp out of packet data */
- if (igb_test_staterr(rx_desc, E1000_RXDADV_STAT_TSIP)) {
- igb_ptp_rx_pktstamp(rx_ring->q_vector, skb->data, skb);
- __skb_pull(skb, IGB_TS_HDR_LEN);
- }
-
- if (igb_can_reuse_rx_page(rx_buffer, page, truesize)) {
- /* hand second half of page back to the ring */
- igb_reuse_rx_page(rx_ring, rx_buffer);
- } else {
- /* we are not reusing the buffer so unmap it */
- dma_unmap_page(rx_ring->dev, rx_buffer->dma,
- PAGE_SIZE, DMA_FROM_DEVICE);
- }
-
- /* clear contents of buffer_info */
- rx_buffer->dma = 0;
- rx_buffer->page = NULL;
-
- return skb;
-}
-
static struct sk_buff *igb_fetch_rx_buffer(struct igb_ring *rx_ring,
union e1000_adv_rx_desc *rx_desc,
struct sk_buff *skb)
rmb();
/* retrieve a buffer from the ring */
- if (ring_uses_build_skb(rx_ring))
- skb = igb_build_rx_buffer(rx_ring, rx_desc);
- else
- skb = igb_fetch_rx_buffer(rx_ring, rx_desc, skb);
+ skb = igb_fetch_rx_buffer(rx_ring, rx_desc, skb);
/* exit if we failed to retrieve a buffer */
if (!skb)
return true;
}
-static inline unsigned int igb_rx_offset(struct igb_ring *rx_ring)
-{
- if (ring_uses_build_skb(rx_ring))
- return NET_SKB_PAD + NET_IP_ALIGN;
- else
- return 0;
-}
-
/**
* igb_alloc_rx_buffers - Replace used receive buffers; packet split
* @adapter: address of board private structure
* Refresh the desc even if buffer_addrs didn't change
* because each write-back erases this info.
*/
- rx_desc->read.pkt_addr = cpu_to_le64(bi->dma +
- bi->page_offset +
- igb_rx_offset(rx_ring));
+ rx_desc->read.pkt_addr = cpu_to_le64(bi->dma + bi->page_offset);
rx_desc++;
bi++;
case e1000_82576:
snprintf(adapter->ptp_caps.name, 16, "%pm", netdev->dev_addr);
adapter->ptp_caps.owner = THIS_MODULE;
- adapter->ptp_caps.max_adj = 1000000000;
+ adapter->ptp_caps.max_adj = 999999881;
adapter->ptp_caps.n_ext_ts = 0;
adapter->ptp_caps.pps = 0;
adapter->ptp_caps.adjfreq = igb_ptp_adjfreq_82576;
skb->data,
adapter->rx_buffer_len,
DMA_FROM_DEVICE);
+ if (dma_mapping_error(&pdev->dev, buffer_info->dma)) {
+ adapter->alloc_rx_buff_failed++;
+ break;
+ }
rx_desc = IXGB_RX_DESC(*rx_ring, i);
rx_desc->buff_addr = cpu_to_le64(buffer_info->dma);
rx_desc->status = 0;
- if (++i == rx_ring->count) i = 0;
+ if (++i == rx_ring->count)
+ i = 0;
buffer_info = &rx_ring->buffer_info[i];
}
ixgbe_dbg_init();
#endif /* CONFIG_DEBUG_FS */
+ ret = pci_register_driver(&ixgbe_driver);
+ if (ret) {
+#ifdef CONFIG_DEBUG_FS
+ ixgbe_dbg_exit();
+#endif /* CONFIG_DEBUG_FS */
+ return ret;
+ }
+
#ifdef CONFIG_IXGBE_DCA
dca_register_notify(&dca_notifier);
#endif
- ret = pci_register_driver(&ixgbe_driver);
- return ret;
+ return 0;
}
module_init(ixgbe_init_module);
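The reordering follows the usual module-init rule: the step that can fail,
pci_register_driver(), runs before optional side registrations such as the
DCA notifier, and on failure only the setup that already happened is
unwound. A condensed sketch of the shape (names are hypothetical):

    static int __init example_init_module(void)
    {
            int ret;

            example_dbg_init();                     /* earlier setup */
            ret = pci_register_driver(&example_driver);
            if (ret) {
                    example_dbg_exit();             /* unwind and bail out */
                    return ret;
            }
            /* Optional notifiers only once the driver is registered. */
            return 0;
    }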
if ((vf >= adapter->num_vfs) || (vlan > 4095) || (qos > 7))
return -EINVAL;
if (vlan || qos) {
+ if (adapter->vfinfo[vf].pf_vlan)
+ err = ixgbe_set_vf_vlan(adapter, false,
+ adapter->vfinfo[vf].pf_vlan,
+ vf);
+ if (err)
+ goto out;
err = ixgbe_set_vf_vlan(adapter, true, vlan, vf);
if (err)
goto out;
free_irq(adapter->msix_entries[vector].vector,
adapter->q_vector[vector]);
}
- pci_disable_msix(adapter->pdev);
- kfree(adapter->msix_entries);
- adapter->msix_entries = NULL;
+ /* This failure is non-recoverable - it indicates the system is
+ * out of MSIX vector resources and the VF driver cannot run
+ * without them. Set the number of msix vectors to zero
+ * indicating that not enough can be allocated. The error
+ * will be returned to the user indicating device open failed.
+ * Any further attempts to force the driver to open will also
+ * fail. The only way to recover is to unload the driver and
+ * reload it again. If the system has recovered some MSIX
+ * vectors then it may succeed.
+ */
+ adapter->num_msix_vectors = 0;
return err;
}
struct ixgbe_hw *hw = &adapter->hw;
int err;
+ /* A previous failure to open the device because of a lack of
+ * available MSIX vector resources may have reset the number
+ * of msix vectors variable to zero. The only way to recover
+ * is to unload/reload the driver and hope that the system has
+ * been able to recover some MSIX vector resources.
+ */
+ if (!adapter->num_msix_vectors)
+ return -ENOMEM;
+
/* disallow open during test */
if (test_bit(__IXGBEVF_TESTING, &adapter->state))
return -EBUSY;
err_req_irq:
ixgbevf_down(adapter);
- ixgbevf_free_irq(adapter);
err_setup_rx:
ixgbevf_free_all_rx_resources(adapter);
err_setup_tx:
return 0;
err_free:
- kfree(dev);
+ free_netdev(dev);
err_out:
return err;
}
config MVMDIO
tristate "Marvell MDIO interface support"
+ select PHYLIB
---help---
This driver supports the MDIO interface found in the network
interface units of the Marvell EBU SoCs (Kirkwood, Orion5x,
config MVNETA
tristate "Marvell Armada 370/XP network interface support"
depends on MACH_ARMADA_370_XP
- select PHYLIB
select MVMDIO
---help---
This driver supports the network interface units in the
static int txq_number = 8;
static int rxq_def;
-static int txq_def;
#define MVNETA_DRIVER_NAME "mvneta"
#define MVNETA_DRIVER_VERSION "1.0"
static int mvneta_tx(struct sk_buff *skb, struct net_device *dev)
{
struct mvneta_port *pp = netdev_priv(dev);
- struct mvneta_tx_queue *txq = &pp->txqs[txq_def];
+ u16 txq_id = skb_get_queue_mapping(skb);
+ struct mvneta_tx_queue *txq = &pp->txqs[txq_id];
struct mvneta_tx_desc *tx_desc;
struct netdev_queue *nq;
int frags = 0;
goto out;
frags = skb_shinfo(skb)->nr_frags + 1;
- nq = netdev_get_tx_queue(dev, txq_def);
+ nq = netdev_get_tx_queue(dev, txq_id);
/* Get a descriptor for the first part of the packet */
tx_desc = mvneta_txq_next_desc_get(txq);
return -EINVAL;
}
- dev = alloc_etherdev_mq(sizeof(struct mvneta_port), 8);
+ dev = alloc_etherdev_mqs(sizeof(struct mvneta_port), txq_number, rxq_number);
if (!dev)
return -ENOMEM;
netif_napi_add(dev, &pp->napi, mvneta_poll, pp->weight);
+ dev->features = NETIF_F_SG | NETIF_F_IP_CSUM;
+ dev->hw_features |= NETIF_F_SG | NETIF_F_IP_CSUM;
+ dev->vlan_features |= NETIF_F_SG | NETIF_F_IP_CSUM;
+ dev->priv_flags |= IFF_UNICAST_FLT;
+
err = register_netdev(dev);
if (err < 0) {
dev_err(&pdev->dev, "failed to register\n");
goto err_deinit;
}
- dev->features = NETIF_F_SG | NETIF_F_IP_CSUM;
- dev->hw_features = NETIF_F_SG | NETIF_F_IP_CSUM;
- dev->priv_flags |= IFF_UNICAST_FLT;
-
netdev_info(dev, "mac: %pM\n", dev->dev_addr);
platform_set_drvdata(pdev, pp->dev);
module_param(txq_number, int, S_IRUGO);
module_param(rxq_def, int, S_IRUGO);
-module_param(txq_def, int, S_IRUGO);
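With the fixed txq_def parameter gone, the driver sizes the netdev for its
real queue counts and honours the transmit queue the core selected for
each skb. The two halves of the pattern, condensed (error handling
omitted):

    /* Probe: create a netdev with the actual TX/RX queue counts. */
    dev = alloc_etherdev_mqs(sizeof(struct mvneta_port),
                             txq_number, rxq_number);

    /* Transmit: use the queue the stack mapped this skb to. */
    u16 txq_id = skb_get_queue_mapping(skb);
    struct netdev_queue *nq = netdev_get_tx_queue(dev, txq_id);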
sky2_write32(hw, RB_ADDR(q, RB_RX_UTHP), tp);
sky2_write32(hw, RB_ADDR(q, RB_RX_LTHP), space/2);
- tp = space - 2048/8;
+ tp = space - 8192/8;
sky2_write32(hw, RB_ADDR(q, RB_RX_UTPP), tp);
sky2_write32(hw, RB_ADDR(q, RB_RX_LTPP), space/4);
} else {
GM_IS_RX_FF_OR = 1<<1, /* Receive FIFO Overrun */
GM_IS_RX_COMPL = 1<<0, /* Frame Reception Complete */
-#define GMAC_DEF_MSK GM_IS_TX_FF_UR
+#define GMAC_DEF_MSK (GM_IS_TX_FF_UR | GM_IS_RX_FF_OR)
};
/* GMAC_LINK_CTRL 16 bit GMAC Link Control Reg (YUKON only) */
static void mlx4_en_u64_to_mac(unsigned char dst_mac[ETH_ALEN + 2], u64 src_mac)
{
- unsigned int i;
- for (i = ETH_ALEN - 1; i; --i) {
+ int i;
+ for (i = ETH_ALEN - 1; i >= 0; --i) {
dst_mac[i] = src_mac & 0xff;
src_mac >>= 8;
}
/* Flush multicast filter */
mlx4_SET_MCAST_FLTR(mdev->dev, priv->port, 0, 1, MLX4_MCAST_CONFIG);
+ /* Remove flow steering rules for the port*/
+ if (mdev->dev->caps.steering_mode ==
+ MLX4_STEERING_MODE_DEVICE_MANAGED) {
+ ASSERT_RTNL();
+ list_for_each_entry_safe(flow, tmp_flow,
+ &priv->ethtool_list, list) {
+ mlx4_flow_detach(mdev->dev, flow->id);
+ list_del(&flow->list);
+ }
+ }
+
mlx4_en_destroy_drop_qp(priv);
/* Free TX Rings */
if (!(mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAGS2_REASSIGN_MAC_EN))
mdev->mac_removed[priv->port] = 1;
- /* Remove flow steering rules for the port*/
- if (mdev->dev->caps.steering_mode ==
- MLX4_STEERING_MODE_DEVICE_MANAGED) {
- ASSERT_RTNL();
- list_for_each_entry_safe(flow, tmp_flow,
- &priv->ethtool_list, list) {
- mlx4_flow_detach(mdev->dev, flow->id);
- list_del(&flow->list);
- }
- }
-
/* Free RX Rings */
for (i = 0; i < priv->rx_ring_num; i++) {
mlx4_en_deactivate_rx_ring(priv, &priv->rx_ring[i]);
struct mlx4_slave_event_eq_info *event_eq =
priv->mfunc.master.slave_state[slave].event_eq;
u32 in_modifier = vhcr->in_modifier;
- u32 eqn = in_modifier & 0x1FF;
+ u32 eqn = in_modifier & 0x3FF;
u64 in_param = vhcr->in_param;
int err = 0;
int i;
struct list_head mcg_list;
spinlock_t mcg_spl;
int local_qpn;
+ atomic_t ref_count;
};
enum res_mtt_states {
struct res_fs_rule {
struct res_common com;
+ int qpn;
};
static void *res_tracker_lookup(struct rb_root *root, u64 res_id)
return dev->caps.num_mpts - 1;
}
-static void *find_res(struct mlx4_dev *dev, int res_id,
+static void *find_res(struct mlx4_dev *dev, u64 res_id,
enum mlx4_resource type)
{
struct mlx4_priv *priv = mlx4_priv(dev);
ret->local_qpn = id;
INIT_LIST_HEAD(&ret->mcg_list);
spin_lock_init(&ret->mcg_spl);
+ atomic_set(&ret->ref_count, 0);
return &ret->com;
}
return &ret->com;
}
-static struct res_common *alloc_fs_rule_tr(u64 id)
+static struct res_common *alloc_fs_rule_tr(u64 id, int qpn)
{
struct res_fs_rule *ret;
ret->com.res_id = id;
ret->com.state = RES_FS_RULE_ALLOCATED;
-
+ ret->qpn = qpn;
return &ret->com;
}
ret = alloc_xrcdn_tr(id);
break;
case RES_FS_RULE:
- ret = alloc_fs_rule_tr(id);
+ ret = alloc_fs_rule_tr(id, extra);
break;
default:
return NULL;
static int remove_qp_ok(struct res_qp *res)
{
- if (res->com.state == RES_QP_BUSY)
+ if (res->com.state == RES_QP_BUSY || atomic_read(&res->ref_count) ||
+ !list_empty(&res->mcg_list)) {
+ pr_err("resource tracker: fail to remove qp, state %d, ref_count %d\n",
+ res->com.state, atomic_read(&res->ref_count));
return -EBUSY;
- else if (res->com.state != RES_QP_RESERVED)
+ } else if (res->com.state != RES_QP_RESERVED) {
return -EPERM;
+ }
return 0;
}
struct list_head *rlist = &tracker->slave_list[slave].res_list[RES_MAC];
int err;
int qpn;
+ struct res_qp *rqp;
struct mlx4_net_trans_rule_hw_ctrl *ctrl;
struct _rule_hw *rule_header;
int header_id;
ctrl = (struct mlx4_net_trans_rule_hw_ctrl *)inbox->buf;
qpn = be32_to_cpu(ctrl->qpn) & 0xffffff;
- err = get_res(dev, slave, qpn, RES_QP, NULL);
+ err = get_res(dev, slave, qpn, RES_QP, &rqp);
if (err) {
pr_err("Steering rule with qpn 0x%x rejected.\n", qpn);
return err;
if (err)
goto err_put;
- err = add_res_range(dev, slave, vhcr->out_param, 1, RES_FS_RULE, 0);
+ err = add_res_range(dev, slave, vhcr->out_param, 1, RES_FS_RULE, qpn);
if (err) {
mlx4_err(dev, "Fail to add flow steering resources.\n ");
/* detach rule*/
mlx4_cmd(dev, vhcr->out_param, 0, 0,
MLX4_QP_FLOW_STEERING_DETACH, MLX4_CMD_TIME_CLASS_A,
MLX4_CMD_NATIVE);
+ goto err_put;
}
+ atomic_inc(&rqp->ref_count);
err_put:
put_res(dev, slave, qpn, RES_QP);
return err;
struct mlx4_cmd_info *cmd)
{
int err;
+ struct res_qp *rqp;
+ struct res_fs_rule *rrule;
if (dev->caps.steering_mode !=
MLX4_STEERING_MODE_DEVICE_MANAGED)
return -EOPNOTSUPP;
+ err = get_res(dev, slave, vhcr->in_param, RES_FS_RULE, &rrule);
+ if (err)
+ return err;
+ /* Release the rule from busy state before removal */
+ put_res(dev, slave, vhcr->in_param, RES_FS_RULE);
+ err = get_res(dev, slave, rrule->qpn, RES_QP, &rqp);
+ if (err)
+ return err;
+
err = rem_res_range(dev, slave, vhcr->in_param, 1, RES_FS_RULE, 0);
if (err) {
mlx4_err(dev, "Fail to remove flow steering resources.\n ");
- return err;
+ goto out;
}
err = mlx4_cmd(dev, vhcr->in_param, 0, 0,
MLX4_QP_FLOW_STEERING_DETACH, MLX4_CMD_TIME_CLASS_A,
MLX4_CMD_NATIVE);
+ if (!err)
+ atomic_dec(&rqp->ref_count);
+out:
+ put_res(dev, slave, rrule->qpn, RES_QP);
return err;
}
mutex_lock(&priv->mfunc.master.res_tracker.slave_list[slave].mutex);
/*VLAN*/
rem_slave_macs(dev, slave);
+ rem_slave_fs_rule(dev, slave);
rem_slave_qps(dev, slave);
rem_slave_srqs(dev, slave);
rem_slave_cqs(dev, slave);
rem_slave_mtts(dev, slave);
rem_slave_counters(dev, slave);
rem_slave_xrcdns(dev, slave);
- rem_slave_fs_rule(dev, slave);
mutex_unlock(&priv->mfunc.master.res_tracker.slave_list[slave].mutex);
}
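Taken together, these hunks give each QP resource an atomic reference
count tracking the flow-steering rules attached to it, so remove_qp_ok()
can refuse to free a QP that rules still point at. The lifecycle,
condensed into a sketch:

    atomic_set(&rqp->ref_count, 0);         /* QP resource allocated */
    atomic_inc(&rqp->ref_count);            /* steering rule attached */
    atomic_dec(&rqp->ref_count);            /* steering rule detached */

    /* Removal bails out while any rule still references the QP. */
    if (atomic_read(&rqp->ref_count))
            return -EBUSY;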
for (; rxfc != 0; rxfc--) {
rxh = ks8851_rdreg32(ks, KS_RXFHSR);
rxstat = rxh & 0xffff;
- rxlen = rxh >> 16;
+ rxlen = (rxh >> 16) & 0xfff;
netif_dbg(ks, rx_status, ks->netdev,
"rx: stat 0x%04x, len 0x%04x\n", rxstat, rxlen);
}
platform_set_drvdata(pdev, ndev);
- if (lpc_mii_init(pldat) != 0)
+ ret = lpc_mii_init(pldat);
+ if (ret)
goto err_out_unregister_netdev;
netdev_info(ndev, "LPC mac at 0x%08x irq %d\n",
skb->protocol = eth_type_trans(skb, netdev);
if (tcp_ip_status & PCH_GBE_RXD_ACC_STAT_TCPIPOK)
- skb->ip_summed = CHECKSUM_NONE;
- else
skb->ip_summed = CHECKSUM_UNNECESSARY;
+ else
+ skb->ip_summed = CHECKSUM_NONE;
napi_gro_receive(&adapter->napi, skb);
(*work_done)++;
}
} while ((adapter->ahw->linkup && ahw->has_link_events) != 1);
+ /* Make sure carrier is off and queue is stopped during loopback */
+ if (netif_running(netdev)) {
+ netif_carrier_off(netdev);
+ netif_stop_queue(netdev);
+ }
+
ret = qlcnic_do_lb_test(adapter, mode);
qlcnic_83xx_clear_lb_mode(adapter, mode);
void qlcnic_83xx_get_stats(struct qlcnic_adapter *adapter, u64 *data)
{
struct qlcnic_cmd_args cmd;
+ struct net_device *netdev = adapter->netdev;
int ret = 0;
qlcnic_alloc_mbx_args(&cmd, adapter, QLCNIC_CMD_GET_STATISTICS);
data = qlcnic_83xx_fill_stats(adapter, &cmd, data,
QLC_83XX_STAT_TX, &ret);
if (ret) {
- dev_info(&adapter->pdev->dev, "Error getting MAC stats\n");
+ netdev_err(netdev, "Error getting Tx stats\n");
goto out;
}
/* Get MAC stats */
data = qlcnic_83xx_fill_stats(adapter, &cmd, data,
QLC_83XX_STAT_MAC, &ret);
if (ret) {
- dev_info(&adapter->pdev->dev,
- "Error getting Rx stats\n");
+ netdev_err(netdev, "Error getting MAC stats\n");
goto out;
}
/* Get Rx stats */
data = qlcnic_83xx_fill_stats(adapter, &cmd, data,
QLC_83XX_STAT_RX, &ret);
if (ret)
- dev_info(&adapter->pdev->dev,
- "Error getting Tx stats\n");
+ netdev_err(netdev, "Error getting Rx stats\n");
out:
qlcnic_free_mbx_args(&cmd);
}
memcpy(&first_desc->eth_addr, skb->data, ETH_ALEN);
}
opcode = TX_ETHER_PKT;
- if ((adapter->netdev->features & (NETIF_F_TSO | NETIF_F_TSO6)) &&
- skb_shinfo(skb)->gso_size > 0) {
+ if (skb_is_gso(skb)) {
hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
first_desc->mss = cpu_to_le16(skb_shinfo(skb)->gso_size);
first_desc->total_hdr_length = hdr_len;
}
err = qlcnic_config_led(adapter, b_state, b_rate);
- if (!err)
+ if (!err) {
err = len;
- else
ahw->beacon_state = b_state;
+ }
if (test_and_clear_bit(__QLCNIC_DIAG_RES_ALLOC, &adapter->state))
qlcnic_diag_free_res(adapter->netdev, max_sds_rings);
*/
#define DRV_NAME "qlge"
#define DRV_STRING "QLogic 10 Gigabit PCI-E Ethernet Driver "
-#define DRV_VERSION "v1.00.00.31"
+#define DRV_VERSION "v1.00.00.32"
#define WQ_ADDR_ALIGN 0x3 /* 4 byte alignment */
ecmd->supported = SUPPORTED_10000baseT_Full;
ecmd->advertising = ADVERTISED_10000baseT_Full;
- ecmd->autoneg = AUTONEG_ENABLE;
ecmd->transceiver = XCVR_EXTERNAL;
if ((qdev->link_status & STS_LINK_TYPE_MASK) ==
STS_LINK_TYPE_10GBASET) {
ecmd->supported |= (SUPPORTED_TP | SUPPORTED_Autoneg);
ecmd->advertising |= (ADVERTISED_TP | ADVERTISED_Autoneg);
ecmd->port = PORT_TP;
+ ecmd->autoneg = AUTONEG_ENABLE;
} else {
ecmd->supported |= SUPPORTED_FIBRE;
ecmd->advertising |= ADVERTISED_FIBRE;
}
/* Categorizing receive firmware frame errors */
-static void ql_categorize_rx_err(struct ql_adapter *qdev, u8 rx_err)
+static void ql_categorize_rx_err(struct ql_adapter *qdev, u8 rx_err,
+ struct rx_ring *rx_ring)
{
struct nic_stats *stats = &qdev->nic_stats;
stats->rx_err_count++;
+ rx_ring->rx_errors++;
switch (rx_err & IB_MAC_IOCB_RSP_ERR_MASK) {
case IB_MAC_IOCB_RSP_ERR_CODE_ERR:
struct bq_desc *lbq_desc = ql_get_curr_lchunk(qdev, rx_ring);
struct napi_struct *napi = &rx_ring->napi;
+ /* Frame error, so drop the packet. */
+ if (ib_mac_rsp->flags2 & IB_MAC_IOCB_RSP_ERR_MASK) {
+ ql_categorize_rx_err(qdev, ib_mac_rsp->flags2, rx_ring);
+ put_page(lbq_desc->p.pg_chunk.page);
+ return;
+ }
napi->dev = qdev->ndev;
skb = napi_get_frags(napi);
addr = lbq_desc->p.pg_chunk.va;
prefetch(addr);
+ /* Frame error, so drop the packet. */
+ if (ib_mac_rsp->flags2 & IB_MAC_IOCB_RSP_ERR_MASK) {
+ ql_categorize_rx_err(qdev, ib_mac_rsp->flags2, rx_ring);
+ goto err_out;
+ }
+
/* The max framesize filter on this chip is set higher than
* MTU since FCoE uses 2k frames.
*/
memcpy(skb_put(new_skb, length), skb->data, length);
skb = new_skb;
+ /* Frame error, so drop the packet. */
+ if (ib_mac_rsp->flags2 & IB_MAC_IOCB_RSP_ERR_MASK) {
+ ql_categorize_rx_err(qdev, ib_mac_rsp->flags2, rx_ring);
+ dev_kfree_skb_any(skb);
+ return;
+ }
+
/* loopback self test for ethtool */
if (test_bit(QL_SELFTEST, &qdev->flags)) {
ql_check_lb_frame(qdev, skb);
return;
}
+ /* Frame error, so drop the packet. */
+ if (ib_mac_rsp->flags2 & IB_MAC_IOCB_RSP_ERR_MASK) {
+ ql_categorize_rx_err(qdev, ib_mac_rsp->flags2, rx_ring);
+ dev_kfree_skb_any(skb);
+ return;
+ }
+
/* The max framesize filter on this chip is set higher than
* MTU since FCoE uses 2k frames.
*/
QL_DUMP_IB_MAC_RSP(ib_mac_rsp);
- /* Frame error, so drop the packet. */
- if (ib_mac_rsp->flags2 & IB_MAC_IOCB_RSP_ERR_MASK) {
- ql_categorize_rx_err(qdev, ib_mac_rsp->flags2);
- return (unsigned long)length;
- }
-
if (ib_mac_rsp->flags4 & IB_MAC_IOCB_RSP_HV) {
/* The data and headers are split into
* separate buffers.
}
}
+static void rtl_speed_down(struct rtl8169_private *tp)
+{
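+ /* Advertise only modes the link partner reported (MII_LPA) so the
+ * link renegotiates down to the lowest common speed for WoL.
+ */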
+ u32 adv;
+ int lpa;
+
+ rtl_writephy(tp, 0x1f, 0x0000);
+ lpa = rtl_readphy(tp, MII_LPA);
+
+ if (lpa & (LPA_10HALF | LPA_10FULL))
+ adv = ADVERTISED_10baseT_Half | ADVERTISED_10baseT_Full;
+ else if (lpa & (LPA_100HALF | LPA_100FULL))
+ adv = ADVERTISED_10baseT_Half | ADVERTISED_10baseT_Full |
+ ADVERTISED_100baseT_Half | ADVERTISED_100baseT_Full;
+ else
+ adv = ADVERTISED_10baseT_Half | ADVERTISED_10baseT_Full |
+ ADVERTISED_100baseT_Half | ADVERTISED_100baseT_Full |
+ (tp->mii.supports_gmii ?
+ ADVERTISED_1000baseT_Half |
+ ADVERTISED_1000baseT_Full : 0);
+
+ rtl8169_set_speed(tp->dev, AUTONEG_ENABLE, SPEED_1000, DUPLEX_FULL,
+ adv);
+}
+
static void rtl_wol_suspend_quirk(struct rtl8169_private *tp)
{
void __iomem *ioaddr = tp->mmio_addr;
if (!(__rtl8169_get_wol(tp) & WAKE_ANY))
return false;
- rtl_writephy(tp, 0x1f, 0x0000);
- rtl_writephy(tp, MII_BMCR, 0x0000);
-
+ rtl_speed_down(tp);
rtl_wol_suspend_quirk(tp);
return true;
if (felic_stat & ECSR_LCHNG) {
/* Link Changed */
if (mdp->cd->no_psr || mdp->no_ether_link) {
- if (mdp->link == PHY_DOWN)
- link_stat = 0;
- else
- link_stat = PHY_ST_LINK;
+ goto ignore_link;
} else {
link_stat = (sh_eth_read(ndev, PSR));
if (mdp->ether_link_active_low)
}
}
+ignore_link:
if (intr_status & EESR_TWB) {
/* Write back end, unused write back interrupt */
if (intr_status & EESR_TABT) /* Transmit Abort int */
struct sh_eth_private *mdp = netdev_priv(ndev);
struct sh_eth_cpu_data *cd = mdp->cd;
irqreturn_t ret = IRQ_NONE;
- u32 intr_status = 0;
+ unsigned long intr_status;
spin_lock(&mdp->lock);
- /* Get interrpt stat */
+ /* Get interrupt status */
intr_status = sh_eth_read(ndev, EESR);
+ /* Mask it with the interrupt mask, forcing the ECI interrupt to stay
+ * enabled, since it is the one that comes through regardless of the
+ * mask, and we need to fully handle it in sh_eth_error() in order to
+ * quench it, as it doesn't get cleared by just writing 1 to the ECI bit.
+ */
+ intr_status &= sh_eth_read(ndev, EESIPR) | DMAC_M_ECI;
/* Clear interrupt */
if (intr_status & (EESR_FRC | EESR_RMAF | EESR_RRF |
EESR_RTLF | EESR_RTSF | EESR_PRE | EESR_CERF |
struct phy_device *phydev = mdp->phydev;
int new_state = 0;
- if (phydev->link != PHY_DOWN) {
+ if (phydev->link) {
if (phydev->duplex != mdp->duplex) {
new_state = 1;
mdp->duplex = phydev->duplex;
if (mdp->cd->set_rate)
mdp->cd->set_rate(ndev);
}
- if (mdp->link == PHY_DOWN) {
+ if (!mdp->link) {
sh_eth_write(ndev,
(sh_eth_read(ndev, ECMR) & ~ECMR_TXF), ECMR);
new_state = 1;
mdp->link = phydev->link;
+ if (mdp->cd->no_psr || mdp->no_ether_link)
+ sh_eth_rcv_snd_enable(ndev);
}
} else if (mdp->link) {
new_state = 1;
- mdp->link = PHY_DOWN;
+ mdp->link = 0;
mdp->speed = 0;
mdp->duplex = -1;
+ if (mdp->cd->no_psr || mdp->no_ether_link)
+ sh_eth_rcv_snd_disable(ndev);
}
if (new_state && netif_msg_link(mdp))
snprintf(phy_id, sizeof(phy_id), PHY_ID_FMT,
mdp->mii_bus->id , mdp->phy_id);
- mdp->link = PHY_DOWN;
+ mdp->link = 0;
mdp->speed = 0;
mdp->duplex = -1;
/* MDIO bus release function */
static int sh_mdio_release(struct net_device *ndev)
{
+ struct sh_eth_private *mdp = netdev_priv(ndev);
struct mii_bus *bus = dev_get_drvdata(&ndev->dev);
/* unregister mdio bus */
/* free bitbang info */
free_mdio_bitbang(bus);
+ /* free bitbang memory */
+ kfree(mdp->bitbang);
+
return 0;
}
bitbang->ctrl.ops = &bb_ops;
/* MII controller setting */
+ mdp->bitbang = bitbang;
mdp->mii_bus = alloc_mdio_bitbang(&bitbang->ctrl);
if (!mdp->mii_bus) {
ret = -ENOMEM;
}
mdp->tsu_addr = ioremap(rtsu->start,
resource_size(rtsu));
+ if (mdp->tsu_addr == NULL) {
+ ret = -ENOMEM;
+ dev_err(&pdev->dev, "TSU ioremap failed.\n");
+ goto out_release;
+ }
mdp->port = devno % 2;
ndev->features = NETIF_F_HW_VLAN_FILTER;
}
const u16 *reg_offset;
void __iomem *addr;
void __iomem *tsu_addr;
+ struct bb_info *bitbang;
u32 num_rx_ring;
u32 num_tx_ring;
dma_addr_t rx_desc_dma;
u32 phy_id; /* PHY ID */
struct mii_bus *mii_bus; /* MDIO bus control */
struct phy_device *phydev; /* PHY device control */
- enum phy_state link;
+ int link;
phy_interface_t phy_interface;
int msg_enable;
int speed;
{
writel(MMC_DEFAULT_MASK, ioaddr + MMC_RX_INTR_MASK);
writel(MMC_DEFAULT_MASK, ioaddr + MMC_TX_INTR_MASK);
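+ /* also mask the RX IPC (checksum offload) counter interrupts */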
+ writel(MMC_DEFAULT_MASK, ioaddr + MMC_RX_IPC_INTR_MASK);
}
/* This reads the MAC core counters (if actually supported).
* queue is stopped then start the queue as we have free desc for tx
*/
if (unlikely(netif_queue_stopped(ndev)))
- netif_start_queue(ndev);
+ netif_wake_queue(ndev);
cpts_tx_timestamp(priv->cpts, skb);
priv->stats.tx_packets++;
priv->stats.tx_bytes += len;
struct platform_device *mdio;
parp = of_get_property(slave_node, "phy_id", &lenp);
- if ((parp == NULL) && (lenp != (sizeof(void *) * 2))) {
+ if ((parp == NULL) || (lenp != (sizeof(void *) * 2))) {
pr_err("Missing slave[%d] phy_id property\n", i);
ret = -EINVAL;
goto error_ret;
memcpy(slave_data->mac_addr, mac_addr, ETH_ALEN);
if (data->dual_emac) {
- if (of_property_read_u32(node, "dual_emac_res_vlan",
+ if (of_property_read_u32(slave_node, "dual_emac_res_vlan",
&prop)) {
pr_err("Missing dual_emac_res_vlan in DT.\n");
slave_data->dual_emac_res_vlan = i+1;
* queue is stopped then start the queue as we have free desc for tx
*/
if (unlikely(netif_queue_stopped(ndev)))
- netif_start_queue(ndev);
+ netif_wake_queue(ndev);
ndev->stats.tx_packets++;
ndev->stats.tx_bytes += len;
dev_kfree_skb_any(skb);
packet->trans_id;
/* Notify the layer above us */
- nvsc_packet->completion.send.send_completion(
- nvsc_packet->completion.send.send_completion_ctx);
+ if (nvsc_packet)
+ nvsc_packet->completion.send.send_completion(
+ nvsc_packet->completion.send.
+ send_completion_ctx);
num_outstanding_sends =
atomic_dec_return(&net_device->num_outstanding_sends);
int ret = 0;
struct nvsp_message sendMessage;
struct net_device *ndev;
+ u64 req_id;
net_device = get_outbound_net_device(device);
if (!net_device)
0xFFFFFFFF;
sendMessage.msg.v1_msg.send_rndis_pkt.send_buf_section_size = 0;
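+ /* Use the packet pointer as the request id only when a completion
+ * callback is set; a zero id lets the completion path skip the lookup.
+ */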
+ if (packet->completion.send.send_completion)
+ req_id = (u64)packet;
+ else
+ req_id = 0;
+
if (packet->page_buf_cnt) {
ret = vmbus_sendpacket_pagebuffer(device->channel,
packet->page_buf,
packet->page_buf_cnt,
&sendMessage,
sizeof(struct nvsp_message),
- (unsigned long)packet);
+ req_id);
} else {
ret = vmbus_sendpacket(device->channel, &sendMessage,
sizeof(struct nvsp_message),
- (unsigned long)packet,
+ req_id,
VM_PKT_DATA_INBAND,
VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
-
}
if (ret == 0) {
if (status == 1) {
netif_carrier_on(net);
- netif_wake_queue(net);
ndev_ctx = netdev_priv(net);
schedule_delayed_work(&ndev_ctx->dwork, 0);
schedule_delayed_work(&ndev_ctx->dwork, msecs_to_jiffies(20));
} else {
netif_carrier_off(net);
- netif_tx_disable(net);
}
}
static void rndis_filter_send_completion(void *ctx);
-static void rndis_filter_send_request_completion(void *ctx);
-
-
static struct rndis_device *get_rndis_device(void)
{
packet->page_buf[0].len;
}
- packet->completion.send.send_completion_ctx = req;/* packet; */
- packet->completion.send.send_completion =
- rndis_filter_send_request_completion;
- packet->completion.send.send_completion_tid = (unsigned long)dev;
+ packet->completion.send.send_completion = NULL;
ret = netvsc_send(dev->net_dev->dev, packet);
return ret;
/* Pass it back to the original handler */
filter_pkt->completion(filter_pkt->completion_ctx);
}
-
-
-static void rndis_filter_send_request_completion(void *ctx)
-{
- /* Noop */
-}
if (tun->flags & TUN_TAP_MQ &&
(tun->numqueues + tun->numdisabled > 1))
- return err;
+ return -EBUSY;
}
else {
char *name;
goto error;
if (skb) {
- if (skb->len <= sizeof(ETH_HLEN))
+ if (skb->len <= ETH_HLEN)
goto error;
/* mapping VLANs to MBIM sessions:
#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/ethtool.h>
+#include <linux/etherdevice.h>
#include <linux/mii.h>
#include <linux/usb.h>
#include <linux/usb/cdc.h>
struct usb_interface *data;
};
+/* default ethernet address used by the modem */
+static const u8 default_modem_addr[ETH_ALEN] = {0x02, 0x50, 0xf3};
+
+/* Make up an ethernet header if the packet doesn't have one.
+ *
+ * A firmware bug common among several devices causes them to send raw
+ * IP packets under some circumstances. There is no way for the
+ * driver/host to know when this will happen. And even when the bug
+ * hits, some packets will still arrive with an intact header.
+ *
+ * The supported devices are only capable of sending IPv4, IPv6 and
+ * ARP packets on a point-to-point link. Any packet with an ethernet
+ * header will have either our address or a broadcast/multicast
+ * address as destination. ARP packets will always have a header.
+ *
+ * This means that this function will reliably add the appropriate
+ * header iff necessary, provided our hardware address does not start
+ * with 4 or 6.
+ *
+ * Another common firmware bug results in all packets being addressed
+ * to 00:a0:c6:00:00:00 despite the host address being different.
+ * This function will also fixup such packets.
+ */
+static int qmi_wwan_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
+{
+ __be16 proto;
+
+ /* usbnet rx_complete guarantees that skb->len is at least
+ * hard_header_len, so we can inspect the dest address without
+ * checking skb->len
+ */
+ switch (skb->data[0] & 0xf0) {
+ case 0x40:
+ proto = htons(ETH_P_IP);
+ break;
+ case 0x60:
+ proto = htons(ETH_P_IPV6);
+ break;
+ case 0x00:
+ if (is_multicast_ether_addr(skb->data))
+ return 1;
+ /* possibly bogus destination - rewrite just in case */
+ skb_reset_mac_header(skb);
+ goto fix_dest;
+ default:
+ /* pass along other packets without modifications */
+ return 1;
+ }
+ if (skb_headroom(skb) < ETH_HLEN)
+ return 0;
+ skb_push(skb, ETH_HLEN);
+ skb_reset_mac_header(skb);
+ eth_hdr(skb)->h_proto = proto;
+ memset(eth_hdr(skb)->h_source, 0, ETH_ALEN);
+fix_dest:
+ memcpy(eth_hdr(skb)->h_dest, dev->net->dev_addr, ETH_ALEN);
+ return 1;
+}
+
+/* very simplistic detection of IPv4 or IPv6 headers */
+static bool possibly_iphdr(const char *data)
+{
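+ /* the IPv4 (0x40) and IPv6 (0x60) version nibbles both equal 0x40 under the 0xd0 mask */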
+ return (data[0] & 0xd0) == 0x40;
+}
+
+/* disallow addresses which may be confused with IP headers */
+static int qmi_wwan_mac_addr(struct net_device *dev, void *p)
+{
+ int ret;
+ struct sockaddr *addr = p;
+
+ ret = eth_prepare_mac_addr_change(dev, p);
+ if (ret < 0)
+ return ret;
+ if (possibly_iphdr(addr->sa_data))
+ return -EADDRNOTAVAIL;
+ eth_commit_mac_addr_change(dev, p);
+ return 0;
+}
+
+static const struct net_device_ops qmi_wwan_netdev_ops = {
+ .ndo_open = usbnet_open,
+ .ndo_stop = usbnet_stop,
+ .ndo_start_xmit = usbnet_start_xmit,
+ .ndo_tx_timeout = usbnet_tx_timeout,
+ .ndo_change_mtu = usbnet_change_mtu,
+ .ndo_set_mac_address = qmi_wwan_mac_addr,
+ .ndo_validate_addr = eth_validate_addr,
+};
+
/* using a counter to merge subdriver requests with our own into a combined state */
static int qmi_wwan_manage_power(struct usbnet *dev, int on)
{
usb_driver_release_interface(driver, info->data);
}
+ /* Never use the same address on both ends of the link, even
+ * if the buggy firmware told us to.
+ */
+ if (!compare_ether_addr(dev->net->dev_addr, default_modem_addr))
+ eth_hw_addr_random(dev->net);
+
+ /* make MAC addr easily distinguishable from an IP header */
+ if (possibly_iphdr(dev->net->dev_addr)) {
+ dev->net->dev_addr[0] |= 0x02; /* set local assignment bit */
+ dev->net->dev_addr[0] &= 0xbf; /* clear "IP" bit */
+ }
+ dev->net->netdev_ops = &qmi_wwan_netdev_ops;
err:
return status;
}
.bind = qmi_wwan_bind,
.unbind = qmi_wwan_unbind,
.manage_power = qmi_wwan_manage_power,
+ .rx_fixup = qmi_wwan_rx_fixup,
};
#define HUAWEI_VENDOR_ID 0x12D1
static int smsc75xx_change_mtu(struct net_device *netdev, int new_mtu)
{
struct usbnet *dev = netdev_priv(netdev);
+ int ret;
+
+ if (new_mtu > MAX_SINGLE_PACKET_SIZE)
+ return -EINVAL;
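+ /* the hardware frame length filter counts the Ethernet header, so
+ * add ETH_HLEN on top of the MTU.
+ */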
- int ret = smsc75xx_set_rx_max_frame_length(dev, new_mtu);
+ ret = smsc75xx_set_rx_max_frame_length(dev, new_mtu + ETH_HLEN);
if (ret < 0) {
netdev_warn(dev->net, "Failed to set mac rx frame length\n");
return ret;
netif_dbg(dev, ifup, dev->net, "FCT_TX_CTL set to 0x%08x\n", buf);
- ret = smsc75xx_set_rx_max_frame_length(dev, 1514);
+ ret = smsc75xx_set_rx_max_frame_length(dev, dev->net->mtu + ETH_HLEN);
if (ret < 0) {
netdev_warn(dev->net, "Failed to set max rx frame length\n");
return ret;
else if (rx_cmd_a & (RX_CMD_A_LONG | RX_CMD_A_RUNT))
dev->net->stats.rx_frame_errors++;
} else {
- /* ETH_FRAME_LEN + 4(CRC) + 2(COE) + 4(Vlan) */
- if (unlikely(size > (ETH_FRAME_LEN + 12))) {
+ /* MAX_SINGLE_PACKET_SIZE + 4(CRC) + 2(COE) + 4(Vlan) */
+ if (unlikely(size > (MAX_SINGLE_PACKET_SIZE + ETH_HLEN + 12))) {
netif_dbg(dev, rx_err, dev->net,
"size err rx_cmd_a=0x%08x\n",
rx_cmd_a);
AR_PHY_AGC_CONTROL_FLTR_CAL |
AR_PHY_AGC_CONTROL_PKDET_CAL;
+ /* Use chip chainmask only for calibration */
ar9003_hw_set_chain_masks(ah, ah->caps.rx_chainmask, ah->caps.tx_chainmask);
if (rtt) {
ar9003_hw_rtt_disable(ah);
}
+ /* Revert chainmask to runtime parameters */
+ ar9003_hw_set_chain_masks(ah, ah->rxchainmask, ah->txchainmask);
+
/* Initialize list pointers */
ah->cal_list = ah->cal_list_last = ah->cal_list_curr = NULL;
{0x00008258, 0x00000000},
{0x0000825c, 0x40000000},
{0x00008260, 0x00080922},
- {0x00008264, 0x9bc00010},
+ {0x00008264, 0x9d400010},
{0x00008268, 0xffffffff},
{0x0000826c, 0x0000ffff},
{0x00008270, 0x00000000},
u32 sz, i;
struct channel_detector *cd;
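+ /* called from softirq context, so the allocation must not sleep */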
- cd = kmalloc(sizeof(*cd), GFP_KERNEL);
+ cd = kmalloc(sizeof(*cd), GFP_ATOMIC);
if (cd == NULL)
goto fail;
INIT_LIST_HEAD(&cd->head);
cd->freq = freq;
sz = sizeof(cd->detectors) * dpd->num_radar_types;
- cd->detectors = kzalloc(sz, GFP_KERNEL);
+ cd->detectors = kzalloc(sz, GFP_ATOMIC);
if (cd->detectors == NULL)
goto fail;
{
struct pulse_elem *p = pool_get_pulse_elem();
if (p == NULL) {
- p = kmalloc(sizeof(*p), GFP_KERNEL);
+ p = kmalloc(sizeof(*p), GFP_ATOMIC);
if (p == NULL) {
DFS_POOL_STAT_INC(pulse_alloc_error);
return false;
ps.deadline_ts = ps.first_ts + ps.dur;
new_ps = pool_get_pseq_elem();
if (new_ps == NULL) {
- new_ps = kmalloc(sizeof(*new_ps), GFP_KERNEL);
+ new_ps = kmalloc(sizeof(*new_ps), GFP_ATOMIC);
if (new_ps == NULL) {
DFS_POOL_STAT_INC(pseq_alloc_error);
return false;
* required version.
*/
if (priv->fw_version_major != MAJOR_VERSION_REQ ||
- priv->fw_version_minor != MINOR_VERSION_REQ) {
+ priv->fw_version_minor < MINOR_VERSION_REQ) {
dev_err(priv->dev, "ath9k_htc: Please upgrade to FW version %d.%d\n",
MAJOR_VERSION_REQ, MINOR_VERSION_REQ);
return -EINVAL;
int i;
bool needreset = false;
- for (i = 0; i < ATH9K_NUM_TX_QUEUES; i++)
- if (ATH_TXQ_SETUP(sc, i)) {
- txq = &sc->tx.txq[i];
- ath_txq_lock(sc, txq);
- if (txq->axq_depth) {
- if (txq->axq_tx_inprogress) {
- needreset = true;
- ath_txq_unlock(sc, txq);
- break;
- } else {
- txq->axq_tx_inprogress = true;
- }
+ for (i = 0; i < IEEE80211_NUM_ACS; i++) {
+ txq = sc->tx.txq_map[i];
+
+ ath_txq_lock(sc, txq);
+ if (txq->axq_depth) {
+ if (txq->axq_tx_inprogress) {
+ needreset = true;
+ ath_txq_unlock(sc, txq);
+ break;
+ } else {
+ txq->axq_tx_inprogress = true;
}
- ath_txq_unlock_complete(sc, txq);
}
+ ath_txq_unlock_complete(sc, txq);
+ }
if (needreset) {
ath_dbg(ath9k_hw_common(sc->sc_ah), RESET,
{
struct ath_softc *sc = (struct ath_softc *)data;
- ieee80211_queue_work(sc->hw, &sc->hw_check_work);
+ if (!test_bit(SC_OP_INVALID, &sc->sc_flags))
+ ieee80211_queue_work(sc->hw, &sc->hw_check_work);
}
/*
if (r) {
ath_err(common,
"Unable to reset channel, reset status %d\n", r);
+
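+ /* re-enable interrupts and schedule a chip reset instead of leaving the hardware wedged */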
+ ath9k_hw_enable_interrupts(ah);
+ ath9k_queue_reset(sc, RESET_TYPE_BB_HANG);
+
goto out;
}
const struct b43_dma_ops *ops;
struct b43_dmaring *ring;
struct b43_dmadesc_meta *meta;
+ static const struct b43_txstatus fake; /* filled with 0 */
+ const struct b43_txstatus *txstat;
int slot, firstused;
bool frame_succeed;
+ int skip;
+ static u8 err_out1, err_out2;
ring = parse_cookie(dev, status->cookie, &slot);
if (unlikely(!ring))
firstused = ring->current_slot - ring->used_slots + 1;
if (firstused < 0)
firstused = ring->nr_slots + firstused;
+
+ skip = 0;
if (unlikely(slot != firstused)) {
/* This possibly is a firmware bug and will result in
- * malfunction, memory leaks and/or stall of DMA functionality. */
- b43dbg(dev->wl, "Out of order TX status report on DMA ring %d. "
- "Expected %d, but got %d\n",
- ring->index, firstused, slot);
- return;
+ * malfunction, memory leaks and/or stall of DMA functionality.
+ */
+ if (slot == next_slot(ring, next_slot(ring, firstused))) {
+ /* If a single header/data pair was missed, skip over
+ * the first two slots in an attempt to recover.
+ */
+ slot = firstused;
+ skip = 2;
+ if (!err_out1) {
+ /* Report the error once. */
+ b43dbg(dev->wl,
+ "Skip on DMA ring %d slot %d.\n",
+ ring->index, slot);
+ err_out1 = 1;
+ }
+ } else {
+ /* More than a single header/data pair were missed.
+ * Report this error once.
+ */
+ if (!err_out2)
+ b43dbg(dev->wl,
+ "Out of order TX status report on DMA ring %d. Expected %d, but got %d\n",
+ ring->index, firstused, slot);
+ err_out2 = 1;
+ return;
+ }
}
ops = ring->ops;
slot, firstused, ring->index);
break;
}
+
if (meta->skb) {
struct b43_private_tx_info *priv_info =
- b43_get_priv_tx_info(IEEE80211_SKB_CB(meta->skb));
+ b43_get_priv_tx_info(IEEE80211_SKB_CB(meta->skb));
- unmap_descbuffer(ring, meta->dmaaddr, meta->skb->len, 1);
+ unmap_descbuffer(ring, meta->dmaaddr,
+ meta->skb->len, 1);
kfree(priv_info->bouncebuffer);
priv_info->bouncebuffer = NULL;
} else {
struct ieee80211_tx_info *info;
if (unlikely(!meta->skb)) {
- /* This is a scatter-gather fragment of a frame, so
- * the skb pointer must not be NULL. */
+ /* This is a scatter-gather fragment of a frame,
+ * so the skb pointer must not be NULL.
+ */
b43dbg(dev->wl, "TX status unexpected NULL skb "
"at slot %d (first=%d) on ring %d\n",
slot, firstused, ring->index);
/*
* Call back to inform the ieee80211 subsystem about
- * the status of the transmission.
+ * the status of the transmission. When skipping over
+ * a missed TX status report, use a status structure
+ * filled with zeros to indicate that the frame was not
+ * sent (frame_count 0) and not acknowledged.
*/
- frame_succeed = b43_fill_txstatus_report(dev, info, status);
+ if (unlikely(skip))
+ txstat = &fake;
+ else
+ txstat = status;
+
+ frame_succeed = b43_fill_txstatus_report(dev, info,
+ txstat);
#ifdef CONFIG_B43_DEBUG
if (frame_succeed)
ring->nr_succeed_tx_packets++;
/* Everything unmapped and free'd. So it's not used anymore. */
ring->used_slots--;
- if (meta->is_last_fragment) {
+ if (meta->is_last_fragment && !skip) {
/* This is the last scatter-gather
* fragment of the frame. We are done. */
break;
}
slot = next_slot(ring, slot);
+ if (skip > 0)
+ --skip;
}
if (ring->stopped) {
B43_WARN_ON(free_slots(ring) < TX_SLOTS_PER_FRAME);
u16 clip_off[2] = { 0xFFFF, 0xFFFF };
u8 vcm_final = 0;
- s8 offset[4];
+ s32 offset[4];
s32 results[8][4] = { };
s32 results_min[4] = { };
s32 poll_results[4] = { };
}
for (i = 0; i < 4; i += 2) {
s32 curr;
- s32 mind = 40;
+ s32 mind = 0x100000;
s32 minpoll = 249;
u8 minvcm = 0;
if (2 * core != i)
u8 regs_save_radio[2];
u16 regs_save_phy[2];
- s8 offset[4];
+ s32 offset[4];
u8 core;
u8 rail;
}
for (i = 0; i < 4; i++) {
- s32 mind = 40;
+ s32 mind = 0x100000;
u8 minvcm = 0;
s32 minpoll = 249;
s32 curr;
#endif
#ifdef CONFIG_B43_SSB
case B43_BUS_SSB:
- /* FIXME */
+ ssb_pmu_spuravoid_pllupdate(&dev->dev->sdev->bus->chipco,
+ avoid);
break;
#endif
}
goto err;
}
- /* External image takes precedence if specified */
if (brcmf_sdbrcm_download_code_file(bus)) {
brcmf_err("dongle image file download failed\n");
goto err;
}
- /* External nvram takes precedence if specified */
- if (brcmf_sdbrcm_download_nvram(bus))
+ if (brcmf_sdbrcm_download_nvram(bus)) {
brcmf_err("dongle nvram file download failed\n");
+ goto err;
+ }
/* Take arm out of reset */
if (brcmf_sdbrcm_download_state(bus, false)) {
brcmf_add_keyext(struct wiphy *wiphy, struct net_device *ndev,
u8 key_idx, const u8 *mac_addr, struct key_params *params)
{
+ struct brcmf_if *ifp = netdev_priv(ndev);
struct brcmf_wsec_key key;
s32 err = 0;
+ u8 keybuf[8];
memset(&key, 0, sizeof(key));
key.index = (u32) key_idx;
brcmf_dbg(CONN, "Setting the key index %d\n", key.index);
memcpy(key.data, params->key, key.len);
- if (params->cipher == WLAN_CIPHER_SUITE_TKIP) {
- u8 keybuf[8];
+ if ((ifp->vif->mode != WL_MODE_AP) &&
+ (params->cipher == WLAN_CIPHER_SUITE_TKIP)) {
+ brcmf_dbg(CONN, "Swapping RX/TX MIC key\n");
memcpy(keybuf, &key.data[24], sizeof(keybuf));
memcpy(&key.data[24], &key.data[16], sizeof(keybuf));
memcpy(&key.data[16], keybuf, sizeof(keybuf));
break;
case WLAN_CIPHER_SUITE_TKIP:
if (ifp->vif->mode != WL_MODE_AP) {
- brcmf_dbg(CONN, "Swapping key\n");
+ brcmf_dbg(CONN, "Swapping RX/TX MIC key\n");
memcpy(keybuf, &key.data[24], sizeof(keybuf));
memcpy(&key.data[24], &key.data[16], sizeof(keybuf));
memcpy(&key.data[16], keybuf, sizeof(keybuf));
err = -EAGAIN;
goto done;
}
- switch (wsec & ~SES_OW_ENABLED) {
- case WEP_ENABLED:
+ if (wsec & WEP_ENABLED) {
sec = &profile->sec;
if (sec->cipher_pairwise & WLAN_CIPHER_SUITE_WEP40) {
params.cipher = WLAN_CIPHER_SUITE_WEP40;
params.cipher = WLAN_CIPHER_SUITE_WEP104;
brcmf_dbg(CONN, "WLAN_CIPHER_SUITE_WEP104\n");
}
- break;
- case TKIP_ENABLED:
+ } else if (wsec & TKIP_ENABLED) {
params.cipher = WLAN_CIPHER_SUITE_TKIP;
brcmf_dbg(CONN, "WLAN_CIPHER_SUITE_TKIP\n");
- break;
- case AES_ENABLED:
+ } else if (wsec & AES_ENABLED) {
params.cipher = WLAN_CIPHER_SUITE_AES_CMAC;
brcmf_dbg(CONN, "WLAN_CIPHER_SUITE_AES_CMAC\n");
- break;
- default:
+ } else {
brcmf_err("Invalid algo (0x%x)\n", wsec);
err = -EINVAL;
goto done;
static int brcmf_cfg80211_stop_ap(struct wiphy *wiphy, struct net_device *ndev)
{
struct brcmf_if *ifp = netdev_priv(ndev);
- s32 err = -EPERM;
+ s32 err;
struct brcmf_fil_bss_enable_le bss_enable;
+ struct brcmf_join_params join_params;
brcmf_dbg(TRACE, "Enter\n");
/* Due to most likely deauths outstanding we sleep */
/* first to make sure they get processed by fw. */
msleep(400);
- err = brcmf_fil_cmd_int_set(ifp, BRCMF_C_SET_AP, 0);
- if (err < 0) {
- brcmf_err("setting AP mode failed %d\n", err);
- goto exit;
- }
+
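+ /* a null SSID join request brings the BSS down before AP and INFRA mode are cleared */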
+ memset(&join_params, 0, sizeof(join_params));
+ err = brcmf_fil_cmd_data_set(ifp, BRCMF_C_SET_SSID,
+ &join_params, sizeof(join_params));
+ if (err < 0)
+ brcmf_err("SET SSID error (%d)\n", err);
err = brcmf_fil_cmd_int_set(ifp, BRCMF_C_UP, 0);
- if (err < 0) {
+ if (err < 0)
brcmf_err("BRCMF_C_UP error %d\n", err);
- goto exit;
- }
+ err = brcmf_fil_cmd_int_set(ifp, BRCMF_C_SET_AP, 0);
+ if (err < 0)
+ brcmf_err("setting AP mode failed %d\n", err);
+ err = brcmf_fil_cmd_int_set(ifp, BRCMF_C_SET_INFRA, 0);
+ if (err < 0)
+ brcmf_err("setting INFRA mode failed %d\n", err);
} else {
bss_enable.bsscfg_idx = cpu_to_le32(ifp->bssidx);
bss_enable.enable = cpu_to_le32(0);
set_bit(BRCMF_VIF_STATUS_AP_CREATING, &ifp->vif->sme_state);
clear_bit(BRCMF_VIF_STATUS_AP_CREATED, &ifp->vif->sme_state);
-exit:
return err;
}
BIT(NL80211_IFTYPE_ADHOC) |
BIT(NL80211_IFTYPE_AP)
},
- {
- .max = 1,
- .types = BIT(NL80211_IFTYPE_P2P_DEVICE)
- },
{
.max = 1,
.types = BIT(NL80211_IFTYPE_P2P_CLIENT) |
BIT(NL80211_IFTYPE_ADHOC) |
BIT(NL80211_IFTYPE_AP) |
BIT(NL80211_IFTYPE_P2P_CLIENT) |
- BIT(NL80211_IFTYPE_P2P_GO) |
- BIT(NL80211_IFTYPE_P2P_DEVICE);
+ BIT(NL80211_IFTYPE_P2P_GO);
wiphy->iface_combinations = brcmf_iface_combos;
wiphy->n_iface_combinations = ARRAY_SIZE(brcmf_iface_combos);
wiphy->bands[IEEE80211_BAND_2GHZ] = &__wl_band_2ghz;
}
}
+/**
+ * This function frees the WL per-device resources.
+ *
+ * This function frees resources owned by the WL device pointed to
+ * by the wl parameter.
+ *
+ * precondition: can be called both locked and unlocked
+ *
+ */
+static void brcms_free(struct brcms_info *wl)
+{
+ struct brcms_timer *t, *next;
+
+ /* free ucode data */
+ if (wl->fw.fw_cnt)
+ brcms_ucode_data_free(&wl->ucode);
+ if (wl->irq)
+ free_irq(wl->irq, wl);
+
+ /* kill dpc */
+ tasklet_kill(&wl->tasklet);
+
+ if (wl->pub) {
+ brcms_debugfs_detach(wl->pub);
+ brcms_c_module_unregister(wl->pub, "linux", wl);
+ }
+
+ /* free common resources */
+ if (wl->wlc) {
+ brcms_c_detach(wl->wlc);
+ wl->wlc = NULL;
+ wl->pub = NULL;
+ }
+
+ /* virtual interface deletion is deferred so we cannot spinwait */
+
+ /* wait for all pending callbacks to complete */
+ while (atomic_read(&wl->callbacks) > 0)
+ schedule();
+
+ /* free timers */
+ for (t = wl->timers; t; t = next) {
+ next = t->next;
+#ifdef DEBUG
+ kfree(t->name);
+#endif
+ kfree(t);
+ }
+}
+
+/*
+ * called both from the kernel and from this kernel module (error flow on
+ * attach)
+ * precondition: perimeter lock is not acquired.
+ */
+static void brcms_remove(struct bcma_device *pdev)
+{
+ struct ieee80211_hw *hw = bcma_get_drvdata(pdev);
+ struct brcms_info *wl = hw->priv;
+
+ if (wl->wlc) {
+ wiphy_rfkill_set_hw_state(wl->pub->ieee_hw->wiphy, false);
+ wiphy_rfkill_stop_polling(wl->pub->ieee_hw->wiphy);
+ ieee80211_unregister_hw(hw);
+ }
+
+ brcms_free(wl);
+
+ bcma_set_drvdata(pdev, NULL);
+ ieee80211_free_hw(hw);
+}
+
+/*
+ * Precondition: Since this function is called in brcms_pci_probe() context,
+ * no locking is required.
+ */
+static void brcms_release_fw(struct brcms_info *wl)
+{
+ int i;
+ for (i = 0; i < MAX_FW_IMAGES; i++) {
+ release_firmware(wl->fw.fw_bin[i]);
+ release_firmware(wl->fw.fw_hdr[i]);
+ }
+}
+
+/*
+ * Precondition: Since this function is called in brcms_pci_probe() context,
+ * no locking is required.
+ */
+static int brcms_request_fw(struct brcms_info *wl, struct bcma_device *pdev)
+{
+ int status;
+ struct device *device = &pdev->dev;
+ char fw_name[100];
+ int i;
+
+ memset(&wl->fw, 0, sizeof(struct brcms_firmware));
+ for (i = 0; i < MAX_FW_IMAGES; i++) {
+ if (brcms_firmwares[i] == NULL)
+ break;
+ sprintf(fw_name, "%s-%d.fw", brcms_firmwares[i],
+ UCODE_LOADER_API_VER);
+ status = request_firmware(&wl->fw.fw_bin[i], fw_name, device);
+ if (status) {
+ wiphy_err(wl->wiphy, "%s: fail to load firmware %s\n",
+ KBUILD_MODNAME, fw_name);
+ return status;
+ }
+ sprintf(fw_name, "%s_hdr-%d.fw", brcms_firmwares[i],
+ UCODE_LOADER_API_VER);
+ status = request_firmware(&wl->fw.fw_hdr[i], fw_name, device);
+ if (status) {
+ wiphy_err(wl->wiphy, "%s: fail to load firmware %s\n",
+ KBUILD_MODNAME, fw_name);
+ return status;
+ }
+ wl->fw.hdr_num_entries[i] =
+ wl->fw.fw_hdr[i]->size / (sizeof(struct firmware_hdr));
+ }
+ wl->fw.fw_cnt = i;
+ status = brcms_ucode_data_init(wl, &wl->ucode);
+ brcms_release_fw(wl);
+ return status;
+}
+
static void brcms_ops_tx(struct ieee80211_hw *hw,
struct ieee80211_tx_control *control,
struct sk_buff *skb)
if (!blocked)
wiphy_rfkill_stop_polling(wl->pub->ieee_hw->wiphy);
+ if (!wl->ucode.bcm43xx_bomminor) {
+ err = brcms_request_fw(wl, wl->wlc->hw->d11core);
+ if (err) {
+ brcms_remove(wl->wlc->hw->d11core);
+ return -ENOENT;
+ }
+ }
+
spin_lock_bh(&wl->lock);
/* avoid acknowledging frames before a non-monitor device is added */
wl->mute_tx = true;
wake_up(&wl->tx_flush_wq);
}
-/*
- * Precondition: Since this function is called in brcms_pci_probe() context,
- * no locking is required.
- */
-static int brcms_request_fw(struct brcms_info *wl, struct bcma_device *pdev)
-{
- int status;
- struct device *device = &pdev->dev;
- char fw_name[100];
- int i;
-
- memset(&wl->fw, 0, sizeof(struct brcms_firmware));
- for (i = 0; i < MAX_FW_IMAGES; i++) {
- if (brcms_firmwares[i] == NULL)
- break;
- sprintf(fw_name, "%s-%d.fw", brcms_firmwares[i],
- UCODE_LOADER_API_VER);
- status = request_firmware(&wl->fw.fw_bin[i], fw_name, device);
- if (status) {
- wiphy_err(wl->wiphy, "%s: fail to load firmware %s\n",
- KBUILD_MODNAME, fw_name);
- return status;
- }
- sprintf(fw_name, "%s_hdr-%d.fw", brcms_firmwares[i],
- UCODE_LOADER_API_VER);
- status = request_firmware(&wl->fw.fw_hdr[i], fw_name, device);
- if (status) {
- wiphy_err(wl->wiphy, "%s: fail to load firmware %s\n",
- KBUILD_MODNAME, fw_name);
- return status;
- }
- wl->fw.hdr_num_entries[i] =
- wl->fw.fw_hdr[i]->size / (sizeof(struct firmware_hdr));
- }
- wl->fw.fw_cnt = i;
- return brcms_ucode_data_init(wl, &wl->ucode);
-}
-
-/*
- * Precondition: Since this function is called in brcms_pci_probe() context,
- * no locking is required.
- */
-static void brcms_release_fw(struct brcms_info *wl)
-{
- int i;
- for (i = 0; i < MAX_FW_IMAGES; i++) {
- release_firmware(wl->fw.fw_bin[i]);
- release_firmware(wl->fw.fw_hdr[i]);
- }
-}
-
-/**
- * This function frees the WL per-device resources.
- *
- * This function frees resources owned by the WL device pointed to
- * by the wl parameter.
- *
- * precondition: can both be called locked and unlocked
- *
- */
-static void brcms_free(struct brcms_info *wl)
-{
- struct brcms_timer *t, *next;
-
- /* free ucode data */
- if (wl->fw.fw_cnt)
- brcms_ucode_data_free(&wl->ucode);
- if (wl->irq)
- free_irq(wl->irq, wl);
-
- /* kill dpc */
- tasklet_kill(&wl->tasklet);
-
- if (wl->pub) {
- brcms_debugfs_detach(wl->pub);
- brcms_c_module_unregister(wl->pub, "linux", wl);
- }
-
- /* free common resources */
- if (wl->wlc) {
- brcms_c_detach(wl->wlc);
- wl->wlc = NULL;
- wl->pub = NULL;
- }
-
- /* virtual interface deletion is deferred so we cannot spinwait */
-
- /* wait for all pending callbacks to complete */
- while (atomic_read(&wl->callbacks) > 0)
- schedule();
-
- /* free timers */
- for (t = wl->timers; t; t = next) {
- next = t->next;
-#ifdef DEBUG
- kfree(t->name);
-#endif
- kfree(t);
- }
-}
-
-/*
-* called from both kernel as from this kernel module (error flow on attach)
-* precondition: perimeter lock is not acquired.
-*/
-static void brcms_remove(struct bcma_device *pdev)
-{
- struct ieee80211_hw *hw = bcma_get_drvdata(pdev);
- struct brcms_info *wl = hw->priv;
-
- if (wl->wlc) {
- wiphy_rfkill_set_hw_state(wl->pub->ieee_hw->wiphy, false);
- wiphy_rfkill_stop_polling(wl->pub->ieee_hw->wiphy);
- ieee80211_unregister_hw(hw);
- }
-
- brcms_free(wl);
-
- bcma_set_drvdata(pdev, NULL);
- ieee80211_free_hw(hw);
-}
-
static irqreturn_t brcms_isr(int irq, void *dev_id)
{
struct brcms_info *wl;
spin_lock_init(&wl->lock);
spin_lock_init(&wl->isr_lock);
- /* prepare ucode */
- if (brcms_request_fw(wl, pdev) < 0) {
- wiphy_err(wl->wiphy, "%s: Failed to find firmware usually in "
- "%s\n", KBUILD_MODNAME, "/lib/firmware/brcm");
- brcms_release_fw(wl);
- brcms_remove(pdev);
- return NULL;
- }
-
/* common load-time initialization */
wl->wlc = brcms_c_attach((void *)wl, pdev, unit, false, &err);
- brcms_release_fw(wl);
if (!wl->wlc) {
wiphy_err(wl->wiphy, "%s: attach() failed with code %d\n",
KBUILD_MODNAME, err);
gain0_15 = ((biq1 & 0xf) << 12) |
((tia & 0xf) << 8) |
((lna2 & 0x3) << 6) |
- ((lna2 & 0x3) << 4) |
- ((lna1 & 0x3) << 2) |
- ((lna1 & 0x3) << 0);
+ ((lna2 &
+ 0x3) << 4) | ((lna1 & 0x3) << 2) | ((lna1 & 0x3) << 0);
mod_phy_reg(pi, 0x4b6, (0xffff << 0), gain0_15 << 0);
mod_phy_reg(pi, 0x4b7, (0xf << 0), gain16_19 << 0);
}
mod_phy_reg(pi, 0x44d, (0x1 << 0), (!trsw) << 0);
- mod_phy_reg(pi, 0x4b1, (0x3 << 11), lna1 << 11);
- mod_phy_reg(pi, 0x4e6, (0x3 << 3), lna1 << 3);
}
return (iq_est.i_pwr + iq_est.q_pwr) / nsamples;
}
-static bool wlc_lcnphy_rx_iq_cal_gain(struct brcms_phy *pi, u16 biq1_gain,
- u16 tia_gain, u16 lna2_gain)
-{
- u32 i_thresh_l, q_thresh_l;
- u32 i_thresh_h, q_thresh_h;
- struct lcnphy_iq_est iq_est_h, iq_est_l;
-
- wlc_lcnphy_set_rx_gain_by_distribution(pi, 0, 0, 0, biq1_gain, tia_gain,
- lna2_gain, 0);
-
- wlc_lcnphy_rx_gain_override_enable(pi, true);
- wlc_lcnphy_start_tx_tone(pi, 2000, (40 >> 1), 0);
- udelay(500);
- write_radio_reg(pi, RADIO_2064_REG112, 0);
- if (!wlc_lcnphy_rx_iq_est(pi, 1024, 32, &iq_est_l))
- return false;
-
- wlc_lcnphy_start_tx_tone(pi, 2000, 40, 0);
- udelay(500);
- write_radio_reg(pi, RADIO_2064_REG112, 0);
- if (!wlc_lcnphy_rx_iq_est(pi, 1024, 32, &iq_est_h))
- return false;
-
- i_thresh_l = (iq_est_l.i_pwr << 1);
- i_thresh_h = (iq_est_l.i_pwr << 2) + iq_est_l.i_pwr;
-
- q_thresh_l = (iq_est_l.q_pwr << 1);
- q_thresh_h = (iq_est_l.q_pwr << 2) + iq_est_l.q_pwr;
- if ((iq_est_h.i_pwr > i_thresh_l) &&
- (iq_est_h.i_pwr < i_thresh_h) &&
- (iq_est_h.q_pwr > q_thresh_l) &&
- (iq_est_h.q_pwr < q_thresh_h))
- return true;
-
- return false;
-}
-
static bool
wlc_lcnphy_rx_iq_cal(struct brcms_phy *pi,
const struct lcnphy_rx_iqcomp *iqcomp,
RFOverrideVal0_old, rfoverride2_old, rfoverride2val_old,
rfoverride3_old, rfoverride3val_old, rfoverride4_old,
rfoverride4val_old, afectrlovr_old, afectrlovrval_old;
- int tia_gain, lna2_gain, biq1_gain;
- bool set_gain;
+ int tia_gain;
+ u32 received_power, rx_pwr_threshold;
u16 old_sslpnCalibClkEnCtrl, old_sslpnRxFeClkEnCtrl;
u16 values_to_save[11];
s16 *ptr;
goto cal_done;
}
- WARN_ON(module != 1);
- tx_pwr_ctrl = wlc_lcnphy_get_tx_pwr_ctrl(pi);
- wlc_lcnphy_set_tx_pwr_ctrl(pi, LCNPHY_TX_PWR_CTRL_OFF);
-
- for (i = 0; i < 11; i++)
- values_to_save[i] =
- read_radio_reg(pi, rxiq_cal_rf_reg[i]);
- Core1TxControl_old = read_phy_reg(pi, 0x631);
-
- or_phy_reg(pi, 0x631, 0x0015);
-
- RFOverride0_old = read_phy_reg(pi, 0x44c);
- RFOverrideVal0_old = read_phy_reg(pi, 0x44d);
- rfoverride2_old = read_phy_reg(pi, 0x4b0);
- rfoverride2val_old = read_phy_reg(pi, 0x4b1);
- rfoverride3_old = read_phy_reg(pi, 0x4f9);
- rfoverride3val_old = read_phy_reg(pi, 0x4fa);
- rfoverride4_old = read_phy_reg(pi, 0x938);
- rfoverride4val_old = read_phy_reg(pi, 0x939);
- afectrlovr_old = read_phy_reg(pi, 0x43b);
- afectrlovrval_old = read_phy_reg(pi, 0x43c);
- old_sslpnCalibClkEnCtrl = read_phy_reg(pi, 0x6da);
- old_sslpnRxFeClkEnCtrl = read_phy_reg(pi, 0x6db);
-
- tx_gain_override_old = wlc_lcnphy_tx_gain_override_enabled(pi);
- if (tx_gain_override_old) {
- wlc_lcnphy_get_tx_gain(pi, &old_gains);
- tx_gain_index_old = pi_lcn->lcnphy_current_index;
- }
-
- wlc_lcnphy_set_tx_pwr_by_index(pi, tx_gain_idx);
+ if (module == 1) {
- mod_phy_reg(pi, 0x4f9, (0x1 << 0), 1 << 0);
- mod_phy_reg(pi, 0x4fa, (0x1 << 0), 0 << 0);
+ tx_pwr_ctrl = wlc_lcnphy_get_tx_pwr_ctrl(pi);
+ wlc_lcnphy_set_tx_pwr_ctrl(pi, LCNPHY_TX_PWR_CTRL_OFF);
- mod_phy_reg(pi, 0x43b, (0x1 << 1), 1 << 1);
- mod_phy_reg(pi, 0x43c, (0x1 << 1), 0 << 1);
+ for (i = 0; i < 11; i++)
+ values_to_save[i] =
+ read_radio_reg(pi, rxiq_cal_rf_reg[i]);
+ Core1TxControl_old = read_phy_reg(pi, 0x631);
+
+ or_phy_reg(pi, 0x631, 0x0015);
+
+ RFOverride0_old = read_phy_reg(pi, 0x44c);
+ RFOverrideVal0_old = read_phy_reg(pi, 0x44d);
+ rfoverride2_old = read_phy_reg(pi, 0x4b0);
+ rfoverride2val_old = read_phy_reg(pi, 0x4b1);
+ rfoverride3_old = read_phy_reg(pi, 0x4f9);
+ rfoverride3val_old = read_phy_reg(pi, 0x4fa);
+ rfoverride4_old = read_phy_reg(pi, 0x938);
+ rfoverride4val_old = read_phy_reg(pi, 0x939);
+ afectrlovr_old = read_phy_reg(pi, 0x43b);
+ afectrlovrval_old = read_phy_reg(pi, 0x43c);
+ old_sslpnCalibClkEnCtrl = read_phy_reg(pi, 0x6da);
+ old_sslpnRxFeClkEnCtrl = read_phy_reg(pi, 0x6db);
+
+ tx_gain_override_old = wlc_lcnphy_tx_gain_override_enabled(pi);
+ if (tx_gain_override_old) {
+ wlc_lcnphy_get_tx_gain(pi, &old_gains);
+ tx_gain_index_old = pi_lcn->lcnphy_current_index;
+ }
- write_radio_reg(pi, RADIO_2064_REG116, 0x06);
- write_radio_reg(pi, RADIO_2064_REG12C, 0x07);
- write_radio_reg(pi, RADIO_2064_REG06A, 0xd3);
- write_radio_reg(pi, RADIO_2064_REG098, 0x03);
- write_radio_reg(pi, RADIO_2064_REG00B, 0x7);
- mod_radio_reg(pi, RADIO_2064_REG113, 1 << 4, 1 << 4);
- write_radio_reg(pi, RADIO_2064_REG01D, 0x01);
- write_radio_reg(pi, RADIO_2064_REG114, 0x01);
- write_radio_reg(pi, RADIO_2064_REG02E, 0x10);
- write_radio_reg(pi, RADIO_2064_REG12A, 0x08);
-
- mod_phy_reg(pi, 0x938, (0x1 << 0), 1 << 0);
- mod_phy_reg(pi, 0x939, (0x1 << 0), 0 << 0);
- mod_phy_reg(pi, 0x938, (0x1 << 1), 1 << 1);
- mod_phy_reg(pi, 0x939, (0x1 << 1), 1 << 1);
- mod_phy_reg(pi, 0x938, (0x1 << 2), 1 << 2);
- mod_phy_reg(pi, 0x939, (0x1 << 2), 1 << 2);
- mod_phy_reg(pi, 0x938, (0x1 << 3), 1 << 3);
- mod_phy_reg(pi, 0x939, (0x1 << 3), 1 << 3);
- mod_phy_reg(pi, 0x938, (0x1 << 5), 1 << 5);
- mod_phy_reg(pi, 0x939, (0x1 << 5), 0 << 5);
+ wlc_lcnphy_set_tx_pwr_by_index(pi, tx_gain_idx);
- mod_phy_reg(pi, 0x43b, (0x1 << 0), 1 << 0);
- mod_phy_reg(pi, 0x43c, (0x1 << 0), 0 << 0);
+ mod_phy_reg(pi, 0x4f9, (0x1 << 0), 1 << 0);
+ mod_phy_reg(pi, 0x4fa, (0x1 << 0), 0 << 0);
- write_phy_reg(pi, 0x6da, 0xffff);
- or_phy_reg(pi, 0x6db, 0x3);
+ mod_phy_reg(pi, 0x43b, (0x1 << 1), 1 << 1);
+ mod_phy_reg(pi, 0x43c, (0x1 << 1), 0 << 1);
- wlc_lcnphy_set_trsw_override(pi, tx_switch, rx_switch);
- set_gain = false;
-
- lna2_gain = 3;
- while ((lna2_gain >= 0) && !set_gain) {
- tia_gain = 4;
-
- while ((tia_gain >= 0) && !set_gain) {
- biq1_gain = 6;
-
- while ((biq1_gain >= 0) && !set_gain) {
- set_gain = wlc_lcnphy_rx_iq_cal_gain(pi,
- (u16)
- biq1_gain,
- (u16)
- tia_gain,
- (u16)
- lna2_gain);
- biq1_gain -= 1;
- }
+ write_radio_reg(pi, RADIO_2064_REG116, 0x06);
+ write_radio_reg(pi, RADIO_2064_REG12C, 0x07);
+ write_radio_reg(pi, RADIO_2064_REG06A, 0xd3);
+ write_radio_reg(pi, RADIO_2064_REG098, 0x03);
+ write_radio_reg(pi, RADIO_2064_REG00B, 0x7);
+ mod_radio_reg(pi, RADIO_2064_REG113, 1 << 4, 1 << 4);
+ write_radio_reg(pi, RADIO_2064_REG01D, 0x01);
+ write_radio_reg(pi, RADIO_2064_REG114, 0x01);
+ write_radio_reg(pi, RADIO_2064_REG02E, 0x10);
+ write_radio_reg(pi, RADIO_2064_REG12A, 0x08);
+
+ mod_phy_reg(pi, 0x938, (0x1 << 0), 1 << 0);
+ mod_phy_reg(pi, 0x939, (0x1 << 0), 0 << 0);
+ mod_phy_reg(pi, 0x938, (0x1 << 1), 1 << 1);
+ mod_phy_reg(pi, 0x939, (0x1 << 1), 1 << 1);
+ mod_phy_reg(pi, 0x938, (0x1 << 2), 1 << 2);
+ mod_phy_reg(pi, 0x939, (0x1 << 2), 1 << 2);
+ mod_phy_reg(pi, 0x938, (0x1 << 3), 1 << 3);
+ mod_phy_reg(pi, 0x939, (0x1 << 3), 1 << 3);
+ mod_phy_reg(pi, 0x938, (0x1 << 5), 1 << 5);
+ mod_phy_reg(pi, 0x939, (0x1 << 5), 0 << 5);
+
+ mod_phy_reg(pi, 0x43b, (0x1 << 0), 1 << 0);
+ mod_phy_reg(pi, 0x43c, (0x1 << 0), 0 << 0);
+
+ wlc_lcnphy_start_tx_tone(pi, 2000, 120, 0);
+ write_phy_reg(pi, 0x6da, 0xffff);
+ or_phy_reg(pi, 0x6db, 0x3);
+ wlc_lcnphy_set_trsw_override(pi, tx_switch, rx_switch);
+ wlc_lcnphy_rx_gain_override_enable(pi, true);
+
+ tia_gain = 8;
+ rx_pwr_threshold = 950;
+ while (tia_gain > 0) {
tia_gain -= 1;
+ wlc_lcnphy_set_rx_gain_by_distribution(pi,
+ 0, 0, 2, 2,
+ (u16)
+ tia_gain, 1, 0);
+ udelay(500);
+
+ received_power =
+ wlc_lcnphy_measure_digital_power(pi, 2000);
+ if (received_power < rx_pwr_threshold)
+ break;
}
- lna2_gain -= 1;
- }
+ result = wlc_lcnphy_calc_rx_iq_comp(pi, 0xffff);
- if (set_gain)
- result = wlc_lcnphy_calc_rx_iq_comp(pi, 1024);
- else
- result = false;
+ wlc_lcnphy_stop_tx_tone(pi);
- wlc_lcnphy_stop_tx_tone(pi);
+ write_phy_reg(pi, 0x631, Core1TxControl_old);
- write_phy_reg(pi, 0x631, Core1TxControl_old);
-
- write_phy_reg(pi, 0x44c, RFOverrideVal0_old);
- write_phy_reg(pi, 0x44d, RFOverrideVal0_old);
- write_phy_reg(pi, 0x4b0, rfoverride2_old);
- write_phy_reg(pi, 0x4b1, rfoverride2val_old);
- write_phy_reg(pi, 0x4f9, rfoverride3_old);
- write_phy_reg(pi, 0x4fa, rfoverride3val_old);
- write_phy_reg(pi, 0x938, rfoverride4_old);
- write_phy_reg(pi, 0x939, rfoverride4val_old);
- write_phy_reg(pi, 0x43b, afectrlovr_old);
- write_phy_reg(pi, 0x43c, afectrlovrval_old);
- write_phy_reg(pi, 0x6da, old_sslpnCalibClkEnCtrl);
- write_phy_reg(pi, 0x6db, old_sslpnRxFeClkEnCtrl);
+ write_phy_reg(pi, 0x44c, RFOverride0_old);
+ write_phy_reg(pi, 0x44d, RFOverrideVal0_old);
+ write_phy_reg(pi, 0x4b0, rfoverride2_old);
+ write_phy_reg(pi, 0x4b1, rfoverride2val_old);
+ write_phy_reg(pi, 0x4f9, rfoverride3_old);
+ write_phy_reg(pi, 0x4fa, rfoverride3val_old);
+ write_phy_reg(pi, 0x938, rfoverride4_old);
+ write_phy_reg(pi, 0x939, rfoverride4val_old);
+ write_phy_reg(pi, 0x43b, afectrlovr_old);
+ write_phy_reg(pi, 0x43c, afectrlovrval_old);
+ write_phy_reg(pi, 0x6da, old_sslpnCalibClkEnCtrl);
+ write_phy_reg(pi, 0x6db, old_sslpnRxFeClkEnCtrl);
- wlc_lcnphy_clear_trsw_override(pi);
+ wlc_lcnphy_clear_trsw_override(pi);
- mod_phy_reg(pi, 0x44c, (0x1 << 2), 0 << 2);
+ mod_phy_reg(pi, 0x44c, (0x1 << 2), 0 << 2);
- for (i = 0; i < 11; i++)
- write_radio_reg(pi, rxiq_cal_rf_reg[i],
- values_to_save[i]);
+ for (i = 0; i < 11; i++)
+ write_radio_reg(pi, rxiq_cal_rf_reg[i],
+ values_to_save[i]);
- if (tx_gain_override_old)
- wlc_lcnphy_set_tx_pwr_by_index(pi, tx_gain_index_old);
- else
- wlc_lcnphy_disable_tx_gain_override(pi);
+ if (tx_gain_override_old)
+ wlc_lcnphy_set_tx_pwr_by_index(pi, tx_gain_index_old);
+ else
+ wlc_lcnphy_disable_tx_gain_override(pi);
- wlc_lcnphy_set_tx_pwr_ctrl(pi, tx_pwr_ctrl);
- wlc_lcnphy_rx_gain_override_enable(pi, false);
+ wlc_lcnphy_set_tx_pwr_ctrl(pi, tx_pwr_ctrl);
+ wlc_lcnphy_rx_gain_override_enable(pi, false);
+ }
cal_done:
kfree(ptr);
write_radio_reg(pi, RADIO_2064_REG038, 3);
write_radio_reg(pi, RADIO_2064_REG091, 7);
}
-
- if (!(pi->sh->boardflags & BFL_FEM)) {
- u8 reg038[14] = {0xd, 0xe, 0xd, 0xd, 0xd, 0xc,
- 0xa, 0xb, 0xb, 0x3, 0x3, 0x2, 0x0, 0x0};
-
- write_radio_reg(pi, RADIO_2064_REG02A, 0xf);
- write_radio_reg(pi, RADIO_2064_REG091, 0x3);
- write_radio_reg(pi, RADIO_2064_REG038, 0x3);
-
- write_radio_reg(pi, RADIO_2064_REG038, reg038[channel - 1]);
- }
}
static int
} else {
mod_radio_reg(pi, RADIO_2064_REG03A, 1, 0x1);
mod_radio_reg(pi, RADIO_2064_REG11A, 0x8, 0x8);
- mod_radio_reg(pi, RADIO_2064_REG028, 0x1, 0x0);
- mod_radio_reg(pi, RADIO_2064_REG11A, 0x4, 1<<2);
- mod_radio_reg(pi, RADIO_2064_REG036, 0x10, 0x0);
- mod_radio_reg(pi, RADIO_2064_REG11A, 0x10, 1<<4);
- mod_radio_reg(pi, RADIO_2064_REG036, 0x3, 0x0);
- mod_radio_reg(pi, RADIO_2064_REG035, 0xff, 0x77);
- mod_radio_reg(pi, RADIO_2064_REG028, 0x1e, 0xe<<1);
- mod_radio_reg(pi, RADIO_2064_REG112, 0x80, 1<<7);
- mod_radio_reg(pi, RADIO_2064_REG005, 0x7, 1<<1);
- mod_radio_reg(pi, RADIO_2064_REG029, 0xf0, 0<<4);
}
} else {
mod_phy_reg(pi, 0x4d9, (0x1 << 2), (0x1) << 2);
(auxpga_vmid_temp << 0) | (auxpga_gain_temp << 12));
mod_radio_reg(pi, RADIO_2064_REG082, (1 << 5), (1 << 5));
- mod_radio_reg(pi, RADIO_2064_REG07C, (1 << 0), (1 << 0));
}
static void wlc_lcnphy_tssi_setup(struct brcms_phy *pi)
{
struct phytbl_info tab;
u32 rfseq, ind;
- u8 tssi_sel;
tab.tbl_id = LCNPHY_TBL_ID_TXPWRCTL;
tab.tbl_width = 32;
mod_phy_reg(pi, 0x503, (0x1 << 4), (1) << 4);
- if (pi->sh->boardflags & BFL_FEM) {
- tssi_sel = 0x1;
- wlc_lcnphy_set_tssi_mux(pi, LCNPHY_TSSI_EXT);
- } else {
- tssi_sel = 0xe;
- wlc_lcnphy_set_tssi_mux(pi, LCNPHY_TSSI_POST_PA);
- }
+ wlc_lcnphy_set_tssi_mux(pi, LCNPHY_TSSI_EXT);
mod_phy_reg(pi, 0x4a4, (0x1 << 14), (0) << 14);
mod_phy_reg(pi, 0x4a4, (0x1 << 15), (1) << 15);
mod_phy_reg(pi, 0x49a, (0x1ff << 0), (0xff) << 0);
if (LCNREV_IS(pi->pubpi.phy_rev, 2)) {
- mod_radio_reg(pi, RADIO_2064_REG028, 0xf, tssi_sel);
+ mod_radio_reg(pi, RADIO_2064_REG028, 0xf, 0xe);
mod_radio_reg(pi, RADIO_2064_REG086, 0x4, 0x4);
} else {
- mod_radio_reg(pi, RADIO_2064_REG028, 0x1e, tssi_sel << 1);
mod_radio_reg(pi, RADIO_2064_REG03A, 0x1, 1);
mod_radio_reg(pi, RADIO_2064_REG11A, 0x8, 1 << 3);
}
mod_phy_reg(pi, 0x4d7, (0xf << 8), (0) << 8);
- mod_radio_reg(pi, RADIO_2064_REG035, 0xff, 0x0);
- mod_radio_reg(pi, RADIO_2064_REG036, 0x3, 0x0);
- mod_radio_reg(pi, RADIO_2064_REG11A, 0x8, 0x8);
-
wlc_lcnphy_pwrctrl_rssiparams(pi);
}
read_radio_reg(pi, RADIO_2064_REG007) & 1;
u16 SAVE_jtag_auxpga = read_radio_reg(pi, RADIO_2064_REG0FF) & 0x10;
u16 SAVE_iqadc_aux_en = read_radio_reg(pi, RADIO_2064_REG11F) & 4;
- u8 SAVE_bbmult = wlc_lcnphy_get_bbmult(pi);
-
idleTssi = read_phy_reg(pi, 0x4ab);
suspend = (0 == (bcma_read32(pi->d11core, D11REGOFFS(maccontrol)) &
MCTL_EN_MAC));
mod_radio_reg(pi, RADIO_2064_REG0FF, 0x10, 1 << 4);
mod_radio_reg(pi, RADIO_2064_REG11F, 0x4, 1 << 2);
wlc_lcnphy_tssi_setup(pi);
-
- mod_phy_reg(pi, 0x4d7, (0x1 << 0), (1 << 0));
- mod_phy_reg(pi, 0x4d7, (0x1 << 6), (1 << 6));
-
- wlc_lcnphy_set_bbmult(pi, 0x0);
-
wlc_phy_do_dummy_tx(pi, true, OFF);
idleTssi = ((read_phy_reg(pi, 0x4ab) & (0x1ff << 0))
>> 0);
mod_phy_reg(pi, 0x44c, (0x1 << 12), (0) << 12);
- wlc_lcnphy_set_bbmult(pi, SAVE_bbmult);
wlc_lcnphy_set_tx_gain_override(pi, tx_gain_override_old);
wlc_lcnphy_set_tx_gain(pi, &old_gains);
wlc_lcnphy_set_tx_pwr_ctrl(pi, SAVE_txpwrctrl);
wlc_lcnphy_write_table(pi, &tab);
tab.tbl_offset++;
}
- mod_phy_reg(pi, 0x4d0, (0x1 << 0), (0) << 0);
- mod_phy_reg(pi, 0x4d3, (0xff << 0), (0) << 0);
- mod_phy_reg(pi, 0x4d3, (0xff << 8), (0) << 8);
- mod_phy_reg(pi, 0x4d0, (0x1 << 4), (0) << 4);
- mod_phy_reg(pi, 0x4d0, (0x1 << 2), (0) << 2);
mod_phy_reg(pi, 0x410, (0x1 << 7), (0) << 7);
target_gains.pad_gain = 21;
target_gains.dac_gain = 0;
wlc_lcnphy_set_tx_gain(pi, &target_gains);
+ wlc_lcnphy_set_tx_pwr_by_index(pi, 16);
if (LCNREV_IS(pi->pubpi.phy_rev, 1) || pi_lcn->lcnphy_hw_iqcal_en) {
lcnphy_recal ? LCNPHY_CAL_RECAL :
LCNPHY_CAL_FULL), false);
} else {
- wlc_lcnphy_set_tx_pwr_by_index(pi, 16);
wlc_lcnphy_tx_iqlo_soft_cal_full(pi);
}
if (CHSPEC_IS5G(pi->radio_chanspec))
pa_gain = 0x70;
else
- pa_gain = 0x60;
+ pa_gain = 0x70;
if (pi->sh->boardflags & BFL_FEM)
pa_gain = 0x10;
-
tab.tbl_id = LCNPHY_TBL_ID_TXPWRCTL;
tab.tbl_width = 32;
tab.tbl_len = 1;
tab.tbl_ptr = &val;
for (j = 0; j < 128; j++) {
- if (pi->sh->boardflags & BFL_FEM)
- gm_gain = gain_table[j].gm;
- else
- gm_gain = 15;
-
+ gm_gain = gain_table[j].gm;
val = (((u32) pa_gain << 24) |
(gain_table[j].pad << 16) |
(gain_table[j].pga << 8) | gm_gain);
write_phy_reg(pi, 0x4ea, 0x4688);
- if (pi->sh->boardflags & BFL_FEM)
- mod_phy_reg(pi, 0x4eb, (0x7 << 0), 2 << 0);
- else
- mod_phy_reg(pi, 0x4eb, (0x7 << 0), 3 << 0);
+ mod_phy_reg(pi, 0x4eb, (0x7 << 0), 2 << 0);
mod_phy_reg(pi, 0x4eb, (0x7 << 6), 0 << 6);
wlc_lcnphy_rcal(pi);
wlc_lcnphy_rc_cal(pi);
-
- if (!(pi->sh->boardflags & BFL_FEM)) {
- write_radio_reg(pi, RADIO_2064_REG032, 0x6f);
- write_radio_reg(pi, RADIO_2064_REG033, 0x19);
- write_radio_reg(pi, RADIO_2064_REG039, 0xe);
- }
-
}
static void wlc_lcnphy_radio_init(struct brcms_phy *pi)
wlc_lcnphy_write_table(pi, &tab);
}
- if (!(pi->sh->boardflags & BFL_FEM)) {
- tab.tbl_id = LCNPHY_TBL_ID_RFSEQ;
- tab.tbl_width = 16;
- tab.tbl_ptr = &val;
- tab.tbl_len = 1;
+ tab.tbl_id = LCNPHY_TBL_ID_RFSEQ;
+ tab.tbl_width = 16;
+ tab.tbl_ptr = &val;
+ tab.tbl_len = 1;
- val = 150;
- tab.tbl_offset = 0;
- wlc_lcnphy_write_table(pi, &tab);
+ val = 114;
+ tab.tbl_offset = 0;
+ wlc_lcnphy_write_table(pi, &tab);
- val = 220;
- tab.tbl_offset = 1;
- wlc_lcnphy_write_table(pi, &tab);
- }
+ val = 130;
+ tab.tbl_offset = 1;
+ wlc_lcnphy_write_table(pi, &tab);
+
+ val = 6;
+ tab.tbl_offset = 8;
+ wlc_lcnphy_write_table(pi, &tab);
if (CHSPEC_IS2G(pi->radio_chanspec)) {
if (pi->sh->boardflags & BFL_FEM)
wlc_lcnphy_load_tx_iir_filter(pi, true, 3);
mod_phy_reg(pi, 0x4eb, (0x7 << 3), (1) << 3);
- wlc_lcnphy_tssi_setup(pi);
}
void wlc_phy_detach_lcnphy(struct brcms_phy *pi)
if (!wlc_phy_txpwr_srom_read_lcnphy(pi))
return false;
- if (LCNREV_IS(pi->pubpi.phy_rev, 1)) {
+ if ((pi->sh->boardflags & BFL_FEM) &&
+ (LCNREV_IS(pi->pubpi.phy_rev, 1))) {
if (pi_lcn->lcnphy_tempsense_option == 3) {
pi->hwpwrctrl = true;
pi->hwpwrctrl_capable = true;
};
static const u16 dot11lcn_sw_ctrl_tbl_4313_rev0[] = {
- 0x0009,
0x000a,
- 0x0005,
- 0x0006,
0x0009,
- 0x000a,
- 0x0005,
0x0006,
- 0x0009,
- 0x000a,
0x0005,
- 0x0006,
- 0x0009,
0x000a,
- 0x0005,
- 0x0006,
0x0009,
- 0x000a,
- 0x0005,
0x0006,
- 0x0009,
- 0x000a,
0x0005,
- 0x0006,
- 0x0009,
0x000a,
- 0x0005,
- 0x0006,
0x0009,
- 0x000a,
- 0x0005,
0x0006,
- 0x0009,
- 0x000a,
0x0005,
- 0x0006,
- 0x0009,
0x000a,
- 0x0005,
- 0x0006,
0x0009,
- 0x000a,
- 0x0005,
0x0006,
- 0x0009,
- 0x000a,
0x0005,
- 0x0006,
+ 0x000a,
0x0009,
+ 0x0006,
+ 0x0005,
0x000a,
+ 0x0009,
+ 0x0006,
0x0005,
+ 0x000a,
+ 0x0009,
0x0006,
+ 0x0005,
+ 0x000a,
0x0009,
+ 0x0006,
+ 0x0005,
0x000a,
+ 0x0009,
+ 0x0006,
0x0005,
+ 0x000a,
+ 0x0009,
0x0006,
+ 0x0005,
+ 0x000a,
0x0009,
+ 0x0006,
+ 0x0005,
0x000a,
+ 0x0009,
+ 0x0006,
0x0005,
+ 0x000a,
+ 0x0009,
0x0006,
+ 0x0005,
+ 0x000a,
0x0009,
+ 0x0006,
+ 0x0005,
0x000a,
+ 0x0009,
+ 0x0006,
0x0005,
+ 0x000a,
+ 0x0009,
0x0006,
+ 0x0005,
};
static const u16 dot11lcn_sw_ctrl_tbl_rev0[] = {
dma_addr_t txcmd_phys;
int txq_id = skb_get_queue_mapping(skb);
u16 len, idx, hdr_len;
+ u16 firstlen, secondlen;
u8 id;
u8 unicast;
u8 sta_id;
len =
sizeof(struct il3945_tx_cmd) + sizeof(struct il_cmd_header) +
hdr_len;
- len = (len + 3) & ~3;
+ firstlen = (len + 3) & ~3;
/* Physical address of this Tx command's header (not MAC header!),
* within command buffer array. */
txcmd_phys =
- pci_map_single(il->pci_dev, &out_cmd->hdr, len, PCI_DMA_TODEVICE);
+ pci_map_single(il->pci_dev, &out_cmd->hdr, firstlen,
+ PCI_DMA_TODEVICE);
if (unlikely(pci_dma_mapping_error(il->pci_dev, txcmd_phys)))
goto drop_unlock;
/* Set up TFD's 2nd entry to point directly to remainder of skb,
* if any (802.11 null frames have no payload). */
- len = skb->len - hdr_len;
- if (len) {
+ secondlen = skb->len - hdr_len;
+ if (secondlen > 0) {
phys_addr =
- pci_map_single(il->pci_dev, skb->data + hdr_len, len,
+ pci_map_single(il->pci_dev, skb->data + hdr_len, secondlen,
PCI_DMA_TODEVICE);
if (unlikely(pci_dma_mapping_error(il->pci_dev, phys_addr)))
goto drop_unlock;
/* Add buffer containing Tx command and MAC(!) header to TFD's
* first entry */
- il->ops->txq_attach_buf_to_tfd(il, txq, txcmd_phys, len, 1, 0);
+ il->ops->txq_attach_buf_to_tfd(il, txq, txcmd_phys, firstlen, 1, 0);
dma_unmap_addr_set(out_meta, mapping, txcmd_phys);
- dma_unmap_len_set(out_meta, len, len);
- if (len)
- il->ops->txq_attach_buf_to_tfd(il, txq, phys_addr, len, 0,
- U32_PAD(len));
+ dma_unmap_len_set(out_meta, len, firstlen);
+ if (secondlen > 0)
+ il->ops->txq_attach_buf_to_tfd(il, txq, phys_addr, secondlen, 0,
+ U32_PAD(secondlen));
if (!ieee80211_has_morefrags(hdr->frame_control)) {
txq->need_update = 1;
int rate_idx;
int i;
u32 rate;
- u8 use_green = il4965_rs_use_green(il, sta);
+ u8 use_green;
u8 active_tbl = 0;
u8 valid_tx_ant;
struct il_station_priv *sta_priv;
if (!sta || !lq_sta)
return;
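+ /* il4965_rs_use_green() dereferences sta, so call it only after the NULL check above */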
+ use_green = il4965_rs_use_green(il, sta);
sta_priv = (void *)sta->drv_priv;
i = lq_sta->last_txrate_idx;
return -EIO;
}
+ /*
+ * This can happen upon FW ASSERT: we clear the STATUS_FW_ERROR flag
+ * in iwl_down but cancel the workers only later.
+ */
+ if (!priv->ucode_loaded) {
+ IWL_ERR(priv, "Fw not loaded - dropping CMD: %x\n", cmd->id);
+ return -EIO;
+ }
+
/*
* Synchronous commands from this op-mode must hold
* the mutex, this ensures we don't try to send two
mutex_lock(&priv->mutex);
+ if (changes & BSS_CHANGED_IDLE && bss_conf->idle) {
+ /*
+ * If we go idle, then clearly no "passive-no-rx"
+ * workaround is needed any more, this is a reset.
+ */
+ iwlagn_lift_passive_no_rx(priv);
+ }
+
if (unlikely(!iwl_is_ready(priv))) {
IWL_DEBUG_MAC80211(priv, "leave - not ready\n");
mutex_unlock(&priv->mutex);
priv->timestamp = bss_conf->sync_tsf;
ctx->staging.filter_flags |= RXON_FILTER_ASSOC_MSK;
} else {
- /*
- * If we disassociate while there are pending
- * frames, just wake up the queues and let the
- * frames "escape" ... This shouldn't really
- * be happening to start with, but we should
- * not get stuck in this case either since it
- * can happen if userspace gets confused.
- */
- iwlagn_lift_passive_no_rx(priv);
-
ctx->staging.filter_flags &= ~RXON_FILTER_ASSOC_MSK;
if (ctx->ctxid == IWL_RXON_CTX_BSS)
memset(&info->status, 0, sizeof(info->status));
if (status == TX_STATUS_FAIL_PASSIVE_NO_RX &&
- iwl_is_associated_ctx(ctx) && ctx->vif &&
+ ctx->vif &&
ctx->vif->type == NL80211_IFTYPE_STATION) {
/* block and stop all queues */
priv->passive_no_rx = true;
return -EIO;
}
+ priv->ucode_loaded = true;
+
if (ucode_type != IWL_UCODE_WOWLAN) {
/* delay a bit to give rfkill time to run */
msleep(5);
return ret;
}
- priv->ucode_loaded = true;
-
return 0;
}
/* If platform's RF_KILL switch is NOT set to KILL */
hw_rfkill = iwl_is_rfkill_set(trans);
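+ /* keep the STATUS_RFKILL bit in sync with the hardware switch before notifying the op mode */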
+ if (hw_rfkill)
+ set_bit(STATUS_RFKILL, &trans_pcie->status);
+ else
+ clear_bit(STATUS_RFKILL, &trans_pcie->status);
iwl_op_mode_hw_rf_kill(trans->op_mode, hw_rfkill);
if (hw_rfkill && !run_in_rfkill)
return -ERFKILL;
static int iwl_trans_pcie_start_hw(struct iwl_trans *trans)
{
+ struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
bool hw_rfkill;
int err;
iwl_enable_rfkill_int(trans);
hw_rfkill = iwl_is_rfkill_set(trans);
+ if (hw_rfkill)
+ set_bit(STATUS_RFKILL, &trans_pcie->status);
+ else
+ clear_bit(STATUS_RFKILL, &trans_pcie->status);
iwl_op_mode_hw_rf_kill(trans->op_mode, hw_rfkill);
return 0;
* op_mode.
*/
hw_rfkill = iwl_is_rfkill_set(trans);
+ if (hw_rfkill)
+ set_bit(STATUS_RFKILL, &trans_pcie->status);
+ else
+ clear_bit(STATUS_RFKILL, &trans_pcie->status);
iwl_op_mode_hw_rf_kill(trans->op_mode, hw_rfkill);
}
}
for (i = 0; i < IWL_MAX_CMD_TBS_PER_TFD; i++) {
int copy = 0;
- if (!cmd->len)
+ if (!cmd->len[i])
continue;
/* need at least IWL_HCMD_SCRATCHBUF_SIZE copied */
}
}
- for (i = 0; i < request->n_channels; i++) {
+ for (i = 0; i < min_t(u32, request->n_channels,
+ MWIFIEX_USER_SCAN_CHAN_MAX); i++) {
chan = request->channels[i];
priv->user_scan_cfg->chan_list[i].chan_number = chan->hw_value;
priv->user_scan_cfg->chan_list[i].radio_type = chan->band;
return -1;
}
+ cmd_code = le16_to_cpu(host_cmd->command);
+ cmd_size = le16_to_cpu(host_cmd->size);
+
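+ /* while the firmware is in reset, only FUNC_SHUTDOWN and FUNC_INIT may be sent down */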
+ if (adapter->hw_status == MWIFIEX_HW_STATUS_RESET &&
+ cmd_code != HostCmd_CMD_FUNC_SHUTDOWN &&
+ cmd_code != HostCmd_CMD_FUNC_INIT) {
+ dev_err(adapter->dev,
+ "DNLD_CMD: FW in reset state, ignore cmd %#x\n",
+ cmd_code);
+ mwifiex_complete_cmd(adapter, cmd_node);
+ mwifiex_insert_cmd_to_free_q(adapter, cmd_node);
+ return -1;
+ }
+
/* Set command sequence number */
adapter->seq_num++;
host_cmd->seq_num = cpu_to_le16(HostCmd_SET_SEQ_NO_BSS_INFO
adapter->curr_cmd = cmd_node;
spin_unlock_irqrestore(&adapter->mwifiex_cmd_lock, flags);
- cmd_code = le16_to_cpu(host_cmd->command);
- cmd_size = le16_to_cpu(host_cmd->size);
-
/* Adjust skb length */
if (cmd_node->cmd_skb->len > cmd_size)
/*
ret = mwifiex_send_cmd_async(priv, cmd_no, cmd_action, cmd_oid,
data_buf);
- if (!ret)
- ret = mwifiex_wait_queue_complete(adapter);
return ret;
}
if (cmd_no == HostCmd_CMD_802_11_SCAN) {
mwifiex_queue_scan_cmd(priv, cmd_node);
} else {
- adapter->cmd_queued = cmd_node;
mwifiex_insert_cmd_to_pending_q(adapter, cmd_node, true);
queue_work(adapter->workqueue, &adapter->main_work);
+ if (cmd_node->wait_q_enabled)
+ ret = mwifiex_wait_queue_complete(adapter, cmd_node);
}
return ret;
return ret;
}
+ /* cancel current command */
+ if (adapter->curr_cmd) {
+ dev_warn(adapter->dev, "curr_cmd is still in processing\n");
+ del_timer(&adapter->cmd_timer);
+ mwifiex_insert_cmd_to_free_q(adapter, adapter->curr_cmd);
+ adapter->curr_cmd = NULL;
+ }
+
/* shut down mwifiex */
dev_dbg(adapter->dev, "info: shutdown mwifiex...\n");
u16 cmd_wait_q_required;
struct mwifiex_wait_queue cmd_wait_q;
u8 scan_wait_q_woken;
- struct cmd_ctrl_node *cmd_queued;
spinlock_t queue_lock; /* lock for tx queues */
struct completion fw_load;
u8 country_code[IEEE80211_COUNTRY_STRING_LEN];
struct mwifiex_multicast_list *mcast_list);
int mwifiex_copy_mcast_addr(struct mwifiex_multicast_list *mlist,
struct net_device *dev);
-int mwifiex_wait_queue_complete(struct mwifiex_adapter *adapter);
+int mwifiex_wait_queue_complete(struct mwifiex_adapter *adapter,
+ struct cmd_ctrl_node *cmd_queued);
int mwifiex_bss_start(struct mwifiex_private *priv, struct cfg80211_bss *bss,
struct cfg80211_ssid *req_ssid);
int mwifiex_cancel_hs(struct mwifiex_private *priv, int cmd_type);
}
memcpy(adapter->upld_buf, skb->data,
min_t(u32, MWIFIEX_SIZE_OF_CMD_BUFFER, skb->len));
+ skb_push(skb, INTF_HEADER_LEN);
if (mwifiex_map_pci_memory(adapter, skb, MWIFIEX_UPLD_SIZE,
PCI_DMA_FROMDEVICE))
return -1;
list_del(&cmd_node->list);
spin_unlock_irqrestore(&adapter->scan_pending_q_lock,
flags);
- adapter->cmd_queued = cmd_node;
mwifiex_insert_cmd_to_pending_q(adapter, cmd_node,
true);
queue_work(adapter->workqueue, &adapter->main_work);
+
+ /* Perform internal scan synchronously */
+ if (!priv->scan_request) {
+ dev_dbg(adapter->dev, "wait internal scan\n");
+ mwifiex_wait_queue_complete(adapter, cmd_node);
+ }
} else {
spin_unlock_irqrestore(&adapter->scan_pending_q_lock,
flags);
/* Need to indicate IOCTL complete */
if (adapter->curr_cmd->wait_q_enabled) {
adapter->cmd_wait_q.status = 0;
- mwifiex_complete_cmd(adapter, adapter->curr_cmd);
+ if (!priv->scan_request) {
+ dev_dbg(adapter->dev,
+ "complete internal scan\n");
+ mwifiex_complete_cmd(adapter,
+ adapter->curr_cmd);
+ }
}
if (priv->report_scan_result)
priv->report_scan_result = false;
/* Normal scan */
ret = mwifiex_scan_networks(priv, NULL);
- if (!ret)
- ret = mwifiex_wait_queue_complete(priv->adapter);
-
up(&priv->async_sem);
return ret;
* This function waits on a cmd wait queue. It also cancels the pending
* request after waking up, in case of errors.
*/
-int mwifiex_wait_queue_complete(struct mwifiex_adapter *adapter)
+int mwifiex_wait_queue_complete(struct mwifiex_adapter *adapter,
+ struct cmd_ctrl_node *cmd_queued)
{
int status;
- struct cmd_ctrl_node *cmd_queued;
-
- if (!adapter->cmd_queued)
- return 0;
-
- cmd_queued = adapter->cmd_queued;
- adapter->cmd_queued = NULL;
dev_dbg(adapter->dev, "cmd pending\n");
atomic_inc(&adapter->cmd_pending);
config RT2400PCI
tristate "Ralink rt2400 (PCI/PCMCIA) support"
depends on PCI
+ select RT2X00_LIB_MMIO
select RT2X00_LIB_PCI
select EEPROM_93CX6
---help---
config RT2500PCI
tristate "Ralink rt2500 (PCI/PCMCIA) support"
depends on PCI
+ select RT2X00_LIB_MMIO
select RT2X00_LIB_PCI
select EEPROM_93CX6
---help---
tristate "Ralink rt2501/rt61 (PCI/PCMCIA) support"
depends on PCI
select RT2X00_LIB_PCI
+ select RT2X00_LIB_MMIO
select RT2X00_LIB_FIRMWARE
select RT2X00_LIB_CRYPTO
select CRC_ITU_T
tristate "Ralink rt27xx/rt28xx/rt30xx (PCI/PCIe/PCMCIA) support"
depends on PCI || SOC_RT288X || SOC_RT305X
select RT2800_LIB
+ select RT2X00_LIB_MMIO
select RT2X00_LIB_PCI if PCI
select RT2X00_LIB_SOC if SOC_RT288X || SOC_RT305X
select RT2X00_LIB_FIRMWARE
config RT2800_LIB
tristate
+config RT2X00_LIB_MMIO
+ tristate
+
config RT2X00_LIB_PCI
tristate
select RT2X00_LIB
rt2x00lib-$(CONFIG_RT2X00_LIB_LEDS) += rt2x00leds.o
obj-$(CONFIG_RT2X00_LIB) += rt2x00lib.o
+obj-$(CONFIG_RT2X00_LIB_MMIO) += rt2x00mmio.o
obj-$(CONFIG_RT2X00_LIB_PCI) += rt2x00pci.o
obj-$(CONFIG_RT2X00_LIB_SOC) += rt2x00soc.o
obj-$(CONFIG_RT2X00_LIB_USB) += rt2x00usb.o
#include <linux/slab.h>
#include "rt2x00.h"
+#include "rt2x00mmio.h"
#include "rt2x00pci.h"
#include "rt2400pci.h"
#include <linux/slab.h>
#include "rt2x00.h"
+#include "rt2x00mmio.h"
#include "rt2x00pci.h"
#include "rt2500pci.h"
#include <linux/eeprom_93cx6.h>
#include "rt2x00.h"
+#include "rt2x00mmio.h"
#include "rt2x00pci.h"
#include "rt2x00soc.h"
#include "rt2800lib.h"
--- /dev/null
+/*
+ Copyright (C) 2004 - 2009 Ivo van Doorn <IvDoorn@gmail.com>
+ <http://rt2x00.serialmonkey.com>
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the
+ Free Software Foundation, Inc.,
+ 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ */
+
+/*
+ Module: rt2x00mmio
+ Abstract: rt2x00 generic mmio device routines.
+ */
+
+#include <linux/dma-mapping.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+
+#include "rt2x00.h"
+#include "rt2x00mmio.h"
+
+/*
+ * Register access.
+ */
+int rt2x00pci_regbusy_read(struct rt2x00_dev *rt2x00dev,
+ const unsigned int offset,
+ const struct rt2x00_field32 field,
+ u32 *reg)
+{
+ unsigned int i;
+
+ if (!test_bit(DEVICE_STATE_PRESENT, &rt2x00dev->flags))
+ return 0;
+
+ for (i = 0; i < REGISTER_BUSY_COUNT; i++) {
+ rt2x00pci_register_read(rt2x00dev, offset, reg);
+ if (!rt2x00_get_field32(*reg, field))
+ return 1;
+ udelay(REGISTER_BUSY_DELAY);
+ }
+
+ printk_once(KERN_ERR "%s() Indirect register access failed: "
+ "offset=0x%.08x, value=0x%.08x\n", __func__, offset, *reg);
+ *reg = ~0;
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(rt2x00pci_regbusy_read);
+
+bool rt2x00pci_rxdone(struct rt2x00_dev *rt2x00dev)
+{
+ struct data_queue *queue = rt2x00dev->rx;
+ struct queue_entry *entry;
+ struct queue_entry_priv_pci *entry_priv;
+ struct skb_frame_desc *skbdesc;
+ int max_rx = 16;
+
+ while (--max_rx) {
+ entry = rt2x00queue_get_entry(queue, Q_INDEX);
+ entry_priv = entry->priv_data;
+
+ if (rt2x00dev->ops->lib->get_entry_state(entry))
+ break;
+
+ /*
+ * Fill in desc fields of the skb descriptor
+ */
+ skbdesc = get_skb_frame_desc(entry->skb);
+ skbdesc->desc = entry_priv->desc;
+ skbdesc->desc_len = entry->queue->desc_size;
+
+ /*
+ * DMA is already done, notify rt2x00lib that
+ * it finished successfully.
+ */
+ rt2x00lib_dmastart(entry);
+ rt2x00lib_dmadone(entry);
+
+ /*
+ * Send the frame to rt2x00lib for further processing.
+ */
+ rt2x00lib_rxdone(entry, GFP_ATOMIC);
+ }
+
+ return !max_rx;
+}
+EXPORT_SYMBOL_GPL(rt2x00pci_rxdone);
+
+void rt2x00pci_flush_queue(struct data_queue *queue, bool drop)
+{
+ unsigned int i;
+
+ for (i = 0; !rt2x00queue_empty(queue) && i < 10; i++)
+ msleep(10);
+}
+EXPORT_SYMBOL_GPL(rt2x00pci_flush_queue);
+
+/*
+ * Device initialization handlers.
+ */
+static int rt2x00pci_alloc_queue_dma(struct rt2x00_dev *rt2x00dev,
+ struct data_queue *queue)
+{
+ struct queue_entry_priv_pci *entry_priv;
+ void *addr;
+ dma_addr_t dma;
+ unsigned int i;
+
+ /*
+ * Allocate DMA memory for descriptor and buffer.
+ */
+ addr = dma_alloc_coherent(rt2x00dev->dev,
+ queue->limit * queue->desc_size,
+ &dma, GFP_KERNEL);
+ if (!addr)
+ return -ENOMEM;
+
+ memset(addr, 0, queue->limit * queue->desc_size);
+
+ /*
+ * Initialize all queue entries to contain valid addresses.
+ */
+ for (i = 0; i < queue->limit; i++) {
+ entry_priv = queue->entries[i].priv_data;
+ entry_priv->desc = addr + i * queue->desc_size;
+ entry_priv->desc_dma = dma + i * queue->desc_size;
+ }
+
+ return 0;
+}
+
+static void rt2x00pci_free_queue_dma(struct rt2x00_dev *rt2x00dev,
+ struct data_queue *queue)
+{
+ struct queue_entry_priv_pci *entry_priv =
+ queue->entries[0].priv_data;
+
+ if (entry_priv->desc)
+ dma_free_coherent(rt2x00dev->dev,
+ queue->limit * queue->desc_size,
+ entry_priv->desc, entry_priv->desc_dma);
+ entry_priv->desc = NULL;
+}
+
+int rt2x00pci_initialize(struct rt2x00_dev *rt2x00dev)
+{
+ struct data_queue *queue;
+ int status;
+
+ /*
+ * Allocate DMA
+ */
+ queue_for_each(rt2x00dev, queue) {
+ status = rt2x00pci_alloc_queue_dma(rt2x00dev, queue);
+ if (status)
+ goto exit;
+ }
+
+ /*
+ * Register interrupt handler.
+ */
+ status = request_irq(rt2x00dev->irq,
+ rt2x00dev->ops->lib->irq_handler,
+ IRQF_SHARED, rt2x00dev->name, rt2x00dev);
+ if (status) {
+ ERROR(rt2x00dev, "IRQ %d allocation failed (error %d).\n",
+ rt2x00dev->irq, status);
+ goto exit;
+ }
+
+ return 0;
+
+exit:
+ queue_for_each(rt2x00dev, queue)
+ rt2x00pci_free_queue_dma(rt2x00dev, queue);
+
+ return status;
+}
+EXPORT_SYMBOL_GPL(rt2x00pci_initialize);
+
+void rt2x00pci_uninitialize(struct rt2x00_dev *rt2x00dev)
+{
+ struct data_queue *queue;
+
+ /*
+ * Free irq line.
+ */
+ free_irq(rt2x00dev->irq, rt2x00dev);
+
+ /*
+ * Free DMA
+ */
+ queue_for_each(rt2x00dev, queue)
+ rt2x00pci_free_queue_dma(rt2x00dev, queue);
+}
+EXPORT_SYMBOL_GPL(rt2x00pci_uninitialize);
+
+/*
+ * rt2x00mmio module information.
+ */
+MODULE_AUTHOR(DRV_PROJECT);
+MODULE_VERSION(DRV_VERSION);
+MODULE_DESCRIPTION("rt2x00 mmio library");
+MODULE_LICENSE("GPL");
--- /dev/null
+/*
+ Copyright (C) 2004 - 2009 Ivo van Doorn <IvDoorn@gmail.com>
+ <http://rt2x00.serialmonkey.com>
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the
+ Free Software Foundation, Inc.,
+ 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ */
+
+/*
+ Module: rt2x00mmio
+ Abstract: Data structures for the rt2x00mmio module.
+ */
+
+#ifndef RT2X00MMIO_H
+#define RT2X00MMIO_H
+
+#include <linux/io.h>
+
+/*
+ * Register access.
+ */
+static inline void rt2x00pci_register_read(struct rt2x00_dev *rt2x00dev,
+ const unsigned int offset,
+ u32 *value)
+{
+ *value = readl(rt2x00dev->csr.base + offset);
+}
+
+static inline void rt2x00pci_register_multiread(struct rt2x00_dev *rt2x00dev,
+ const unsigned int offset,
+ void *value, const u32 length)
+{
+ memcpy_fromio(value, rt2x00dev->csr.base + offset, length);
+}
+
+static inline void rt2x00pci_register_write(struct rt2x00_dev *rt2x00dev,
+ const unsigned int offset,
+ u32 value)
+{
+ writel(value, rt2x00dev->csr.base + offset);
+}
+
+static inline void rt2x00pci_register_multiwrite(struct rt2x00_dev *rt2x00dev,
+ const unsigned int offset,
+ const void *value,
+ const u32 length)
+{
+ __iowrite32_copy(rt2x00dev->csr.base + offset, value, length >> 2);
+}
+
+/**
+ * rt2x00pci_regbusy_read - Read from register with busy check
+ * @rt2x00dev: Device pointer, see &struct rt2x00_dev.
+ * @offset: Register offset
+ * @field: Field to check if register is busy
+ * @reg: Pointer to where register contents should be stored
+ *
+ * This function will read the given register, and check if the
+ * register is busy. If it is, it will sleep for a couple of
+ * microseconds before reading the register again. If the busy bit
+ * has not cleared after a certain timeout, this function will
+ * return 0 (FALSE).
+ */
+int rt2x00pci_regbusy_read(struct rt2x00_dev *rt2x00dev,
+ const unsigned int offset,
+ const struct rt2x00_field32 field,
+ u32 *reg);
+
+/**
+ * struct queue_entry_priv_pci: Per entry PCI specific information
+ *
+ * @desc: Pointer to device descriptor
+ * @desc_dma: DMA pointer to &desc.
+ */
+struct queue_entry_priv_pci {
+ __le32 *desc;
+ dma_addr_t desc_dma;
+};
+
+/**
+ * rt2x00pci_rxdone - Handle RX done events
+ * @rt2x00dev: Device pointer, see &struct rt2x00_dev.
+ *
+ * Returns true if there are still rx frames pending and false if all
+ * pending rx frames were processed.
+ */
+bool rt2x00pci_rxdone(struct rt2x00_dev *rt2x00dev);
+
+/**
+ * rt2x00pci_flush_queue - Flush data queue
+ * @queue: Data queue to stop
+ * @drop: True to drop all pending frames.
+ *
+ * This will wait for a maximum of 100ms for the queue to
+ * become empty.
+ */
+void rt2x00pci_flush_queue(struct data_queue *queue, bool drop);
+
+/*
+ * Device initialization handlers.
+ */
+int rt2x00pci_initialize(struct rt2x00_dev *rt2x00dev);
+void rt2x00pci_uninitialize(struct rt2x00_dev *rt2x00dev);
+
+#endif /* RT2X00MMIO_H */
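
For reference, a chipset driver built on this new header includes rt2x00mmio.h alongside rt2x00.h (as the rt2400/rt2500/rt61 hunks in this series do) and uses the accessors directly. A minimal sketch; EXAMPLE_CSR is a hypothetical register offset, not one defined by rt2x00:

#define EXAMPLE_CSR	0x0004	/* hypothetical CSR offset */

static void example_toggle_bit(struct rt2x00_dev *rt2x00dev)
{
	u32 reg;

	/* Read-modify-write of a control register via the MMIO helpers. */
	rt2x00pci_register_read(rt2x00dev, EXAMPLE_CSR, &reg);
	reg |= 0x00000001;
	rt2x00pci_register_write(rt2x00dev, EXAMPLE_CSR, reg);
}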
#include "rt2x00.h"
#include "rt2x00pci.h"
-/*
- * Register access.
- */
-int rt2x00pci_regbusy_read(struct rt2x00_dev *rt2x00dev,
- const unsigned int offset,
- const struct rt2x00_field32 field,
- u32 *reg)
-{
- unsigned int i;
-
- if (!test_bit(DEVICE_STATE_PRESENT, &rt2x00dev->flags))
- return 0;
-
- for (i = 0; i < REGISTER_BUSY_COUNT; i++) {
- rt2x00pci_register_read(rt2x00dev, offset, reg);
- if (!rt2x00_get_field32(*reg, field))
- return 1;
- udelay(REGISTER_BUSY_DELAY);
- }
-
- ERROR(rt2x00dev, "Indirect register access failed: "
- "offset=0x%.08x, value=0x%.08x\n", offset, *reg);
- *reg = ~0;
-
- return 0;
-}
-EXPORT_SYMBOL_GPL(rt2x00pci_regbusy_read);
-
-bool rt2x00pci_rxdone(struct rt2x00_dev *rt2x00dev)
-{
- struct data_queue *queue = rt2x00dev->rx;
- struct queue_entry *entry;
- struct queue_entry_priv_pci *entry_priv;
- struct skb_frame_desc *skbdesc;
- int max_rx = 16;
-
- while (--max_rx) {
- entry = rt2x00queue_get_entry(queue, Q_INDEX);
- entry_priv = entry->priv_data;
-
- if (rt2x00dev->ops->lib->get_entry_state(entry))
- break;
-
- /*
- * Fill in desc fields of the skb descriptor
- */
- skbdesc = get_skb_frame_desc(entry->skb);
- skbdesc->desc = entry_priv->desc;
- skbdesc->desc_len = entry->queue->desc_size;
-
- /*
- * DMA is already done, notify rt2x00lib that
- * it finished successfully.
- */
- rt2x00lib_dmastart(entry);
- rt2x00lib_dmadone(entry);
-
- /*
- * Send the frame to rt2x00lib for further processing.
- */
- rt2x00lib_rxdone(entry, GFP_ATOMIC);
- }
-
- return !max_rx;
-}
-EXPORT_SYMBOL_GPL(rt2x00pci_rxdone);
-
-void rt2x00pci_flush_queue(struct data_queue *queue, bool drop)
-{
- unsigned int i;
-
- for (i = 0; !rt2x00queue_empty(queue) && i < 10; i++)
- msleep(10);
-}
-EXPORT_SYMBOL_GPL(rt2x00pci_flush_queue);
-
-/*
- * Device initialization handlers.
- */
-static int rt2x00pci_alloc_queue_dma(struct rt2x00_dev *rt2x00dev,
- struct data_queue *queue)
-{
- struct queue_entry_priv_pci *entry_priv;
- void *addr;
- dma_addr_t dma;
- unsigned int i;
-
- /*
- * Allocate DMA memory for descriptor and buffer.
- */
- addr = dma_alloc_coherent(rt2x00dev->dev,
- queue->limit * queue->desc_size,
- &dma, GFP_KERNEL);
- if (!addr)
- return -ENOMEM;
-
- memset(addr, 0, queue->limit * queue->desc_size);
-
- /*
- * Initialize all queue entries to contain valid addresses.
- */
- for (i = 0; i < queue->limit; i++) {
- entry_priv = queue->entries[i].priv_data;
- entry_priv->desc = addr + i * queue->desc_size;
- entry_priv->desc_dma = dma + i * queue->desc_size;
- }
-
- return 0;
-}
-
-static void rt2x00pci_free_queue_dma(struct rt2x00_dev *rt2x00dev,
- struct data_queue *queue)
-{
- struct queue_entry_priv_pci *entry_priv =
- queue->entries[0].priv_data;
-
- if (entry_priv->desc)
- dma_free_coherent(rt2x00dev->dev,
- queue->limit * queue->desc_size,
- entry_priv->desc, entry_priv->desc_dma);
- entry_priv->desc = NULL;
-}
-
-int rt2x00pci_initialize(struct rt2x00_dev *rt2x00dev)
-{
- struct data_queue *queue;
- int status;
-
- /*
- * Allocate DMA
- */
- queue_for_each(rt2x00dev, queue) {
- status = rt2x00pci_alloc_queue_dma(rt2x00dev, queue);
- if (status)
- goto exit;
- }
-
- /*
- * Register interrupt handler.
- */
- status = request_irq(rt2x00dev->irq,
- rt2x00dev->ops->lib->irq_handler,
- IRQF_SHARED, rt2x00dev->name, rt2x00dev);
- if (status) {
- ERROR(rt2x00dev, "IRQ %d allocation failed (error %d).\n",
- rt2x00dev->irq, status);
- goto exit;
- }
-
- return 0;
-
-exit:
- queue_for_each(rt2x00dev, queue)
- rt2x00pci_free_queue_dma(rt2x00dev, queue);
-
- return status;
-}
-EXPORT_SYMBOL_GPL(rt2x00pci_initialize);
-
-void rt2x00pci_uninitialize(struct rt2x00_dev *rt2x00dev)
-{
- struct data_queue *queue;
-
- /*
- * Free irq line.
- */
- free_irq(rt2x00dev->irq, rt2x00dev);
-
- /*
- * Free DMA
- */
- queue_for_each(rt2x00dev, queue)
- rt2x00pci_free_queue_dma(rt2x00dev, queue);
-}
-EXPORT_SYMBOL_GPL(rt2x00pci_uninitialize);
-
/*
* PCI driver handlers.
*/
*/
#define PCI_DEVICE_DATA(__ops) .driver_data = (kernel_ulong_t)(__ops)
-/*
- * Register access.
- */
-static inline void rt2x00pci_register_read(struct rt2x00_dev *rt2x00dev,
- const unsigned int offset,
- u32 *value)
-{
- *value = readl(rt2x00dev->csr.base + offset);
-}
-
-static inline void rt2x00pci_register_multiread(struct rt2x00_dev *rt2x00dev,
- const unsigned int offset,
- void *value, const u32 length)
-{
- memcpy_fromio(value, rt2x00dev->csr.base + offset, length);
-}
-
-static inline void rt2x00pci_register_write(struct rt2x00_dev *rt2x00dev,
- const unsigned int offset,
- u32 value)
-{
- writel(value, rt2x00dev->csr.base + offset);
-}
-
-static inline void rt2x00pci_register_multiwrite(struct rt2x00_dev *rt2x00dev,
- const unsigned int offset,
- const void *value,
- const u32 length)
-{
- __iowrite32_copy(rt2x00dev->csr.base + offset, value, length >> 2);
-}
-
-/**
- * rt2x00pci_regbusy_read - Read from register with busy check
- * @rt2x00dev: Device pointer, see &struct rt2x00_dev.
- * @offset: Register offset
- * @field: Field to check if register is busy
- * @reg: Pointer to where register contents should be stored
- *
- * This function will read the given register, and checks if the
- * register is busy. If it is, it will sleep for a couple of
- * microseconds before reading the register again. If the register
- * is not read after a certain timeout, this function will return
- * FALSE.
- */
-int rt2x00pci_regbusy_read(struct rt2x00_dev *rt2x00dev,
- const unsigned int offset,
- const struct rt2x00_field32 field,
- u32 *reg);
-
-/**
- * struct queue_entry_priv_pci: Per entry PCI specific information
- *
- * @desc: Pointer to device descriptor
- * @desc_dma: DMA pointer to &desc.
- * @data: Pointer to device's entry memory.
- * @data_dma: DMA pointer to &data.
- */
-struct queue_entry_priv_pci {
- __le32 *desc;
- dma_addr_t desc_dma;
-};
-
-/**
- * rt2x00pci_rxdone - Handle RX done events
- * @rt2x00dev: Device pointer, see &struct rt2x00_dev.
- *
- * Returns true if there are still rx frames pending and false if all
- * pending rx frames were processed.
- */
-bool rt2x00pci_rxdone(struct rt2x00_dev *rt2x00dev);
-
-/**
- * rt2x00pci_flush_queue - Flush data queue
- * @queue: Data queue to stop
- * @drop: True to drop all pending frames.
- *
- * This will wait for a maximum of 100ms, waiting for the queues
- * to become empty.
- */
-void rt2x00pci_flush_queue(struct data_queue *queue, bool drop);
-
-/*
- * Device initialization handlers.
- */
-int rt2x00pci_initialize(struct rt2x00_dev *rt2x00dev);
-void rt2x00pci_uninitialize(struct rt2x00_dev *rt2x00dev);
-
/*
* PCI driver handlers.
*/
#include <linux/eeprom_93cx6.h>
#include "rt2x00.h"
+#include "rt2x00mmio.h"
#include "rt2x00pci.h"
#include "rt61pci.h"
if (unlikely(!_urb)) {
RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG,
"Can't allocate urb. Drop skb!\n");
+ kfree_skb(skb);
return;
}
_rtl_submit_tx_urb(hw, _urb);
#include <linux/slab.h>
#include <linux/interrupt.h>
#include <linux/gpio.h>
-#include <linux/mei_bus.h>
+#include <linux/mei_cl_bus.h>
#include <linux/nfc.h>
#include <net/nfc/hci.h>
#define MICROREAD_DRIVER_NAME "microread"
-#define MICROREAD_UUID UUID_LE(0x0bb17a78, 0x2a8e, 0x4c50, 0x94, \
- 0xd4, 0x50, 0x26, 0x67, 0x23, 0x77, 0x5c)
-
struct mei_nfc_hdr {
u8 cmd;
u8 status;
#define MEI_NFC_MAX_READ (MEI_NFC_HEADER_SIZE + MEI_NFC_MAX_HCI_PAYLOAD)
struct microread_mei_phy {
- struct mei_device *mei_device;
+ struct mei_cl_device *device;
struct nfc_hci_dev *hdev;
int powered;
MEI_DUMP_SKB_OUT("mei frame sent", skb);
- r = mei_send(phy->device, skb->data, skb->len);
+ r = mei_cl_send(phy->device, skb->data, skb->len);
if (r > 0)
r = 0;
return r;
}
-static void microread_event_cb(struct mei_device *device, u32 events,
+static void microread_event_cb(struct mei_cl_device *device, u32 events,
void *context)
{
struct microread_mei_phy *phy = context;
if (phy->hard_fault != 0)
return;
- if (events & BIT(MEI_EVENT_RX)) {
+ if (events & BIT(MEI_CL_EVENT_RX)) {
struct sk_buff *skb;
int reply_size;
if (!skb)
return;
- reply_size = mei_recv(device, skb->data, MEI_NFC_MAX_READ);
+ reply_size = mei_cl_recv(device, skb->data, MEI_NFC_MAX_READ);
if (reply_size < MEI_NFC_HEADER_SIZE) {
kfree(skb);
return;
.disable = microread_mei_disable,
};
-static int microread_mei_probe(struct mei_device *device,
- const struct mei_id *id)
+static int microread_mei_probe(struct mei_cl_device *device,
+ const struct mei_cl_device_id *id)
{
struct microread_mei_phy *phy;
int r;
}
phy->device = device;
- mei_set_clientdata(device, phy);
+ mei_cl_set_drvdata(device, phy);
- r = mei_register_event_cb(device, microread_event_cb, phy);
+ r = mei_cl_register_event_cb(device, microread_event_cb, phy);
if (r) {
pr_err(MICROREAD_DRIVER_NAME ": event cb registration failed\n");
goto err_out;
return r;
}
-static int microread_mei_remove(struct mei_device *device)
+static int microread_mei_remove(struct mei_cl_device *device)
{
- struct microread_mei_phy *phy = mei_get_clientdata(device);
+ struct microread_mei_phy *phy = mei_cl_get_drvdata(device);
pr_info("Removing microread\n");
return 0;
}
-static struct mei_id microread_mei_tbl[] = {
- { MICROREAD_DRIVER_NAME, MICROREAD_UUID },
+static struct mei_cl_device_id microread_mei_tbl[] = {
+ { MICROREAD_DRIVER_NAME },
/* required last entry */
{ }
};
-
MODULE_DEVICE_TABLE(mei, microread_mei_tbl);
-static struct mei_driver microread_driver = {
+static struct mei_cl_driver microread_driver = {
.id_table = microread_mei_tbl,
.name = MICROREAD_DRIVER_NAME,
pr_debug(DRIVER_DESC ": %s\n", __func__);
- r = mei_driver_register(µread_driver);
+ r = mei_cl_driver_register(µread_driver);
if (r) {
pr_err(MICROREAD_DRIVER_NAME ": driver registration failed\n");
return r;
static void microread_mei_exit(void)
{
- mei_driver_unregister(µread_driver);
+ mei_cl_driver_unregister(µread_driver);
}
module_init(microread_mei_init);
return;
}
- if (!pci_dev->pm_cap || !pci_dev->pme_support
- || pci_check_pme_status(pci_dev)) {
- if (pci_dev->pme_poll)
- pci_dev->pme_poll = false;
+ /* Clear PME Status if set. */
+ if (pci_dev->pme_support)
+ pci_check_pme_status(pci_dev);
- pci_wakeup_event(pci_dev);
- pm_runtime_resume(&pci_dev->dev);
- }
+ if (pci_dev->pme_poll)
+ pci_dev->pme_poll = false;
+
+ pci_wakeup_event(pci_dev);
+ pm_runtime_resume(&pci_dev->dev);
if (pci_dev->subordinate)
pci_pme_wakeup_bus(pci_dev->subordinate);
/*
* Turn off Bus Master bit on the device to tell it to not
- * continue to do DMA
+ * continue to do DMA. Don't touch devices in D3cold or unknown states.
*/
- pci_clear_master(pci_dev);
+ if (pci_dev->current_state <= PCI_D3hot)
+ pci_clear_master(pci_dev);
}
#ifdef CONFIG_PM
#define PCIE_PORTDRV_PM_OPS NULL
#endif /* !PM */
-/*
- * PCIe port runtime suspend is broken for some chipsets, so use a
- * black list to disable runtime PM for these chipsets.
- */
-static const struct pci_device_id port_runtime_pm_black_list[] = {
- { /* end: all zeroes */ }
-};
-
/*
* pcie_portdrv_probe - Probe PCI-Express port devices
* @dev: PCI-Express port device being probed
* it by default.
*/
dev->d3cold_allowed = false;
- if (!pci_match_id(port_runtime_pm_black_list, dev))
- pm_runtime_put_noidle(&dev->dev);
-
return 0;
}
static void pcie_portdrv_remove(struct pci_dev *dev)
{
- if (!pci_match_id(port_runtime_pm_black_list, dev))
- pm_runtime_get_noresume(&dev->dev);
pcie_port_device_remove(dev);
pci_disable_device(dev);
}
return min((size_t)(image - rom), size);
}
-static loff_t pci_find_rom(struct pci_dev *pdev, size_t *size)
-{
- struct resource *res = &pdev->resource[PCI_ROM_RESOURCE];
- loff_t start;
-
- /* assign the ROM an address if it doesn't have one */
- if (res->parent == NULL && pci_assign_resource(pdev, PCI_ROM_RESOURCE))
- return 0;
- start = pci_resource_start(pdev, PCI_ROM_RESOURCE);
- *size = pci_resource_len(pdev, PCI_ROM_RESOURCE);
-
- if (*size == 0)
- return 0;
-
- /* Enable ROM space decodes */
- if (pci_enable_rom(pdev))
- return 0;
-
- return start;
-}
-
/**
* pci_map_rom - map a PCI ROM to kernel space
* @pdev: pointer to pci device struct
void __iomem *pci_map_rom(struct pci_dev *pdev, size_t *size)
{
struct resource *res = &pdev->resource[PCI_ROM_RESOURCE];
- loff_t start = 0;
+ loff_t start;
void __iomem *rom;
/*
return (void __iomem *)(unsigned long)
pci_resource_start(pdev, PCI_ROM_RESOURCE);
} else {
- start = pci_find_rom(pdev, size);
- }
- }
+ /* assign the ROM an address if it doesn't have one */
+ if (res->parent == NULL &&
+ pci_assign_resource(pdev, PCI_ROM_RESOURCE))
+ return NULL;
+ start = pci_resource_start(pdev, PCI_ROM_RESOURCE);
+ *size = pci_resource_len(pdev, PCI_ROM_RESOURCE);
+ if (*size == 0)
+ return NULL;
- /*
- * Some devices may provide ROMs via a source other than the BAR
- */
- if (!start && pdev->rom && pdev->romlen) {
- *size = pdev->romlen;
- return phys_to_virt(pdev->rom);
+ /* Enable ROM space decodes */
+ if (pci_enable_rom(pdev))
+ return NULL;
+ }
}
- if (!start)
- return NULL;
-
rom = ioremap(start, *size);
if (!rom) {
/* restore enable if ioremap fails */
if (res->flags & (IORESOURCE_ROM_COPY | IORESOURCE_ROM_BIOS_COPY))
return;
- if (!pdev->rom || !pdev->romlen)
- iounmap(rom);
+ iounmap(rom);
/* Disable again before continuing, leave enabled if pci=rom */
if (!(res->flags & (IORESOURCE_ROM_ENABLE | IORESOURCE_ROM_SHADOW)))
}
}
+/**
+ * pci_platform_rom - provides a pointer to any ROM image provided by the
+ * platform
+ * @pdev: pointer to pci device struct
+ * @size: pointer to receive size of pci window over ROM
+ */
+void __iomem *pci_platform_rom(struct pci_dev *pdev, size_t *size)
+{
+ if (pdev->rom && pdev->romlen) {
+ *size = pdev->romlen;
+ return phys_to_virt((phys_addr_t)pdev->rom);
+ }
+
+ return NULL;
+}
+
EXPORT_SYMBOL(pci_map_rom);
EXPORT_SYMBOL(pci_unmap_rom);
EXPORT_SYMBOL_GPL(pci_enable_rom);
EXPORT_SYMBOL_GPL(pci_disable_rom);
+EXPORT_SYMBOL(pci_platform_rom);
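
Callers that want a firmware-supplied image must now ask for it explicitly instead of relying on pci_map_rom() to find it. A minimal sketch of the expected call pattern in a hypothetical driver; note that pci_platform_rom() returns a phys_to_virt() pointer, so only the pci_map_rom() path needs pci_unmap_rom() later:

static void __iomem *example_get_rom(struct pci_dev *pdev, size_t *size,
				     bool *need_unmap)
{
	void __iomem *rom;

	/* Prefer a ROM image handed over by the platform firmware. */
	rom = pci_platform_rom(pdev, size);
	if (rom) {
		*need_unmap = false;
		return rom;
	}

	/* Fall back to decoding and mapping the ROM BAR. */
	*need_unmap = true;
	return pci_map_rom(pdev, size);
}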
/* special soc specific control */
if (ctrl->mpp_get || ctrl->mpp_set) {
- if (!ctrl->name || !ctrl->mpp_set || !ctrl->mpp_set) {
+ if (!ctrl->name || !ctrl->mpp_get || !ctrl->mpp_set) {
dev_err(&pdev->dev, "wrong soc control info\n");
return -EINVAL;
}
static int pinconf_dbg_state_print(struct seq_file *s, void *d)
{
if (strlen(dbg_state_name))
- seq_printf(s, "%s\n", dbg_pinname);
+ seq_printf(s, "%s\n", dbg_state_name);
else
seq_printf(s, "No pin state set\n");
return 0;
* pin config.
*/
-#ifdef CONFIG_GENERIC_PINCONF
+#if defined(CONFIG_GENERIC_PINCONF) && defined(CONFIG_DEBUG_FS)
void pinconf_generic_dump_pin(struct pinctrl_dev *pctldev,
struct seq_file *s, unsigned pin);
}
/* check if pin use AlternateFunction register */
- if ((af.alt_bit1 == UNUSED) && (af.alt_bit1 == UNUSED))
+ if ((af.alt_bit1 == UNUSED) && (af.alt_bit2 == UNUSED))
return mode;
/*
* if pin GPIOSEL bit is set and pin supports alternate function,
}
if (!gpio_range) {
+ /*
+ * A pin should not be freed more times than allocated.
+ */
+ if (WARN_ON(!desc->mux_usecount))
+ return NULL;
desc->mux_usecount--;
if (desc->mux_usecount)
return NULL;
{ KE_KEY, 0x2142, { KEY_MEDIA } },
{ KE_KEY, 0x213b, { KEY_INFO } },
{ KE_KEY, 0x2169, { KEY_DIRECTION } },
- { KE_KEY, 0x216a, { KEY_SETUP } },
{ KE_KEY, 0x231b, { KEY_HELP } },
{ KE_END, 0 }
};
err = hp_wmi_input_setup();
if (err)
return err;
-
- //Enable magic for hotkeys that run on the SMBus
- ec_write(0xe6,0x6e);
}
if (bios_capable) {
/* kthread for the hotkey poller */
static struct task_struct *tpacpi_hotkey_task;
-/* Acquired while the poller kthread is running, use to sync start/stop */
-static struct mutex hotkey_thread_mutex;
-
/*
* Acquire mutex to write poller control variables as an
* atomic block.
unsigned int poll_freq;
bool was_frozen;
- mutex_lock(&hotkey_thread_mutex);
-
if (tpacpi_lifecycle == TPACPI_LIFE_EXITING)
goto exit;
}
exit:
- mutex_unlock(&hotkey_thread_mutex);
return 0;
}
if (tpacpi_hotkey_task) {
kthread_stop(tpacpi_hotkey_task);
tpacpi_hotkey_task = NULL;
- mutex_lock(&hotkey_thread_mutex);
- /* at this point, the thread did exit */
- mutex_unlock(&hotkey_thread_mutex);
}
}
mutex_init(&hotkey_mutex);
#ifdef CONFIG_THINKPAD_ACPI_HOTKEY_POLL
- mutex_init(&hotkey_thread_mutex);
mutex_init(&hotkey_thread_data_mutex);
#endif
config REMOTEPROC
tristate
depends on HAS_DMA
- select FW_CONFIG
+ select FW_LOADER
select VIRTIO
config OMAP_REMOTEPROC
* TODO: support predefined notifyids (via resource table)
*/
ret = idr_alloc(&rproc->notifyids, rvring, 0, 0, GFP_KERNEL);
- if (ret) {
+ if (ret < 0) {
dev_err(dev, "idr_alloc failed: %d\n", ret);
dma_free_coherent(dev->parent, size, va, dma);
return ret;
/* it is now safe to add the virtio device */
ret = rproc_add_virtio_dev(rvdev, rsc->id);
if (ret)
- goto free_rvdev;
+ goto remove_rvdev;
return 0;
+remove_rvdev:
+ list_del(&rvdev->node);
free_rvdev:
kfree(rvdev);
return ret;
/* Unregister as remoteproc device */
rproc_del(sproc->rproc);
+ dma_free_coherent(sproc->rproc->dev.parent, SPROC_FW_SIZE,
+ sproc->fw_addr, sproc->fw_dma_addr);
rproc_put(sproc->rproc);
mdev->drv_data = NULL;
/* Register as a remoteproc device */
err = rproc_add(rproc);
if (err)
- goto free_rproc;
+ goto free_mem;
return 0;
+free_mem:
+ dma_free_coherent(rproc->dev.parent, SPROC_FW_SIZE,
+ sproc->fw_addr, sproc->fw_dma_addr);
free_rproc:
/* Reset device data upon error */
mdev->drv_data = NULL;
rtc->da9052 = dev_get_drvdata(pdev->dev.parent);
platform_set_drvdata(pdev, rtc);
- rtc->irq = platform_get_irq_byname(pdev, "ALM");
- ret = devm_request_threaded_irq(&pdev->dev, rtc->irq, NULL,
- da9052_rtc_irq,
- IRQF_TRIGGER_LOW | IRQF_ONESHOT,
- "ALM", rtc);
+ rtc->irq = DA9052_IRQ_ALARM;
+ ret = da9052_request_irq(rtc->da9052, rtc->irq, "ALM",
+ da9052_rtc_irq, rtc);
if (ret != 0) {
rtc_err(rtc->da9052, "irq registration failed: %d\n", ret);
return ret;
case EQC_WR_PROHIBIT:
spin_lock_irqsave(&bdev->lock, flags);
if (bdev->state != SCM_WR_PROHIBIT)
- pr_info("%lu: Write access to the SCM increment is suspended\n",
+ pr_info("%lx: Write access to the SCM increment is suspended\n",
(unsigned long) bdev->scmdev->address);
bdev->state = SCM_WR_PROHIBIT;
spin_unlock_irqrestore(&bdev->lock, flags);
spin_lock_irqsave(&bdev->lock, flags);
if (bdev->state == SCM_WR_PROHIBIT)
- pr_info("%lu: Write access to the SCM increment is restored\n",
+ pr_info("%lx: Write access to the SCM increment is restored\n",
(unsigned long) bdev->scmdev->address);
bdev->state = SCM_OPER;
spin_unlock_irqrestore(&bdev->lock, flags);
goto out;
scm_major = ret;
- if (scm_alloc_rqs(nr_requests))
+ ret = scm_alloc_rqs(nr_requests);
+ if (ret)
goto out_unreg;
scm_debug = debug_register("scm_log", 16, 1, 16);
- if (!scm_debug)
+ if (!scm_debug) {
+ ret = -ENOMEM;
goto out_free;
+ }
debug_register_view(scm_debug, &debug_hex_ascii_view);
debug_set_level(scm_debug, 2);
switch (event) {
case SCM_CHANGE:
- pr_info("%lu: The capabilities of the SCM increment changed\n",
+ pr_info("%lx: The capabilities of the SCM increment changed\n",
(unsigned long) scmdev->address);
SCM_LOG(2, "State changed");
SCM_LOG_STATE(2, scmdev);
int i, rc;
/* Check if the tty3270 is already there. */
- view = raw3270_find_view(&tty3270_fn, tty->index);
+ view = raw3270_find_view(&tty3270_fn, tty->index + RAW3270_FIRSTMINOR);
if (!IS_ERR(view)) {
tp = container_of(view, struct tty3270, view);
tty->driver_data = tp;
tp->inattr = TF_INPUT;
return tty_port_install(&tp->port, driver, tty);
}
- if (tty3270_max_index < tty->index)
- tty3270_max_index = tty->index;
+ if (tty3270_max_index < tty->index + 1)
+ tty3270_max_index = tty->index + 1;
/* Allocate tty3270 structure on first open. */
tp = tty3270_alloc_view();
if (IS_ERR(tp))
return PTR_ERR(tp);
- rc = raw3270_add_view(&tp->view, &tty3270_fn, tty->index);
+ rc = raw3270_add_view(&tp->view, &tty3270_fn,
+ tty->index + RAW3270_FIRSTMINOR);
if (rc) {
tty3270_free_view(tp);
return rc;
void tty3270_create_cb(int minor)
{
- tty_register_device(tty3270_driver, minor, NULL);
+ tty_register_device(tty3270_driver, minor - RAW3270_FIRSTMINOR, NULL);
}
void tty3270_destroy_cb(int minor)
{
- tty_unregister_device(tty3270_driver, minor);
+ tty_unregister_device(tty3270_driver, minor - RAW3270_FIRSTMINOR);
}
struct raw3270_notifier tty3270_notifier =
driver->driver_name = "tty3270";
driver->name = "3270/tty";
driver->major = IBM_TTY3270_MAJOR;
- driver->minor_start = 0;
+ driver->minor_start = RAW3270_FIRSTMINOR;
+ driver->name_base = RAW3270_FIRSTMINOR;
driver->type = TTY_DRIVER_TYPE_SYSTEM;
driver->subtype = SYSTEM_TYPE_TTY;
driver->init_termios = tty_std_termios;
unsigned long thread_start_mask;
unsigned long thread_allowed_mask;
unsigned long thread_running_mask;
+ struct task_struct *recovery_task;
spinlock_t ip_lock;
struct list_head ip_list;
struct list_head *ip_tbd_list;
extern struct kmem_cache *qeth_core_header_cache;
extern struct qeth_dbf_info qeth_dbf[QETH_DBF_INFOS];
+void qeth_set_recovery_task(struct qeth_card *);
+void qeth_clear_recovery_task(struct qeth_card *);
void qeth_set_allowed_threads(struct qeth_card *, unsigned long , int);
int qeth_threads_running(struct qeth_card *, unsigned long);
int qeth_wait_for_threads(struct qeth_card *, unsigned long);
return "n/a";
}
+void qeth_set_recovery_task(struct qeth_card *card)
+{
+ card->recovery_task = current;
+}
+EXPORT_SYMBOL_GPL(qeth_set_recovery_task);
+
+void qeth_clear_recovery_task(struct qeth_card *card)
+{
+ card->recovery_task = NULL;
+}
+EXPORT_SYMBOL_GPL(qeth_clear_recovery_task);
+
+static bool qeth_is_recovery_task(const struct qeth_card *card)
+{
+ return card->recovery_task == current;
+}
+
void qeth_set_allowed_threads(struct qeth_card *card, unsigned long threads,
int clear_start_mask)
{
int qeth_wait_for_threads(struct qeth_card *card, unsigned long threads)
{
+ if (qeth_is_recovery_task(card))
+ return 0;
return wait_event_interruptible(card->wait_q,
qeth_threads_running(card, threads) == 0);
}
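
The purpose of this bookkeeping is deadlock avoidance: recovery runs under the QETH_RECOVER_THREAD bit, so if the recovery path itself ever reached qeth_wait_for_threads() it would block forever waiting for its own bit to clear. Marking the task lets the wait degenerate to a no-op for the owner. A minimal sketch of the usage contract, mirrored by the l2/l3 hunks below:

static int example_recover(struct qeth_card *card)
{
	qeth_set_recovery_task(card);
	/* The offline/online sequence may call qeth_wait_for_threads();
	 * those calls now return immediately for this task. */
	qeth_clear_recovery_task(card);
	return 0;
}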
QETH_CARD_TEXT(card, 2, "recover2");
dev_warn(&card->gdev->dev,
"A recovery process has been started for the device\n");
+ qeth_set_recovery_task(card);
__qeth_l2_set_offline(card->gdev, 1);
rc = __qeth_l2_set_online(card->gdev, 1);
if (!rc)
dev_warn(&card->gdev->dev, "The qeth device driver "
"failed to recover an error on the device\n");
}
+ qeth_clear_recovery_task(card);
qeth_clear_thread_start_bit(card, QETH_RECOVER_THREAD);
qeth_clear_thread_running_bit(card, QETH_RECOVER_THREAD);
return 0;
QETH_CARD_TEXT(card, 2, "recover2");
dev_warn(&card->gdev->dev,
"A recovery process has been started for the device\n");
+ qeth_set_recovery_task(card);
__qeth_l3_set_offline(card->gdev, 1);
rc = __qeth_l3_set_online(card->gdev, 1);
if (!rc)
dev_warn(&card->gdev->dev, "The qeth device driver "
"failed to recover an error on the device\n");
}
+ qeth_clear_recovery_task(card);
qeth_clear_thread_start_bit(card, QETH_RECOVER_THREAD);
qeth_clear_thread_running_bit(card, QETH_RECOVER_THREAD);
return 0;
return IRQ_HANDLED;
}
-static void __init reset_one_i2c(struct bbc_i2c_bus *bp)
+static void reset_one_i2c(struct bbc_i2c_bus *bp)
{
writeb(I2C_PCF_PIN, bp->i2c_control_regs + 0x0);
writeb(bp->own, bp->i2c_control_regs + 0x1);
writeb(I2C_PCF_IDLE, bp->i2c_control_regs + 0x0);
}
-static struct bbc_i2c_bus * __init attach_one_i2c(struct platform_device *op, int index)
+static struct bbc_i2c_bus * attach_one_i2c(struct platform_device *op, int index)
{
struct bbc_i2c_bus *bp;
struct device_node *dp;
fc_exch_init(lport);
fc_rport_init(lport);
fc_disc_init(lport);
+ fc_disc_config(lport, lport);
return 0;
}
}
ctlr = bnx2fc_to_ctlr(interface);
+ cdev = fcoe_ctlr_to_ctlr_dev(ctlr);
interface->vlan_id = vlan_id;
interface->timer_work_queue =
goto ifput_err;
}
- lport = bnx2fc_if_create(interface, &interface->hba->pcidev->dev, 0);
+ lport = bnx2fc_if_create(interface, &cdev->dev, 0);
if (!lport) {
printk(KERN_ERR PFX "Failed to create interface (%s)\n",
netdev->name);
/* Make this master N_port */
ctlr->lp = lport;
- cdev = fcoe_ctlr_to_ctlr_dev(ctlr);
-
if (link_state == BNX2FC_CREATE_LINK_UP)
cdev->enabled = FCOE_CTLR_ENABLED;
else
{
struct net_device *netdev = fcoe->netdev;
struct fcoe_ctlr *fip = fcoe_to_ctlr(fcoe);
- struct fcoe_ctlr_device *ctlr_dev = fcoe_ctlr_to_ctlr_dev(fip);
rtnl_lock();
if (!fcoe->removed)
/* tear-down the FCoE controller */
fcoe_ctlr_destroy(fip);
scsi_host_put(fip->lp->host);
- fcoe_ctlr_device_delete(ctlr_dev);
dev_put(netdev);
module_put(THIS_MODULE);
}
*/
static void fcoe_destroy_work(struct work_struct *work)
{
+ struct fcoe_ctlr_device *cdev;
+ struct fcoe_ctlr *ctlr;
struct fcoe_port *port;
struct fcoe_interface *fcoe;
struct Scsi_Host *shost;
mutex_lock(&fcoe_config_mutex);
fcoe = port->priv;
+ ctlr = fcoe_to_ctlr(fcoe);
+ cdev = fcoe_ctlr_to_ctlr_dev(ctlr);
+
fcoe_if_destroy(port->lport);
fcoe_interface_cleanup(fcoe);
mutex_unlock(&fcoe_config_mutex);
+
+ fcoe_ctlr_device_delete(cdev);
}
/**
rc = -EIO;
rtnl_unlock();
fcoe_interface_cleanup(fcoe);
- goto out_nortnl;
+ mutex_unlock(&fcoe_config_mutex);
+ fcoe_ctlr_device_delete(ctlr_dev);
+ goto out;
}
/* Make this the "master" N_Port */
out_nodev:
rtnl_unlock();
-out_nortnl:
mutex_unlock(&fcoe_config_mutex);
+out:
return rc;
}
fc_lport_set_local_id(fip->lp, new_port_id);
}
+/**
+ * fcoe_ctlr_mode_set() - Set or reset the ctlr's mode
+ * @lport: The local port to be (re)configured
+ * @fip: The FCoE controller whose mode is changing
+ * @fip_mode: The new fip mode
+ *
+ * Note that we shouldn't be changing the libfc discovery settings
+ * (fc_disc_config) while an lport is going through the libfc state
+ * machine. The mode can only be changed when a fcoe_ctlr device is
+ * disabled, which ensures that this routine is only called while
+ * nothing else is in progress.
+ */
+void fcoe_ctlr_mode_set(struct fc_lport *lport, struct fcoe_ctlr *fip,
+ enum fip_state fip_mode)
+{
+ void *priv;
+
+ WARN_ON(lport->state != LPORT_ST_RESET &&
+ lport->state != LPORT_ST_DISABLED);
+
+ if (fip_mode == FIP_MODE_VN2VN) {
+ lport->rport_priv_size = sizeof(struct fcoe_rport);
+ lport->point_to_multipoint = 1;
+ lport->tt.disc_recv_req = fcoe_ctlr_disc_recv;
+ lport->tt.disc_start = fcoe_ctlr_disc_start;
+ lport->tt.disc_stop = fcoe_ctlr_disc_stop;
+ lport->tt.disc_stop_final = fcoe_ctlr_disc_stop_final;
+ priv = fip;
+ } else {
+ lport->rport_priv_size = 0;
+ lport->point_to_multipoint = 0;
+ lport->tt.disc_recv_req = NULL;
+ lport->tt.disc_start = NULL;
+ lport->tt.disc_stop = NULL;
+ lport->tt.disc_stop_final = NULL;
+ priv = lport;
+ }
+
+ fc_disc_config(lport, priv);
+}
+
/**
* fcoe_libfc_config() - Sets up libfc related properties for local port
* @lport: The local port to configure libfc for
fc_exch_init(lport);
fc_elsct_init(lport);
fc_lport_init(lport);
- if (fip->mode == FIP_MODE_VN2VN)
- lport->rport_priv_size = sizeof(struct fcoe_rport);
fc_rport_init(lport);
- if (fip->mode == FIP_MODE_VN2VN) {
- lport->point_to_multipoint = 1;
- lport->tt.disc_recv_req = fcoe_ctlr_disc_recv;
- lport->tt.disc_start = fcoe_ctlr_disc_start;
- lport->tt.disc_stop = fcoe_ctlr_disc_stop;
- lport->tt.disc_stop_final = fcoe_ctlr_disc_stop_final;
- mutex_init(&lport->disc.disc_mutex);
- INIT_LIST_HEAD(&lport->disc.rports);
- lport->disc.priv = fip;
- } else {
- fc_disc_init(lport);
- }
+ fc_disc_init(lport);
+ fcoe_ctlr_mode_set(lport, fip, fip->mode);
return 0;
}
EXPORT_SYMBOL_GPL(fcoe_libfc_config);
void fcoe_ctlr_set_fip_mode(struct fcoe_ctlr_device *ctlr_dev)
{
struct fcoe_ctlr *ctlr = fcoe_ctlr_device_priv(ctlr_dev);
+ struct fc_lport *lport = ctlr->lp;
mutex_lock(&ctlr->ctlr_mutex);
switch (ctlr_dev->mode) {
}
mutex_unlock(&ctlr->ctlr_mutex);
+
+ fcoe_ctlr_mode_set(lport, ctlr, ctlr->mode);
}
EXPORT_SYMBOL(fcoe_ctlr_set_fip_mode);
sdev->allow_restart = 1;
blk_queue_rq_timeout(sdev->request_queue, 120 * HZ);
}
- scsi_adjust_queue_depth(sdev, 0, shost->cmd_per_lun);
spin_unlock_irqrestore(shost->host_lock, lock_flags);
+ scsi_adjust_queue_depth(sdev, 0, shost->cmd_per_lun);
return 0;
}
ipr_trace;
}
- list_add_tail(&ipr_cmd->queue, &hrrq->hrrq_free_q);
+ list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q);
if (!ipr_is_naca_model(res))
res->needs_sync_complete = 1;
int_reg = readl(ioa_cfg->regs.sense_interrupt_mask_reg);
spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
- rc = request_irq(pdev->irq, ipr_test_intr, 0, IPR_NAME, ioa_cfg);
+ if (ioa_cfg->intr_flag == IPR_USE_MSIX)
+ rc = request_irq(ioa_cfg->vectors_info[0].vec, ipr_test_intr, 0, IPR_NAME, ioa_cfg);
+ else
+ rc = request_irq(pdev->irq, ipr_test_intr, 0, IPR_NAME, ioa_cfg);
if (rc) {
dev_err(&pdev->dev, "Can not assign irq %d\n", pdev->irq);
return rc;
spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
- free_irq(pdev->irq, ioa_cfg);
+ if (ioa_cfg->intr_flag == IPR_USE_MSIX)
+ free_irq(ioa_cfg->vectors_info[0].vec, ioa_cfg);
+ else
+ free_irq(pdev->irq, ioa_cfg);
LEAVE;
spin_unlock_irqrestore(ioa_cfg->host->host_lock, host_lock_flags);
wait_event(ioa_cfg->reset_wait_q, !ioa_cfg->in_reset_reload);
flush_work(&ioa_cfg->work_q);
+ INIT_LIST_HEAD(&ioa_cfg->used_res_q);
spin_lock_irqsave(ioa_cfg->host->host_lock, host_lock_flags);
spin_lock(&ipr_driver_lock);
}
/**
- * fc_disc_init() - Initialize the discovery layer for a local port
- * @lport: The local port that needs the discovery layer to be initialized
+ * fc_disc_config() - Configure the discovery layer for a local port
+ * @lport: The local port that needs the discovery layer to be configured
+ * @priv: Private data structure for users of the discovery layer
*/
-int fc_disc_init(struct fc_lport *lport)
+void fc_disc_config(struct fc_lport *lport, void *priv)
{
- struct fc_disc *disc;
+ struct fc_disc *disc = &lport->disc;
if (!lport->tt.disc_start)
lport->tt.disc_start = fc_disc_start;
lport->tt.disc_recv_req = fc_disc_recv_req;
disc = &lport->disc;
+
+ disc->priv = priv;
+}
+EXPORT_SYMBOL(fc_disc_config);
+
+/**
+ * fc_disc_init() - Initialize the discovery layer for a local port
+ * @lport: The local port that needs the discovery layer to be initialized
+ */
+void fc_disc_init(struct fc_lport *lport)
+{
+ struct fc_disc *disc = &lport->disc;
+
INIT_DELAYED_WORK(&disc->disc_work, fc_disc_timeout);
mutex_init(&disc->disc_mutex);
INIT_LIST_HEAD(&disc->rports);
-
- disc->priv = lport;
-
- return 0;
}
EXPORT_SYMBOL(fc_disc_init);
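
With this split, one-time initialisation and repeatable configuration are separate steps: fc_disc_init() sets up the mutex, rport list and delayed work exactly once, while fc_disc_config() may be called again later to repoint disc->priv (as fcoe_ctlr_mode_set() above does on a FIP mode change). A minimal sketch of a transport bring-up under that contract:

static void example_disc_setup(struct fc_lport *lport, void *disc_user)
{
	/* One-time setup of discovery state. */
	fc_disc_init(lport);

	/* Safe to repeat whenever the discovery user changes. */
	fc_disc_config(lport, disc_user);
}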
linkrate = phy->linkrate;
memcpy(sas_addr, phy->attached_sas_addr, SAS_ADDR_SIZE);
+ /* Handle vacant phy - rest of dr data is not valid so skip it */
+ if (phy->phy_state == PHY_VACANT) {
+ memset(phy->attached_sas_addr, 0, SAS_ADDR_SIZE);
+ phy->attached_dev_type = NO_DEVICE;
+ if (!test_bit(SAS_HA_ATA_EH_ACTIVE, &ha->state)) {
+ phy->phy_id = phy_id;
+ goto skip;
+ } else
+ goto out;
+ }
+
phy->attached_dev_type = to_dev_type(dr);
if (test_bit(SAS_HA_ATA_EH_ACTIVE, &ha->state))
goto out;
phy->phy->maximum_linkrate = dr->pmax_linkrate;
phy->phy->negotiated_linkrate = phy->linkrate;
+ skip:
if (new_phy)
if (sas_phy_add(phy->phy)) {
sas_phy_free(phy->phy);
if (!disc_req)
return -ENOMEM;
- disc_resp = alloc_smp_req(DISCOVER_RESP_SIZE);
+ disc_resp = alloc_smp_resp(DISCOVER_RESP_SIZE);
if (!disc_resp) {
kfree(disc_req);
return -ENOMEM;
struct lpfc_rqe *temp_hrqe;
struct lpfc_rqe *temp_drqe;
struct lpfc_register doorbell;
- int put_index = hq->host_index;
+ int put_index;
/* sanity check on queue memory */
if (unlikely(!hq) || unlikely(!dq))
return -ENOMEM;
+ put_index = hq->host_index;
temp_hrqe = hq->qe[hq->host_index].rqe;
temp_drqe = dq->qe[dq->host_index].rqe;
"Timer for the VP[%d] has stopped\n", vha->vp_idx);
}
- /* No pending activities shall be there on the vha now */
- if (ql2xextended_error_logging & ql_dbg_user)
- msleep(random32()%10); /* Just to see if something falls on
- * the net we have placed below */
-
BUG_ON(atomic_read(&vha->vref_count));
qla2x00_free_fcports(vha);
* | Mailbox commands | 0x115b | 0x111a-0x111b |
* | | | 0x112c-0x112e |
* | | | 0x113a |
+ * | | | 0x1155-0x1158 |
* | Device Discovery | 0x2087 | 0x2020-0x2022, |
* | | | 0x2016 |
* | Queue Command and IO tracing | 0x3031 | 0x3006-0x300b |
void *ring;
} aq, *aqp;
- if (!ha->tgt.atio_q_length)
+ if (!ha->tgt.atio_ring)
return ptr;
num_queues = 1;
#define MBX_1 BIT_1
#define MBX_0 BIT_0
-#define RNID_TYPE_SET_VERSION 0x9
#define RNID_TYPE_ASIC_TEMP 0xC
/*
extern int
qla2x00_disable_fce_trace(scsi_qla_host_t *, uint64_t *, uint64_t *);
-extern int
-qla2x00_set_driver_version(scsi_qla_host_t *, char *);
-
extern int
qla2x00_read_sfp(scsi_qla_host_t *, dma_addr_t, uint8_t *,
uint16_t, uint16_t, uint16_t, uint16_t);
if (IS_QLA24XX_TYPE(ha) || IS_QLA25XX(ha))
qla24xx_read_fcp_prio_cfg(vha);
- qla2x00_set_driver_version(vha, QLA2XXX_VERSION);
-
return (rval);
}
mq_size += ha->max_rsp_queues *
(rsp->length * sizeof(response_t));
}
- if (ha->tgt.atio_q_length)
+ if (ha->tgt.atio_ring)
mq_size += ha->tgt.atio_q_length * sizeof(request_t);
/* Allocate memory for Fibre Channel Event Buffer. */
if (!IS_QLA25XX(ha) && !IS_QLA81XX(ha) && !IS_QLA83XX(ha))
return rval;
}
-int
-qla2x00_set_driver_version(scsi_qla_host_t *vha, char *version)
-{
- int rval;
- mbx_cmd_t mc;
- mbx_cmd_t *mcp = &mc;
- int len;
- uint16_t dwlen;
- uint8_t *str;
- dma_addr_t str_dma;
- struct qla_hw_data *ha = vha->hw;
-
- if (!IS_FWI2_CAPABLE(ha) || IS_QLA82XX(ha))
- return QLA_FUNCTION_FAILED;
-
- ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x1155,
- "Entered %s.\n", __func__);
-
- str = dma_pool_alloc(ha->s_dma_pool, GFP_KERNEL, &str_dma);
- if (!str) {
- ql_log(ql_log_warn, vha, 0x1156,
- "Failed to allocate driver version param.\n");
- return QLA_MEMORY_ALLOC_FAILED;
- }
-
- memcpy(str, "\x7\x3\x11\x0", 4);
- dwlen = str[0];
- len = dwlen * sizeof(uint32_t) - 4;
- memset(str + 4, 0, len);
- if (len > strlen(version))
- len = strlen(version);
- memcpy(str + 4, version, len);
-
- mcp->mb[0] = MBC_SET_RNID_PARAMS;
- mcp->mb[1] = RNID_TYPE_SET_VERSION << 8 | dwlen;
- mcp->mb[2] = MSW(LSD(str_dma));
- mcp->mb[3] = LSW(LSD(str_dma));
- mcp->mb[6] = MSW(MSD(str_dma));
- mcp->mb[7] = LSW(MSD(str_dma));
- mcp->out_mb = MBX_7|MBX_6|MBX_3|MBX_2|MBX_1|MBX_0;
- mcp->in_mb = MBX_0;
- mcp->tov = MBX_TOV_SECONDS;
- mcp->flags = 0;
- rval = qla2x00_mailbox_command(vha, mcp);
-
- if (rval != QLA_SUCCESS) {
- ql_dbg(ql_dbg_mbx, vha, 0x1157,
- "Failed=%x mb[0]=%x.\n", rval, mcp->mb[0]);
- } else {
- ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x1158,
- "Done %s.\n", __func__);
- }
-
- dma_pool_free(ha->s_dma_pool, str, str_dma);
-
- return rval;
-}
-
static int
qla2x00_read_asic_temperature(scsi_qla_host_t *vha, uint16_t *temp)
{
/*
* Driver version
*/
-#define QLA2XXX_VERSION "8.04.00.08-k"
+#define QLA2XXX_VERSION "8.04.00.13-k"
#define QLA_DRIVER_MAJOR_VER 8
#define QLA_DRIVER_MINOR_VER 4
tpnt->disk = disk;
disk->private_data = &tpnt->driver;
disk->queue = SDp->request_queue;
+ /* SCSI tape doesn't register this gendisk via add_disk(). Manually
* take the queue reference that disk_release() expects. */
+ if (!blk_get_queue(disk->queue))
+ goto out_put_disk;
tpnt->driver = &st_template;
tpnt->device = SDp;
idr_preload_end();
if (error < 0) {
pr_warn("st: idr allocation failed: %d\n", error);
- goto out_put_disk;
+ goto out_put_queue;
}
tpnt->index = error;
sprintf(disk->disk_name, "st%d", tpnt->index);
spin_lock(&st_index_lock);
idr_remove(&st_index_idr, tpnt->index);
spin_unlock(&st_index_lock);
+out_put_queue:
+ blk_put_queue(disk->queue);
out_put_disk:
put_disk(disk);
kfree(tpnt);
config SPI_ALTERA
tristate "Altera SPI Controller"
+ depends on GENERIC_HARDIRQS
select SPI_BITBANG
help
This is the driver for the Altera SPI Controller.
config SPI_PXA2XX
tristate "PXA2xx SSP SPI master"
- depends on ARCH_PXA || PCI || ACPI
+ depends on (ARCH_PXA || PCI || ACPI) && GENERIC_HARDIRQS
select PXA_SSP if ARCH_PXA
help
This enables using a PXA2xx or Sodaville SSP port as a SPI master
static int bcm63xx_spi_setup(struct spi_device *spi)
{
struct bcm63xx_spi *bs;
- int ret;
bs = spi_master_get_devdata(spi->master);
default:
dev_err(dev, "unsupported MSG_CTL width: %d\n",
bs->msg_ctl_width);
- goto out_clk_disable;
+ goto out_err;
}
/* Initialize hardware */
for (i = count; i > 0; i--) {
data = tx_buf ? *tx_buf++ : 0;
- if (len == EOFBYTE)
+ if (len == EOFBYTE && t->cs_change)
setbits32(&fifo->txcmd, MPC512x_PSC_FIFO_EOF);
out_8(&fifo->txdata_8, data);
len--;
master->dev.parent = &pdev->dev;
master->dev.of_node = pdev->dev.of_node;
- ACPI_HANDLE_SET(&master->dev, ACPI_HANDLE(&pdev->dev));
/* the spi->mode bits understood by this driver: */
master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH | SPI_LOOP;
{
struct s3c64xx_spi_driver_data *sdd = data;
struct spi_master *spi = sdd->master;
- unsigned int val;
+ unsigned int val, clr = 0;
- val = readl(sdd->regs + S3C64XX_SPI_PENDING_CLR);
+ val = readl(sdd->regs + S3C64XX_SPI_STATUS);
- val &= S3C64XX_SPI_PND_RX_OVERRUN_CLR |
- S3C64XX_SPI_PND_RX_UNDERRUN_CLR |
- S3C64XX_SPI_PND_TX_OVERRUN_CLR |
- S3C64XX_SPI_PND_TX_UNDERRUN_CLR;
-
- writel(val, sdd->regs + S3C64XX_SPI_PENDING_CLR);
-
- if (val & S3C64XX_SPI_PND_RX_OVERRUN_CLR)
+ if (val & S3C64XX_SPI_ST_RX_OVERRUN_ERR) {
+ clr = S3C64XX_SPI_PND_RX_OVERRUN_CLR;
dev_err(&spi->dev, "RX overrun\n");
- if (val & S3C64XX_SPI_PND_RX_UNDERRUN_CLR)
+ }
+ if (val & S3C64XX_SPI_ST_RX_UNDERRUN_ERR) {
+ clr |= S3C64XX_SPI_PND_RX_UNDERRUN_CLR;
dev_err(&spi->dev, "RX underrun\n");
- if (val & S3C64XX_SPI_PND_TX_OVERRUN_CLR)
+ }
+ if (val & S3C64XX_SPI_ST_TX_OVERRUN_ERR) {
+ clr |= S3C64XX_SPI_PND_TX_OVERRUN_CLR;
dev_err(&spi->dev, "TX overrun\n");
- if (val & S3C64XX_SPI_PND_TX_UNDERRUN_CLR)
+ }
+ if (val & S3C64XX_SPI_ST_TX_UNDERRUN_ERR) {
+ clr |= S3C64XX_SPI_PND_TX_UNDERRUN_CLR;
dev_err(&spi->dev, "TX underrun\n");
+ }
+
+ /* Clear the pending irq by setting and then clearing it */
+ writel(clr, sdd->regs + S3C64XX_SPI_PENDING_CLR);
+ writel(0, sdd->regs + S3C64XX_SPI_PENDING_CLR);
return IRQ_HANDLED;
}
writel(0, regs + S3C64XX_SPI_MODE_CFG);
writel(0, regs + S3C64XX_SPI_PACKET_CNT);
- /* Clear any irq pending bits */
- writel(readl(regs + S3C64XX_SPI_PENDING_CLR),
- regs + S3C64XX_SPI_PENDING_CLR);
+ /* Clear any pending irq bits: ack by setting and then clearing them */
+ val = S3C64XX_SPI_PND_RX_OVERRUN_CLR |
+ S3C64XX_SPI_PND_RX_UNDERRUN_CLR |
+ S3C64XX_SPI_PND_TX_OVERRUN_CLR |
+ S3C64XX_SPI_PND_TX_UNDERRUN_CLR;
+ writel(val, regs + S3C64XX_SPI_PENDING_CLR);
+ writel(0, regs + S3C64XX_SPI_PENDING_CLR);
writel(0, regs + S3C64XX_SPI_SWAP_CFG);
return 0;
}
-static int tegra_slink_prepare_transfer(struct spi_master *master)
-{
- struct tegra_slink_data *tspi = spi_master_get_devdata(master);
-
- return pm_runtime_get_sync(tspi->dev);
-}
-
-static int tegra_slink_unprepare_transfer(struct spi_master *master)
-{
- struct tegra_slink_data *tspi = spi_master_get_devdata(master);
-
- pm_runtime_put(tspi->dev);
- return 0;
-}
-
static int tegra_slink_transfer_one_message(struct spi_master *master,
struct spi_message *msg)
{
msg->status = 0;
msg->actual_length = 0;
+ ret = pm_runtime_get_sync(tspi->dev);
+ if (ret < 0) {
+ dev_err(tspi->dev, "runtime get failed: %d\n", ret);
+ goto done;
+ }
+
single_xfer = list_is_singular(&msg->transfers);
list_for_each_entry(xfer, &msg->transfers, transfer_list) {
INIT_COMPLETION(tspi->xfer_completion);
exit:
tegra_slink_writel(tspi, tspi->def_command_reg, SLINK_COMMAND);
tegra_slink_writel(tspi, tspi->def_command2_reg, SLINK_COMMAND2);
+ pm_runtime_put(tspi->dev);
+done:
msg->status = ret;
spi_finalize_current_message(master);
return ret;
/* the spi->mode bits understood by this driver: */
master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH;
master->setup = tegra_slink_setup;
- master->prepare_transfer_hardware = tegra_slink_prepare_transfer;
master->transfer_one_message = tegra_slink_transfer_one_message;
- master->unprepare_transfer_hardware = tegra_slink_unprepare_transfer;
master->num_chipselect = MAX_CHIP_SELECT;
master->bus_num = -1;
/* Lock queue and check for queue work */
spin_lock_irqsave(&master->queue_lock, flags);
if (list_empty(&master->queue) || !master->running) {
- if (master->busy && master->unprepare_transfer_hardware) {
- ret = master->unprepare_transfer_hardware(master);
- if (ret) {
- spin_unlock_irqrestore(&master->queue_lock, flags);
- dev_err(&master->dev,
- "failed to unprepare transfer hardware\n");
- return;
- }
+ if (!master->busy) {
+ spin_unlock_irqrestore(&master->queue_lock, flags);
+ return;
}
master->busy = false;
spin_unlock_irqrestore(&master->queue_lock, flags);
+ if (master->unprepare_transfer_hardware &&
+ master->unprepare_transfer_hardware(master))
+ dev_err(&master->dev,
+ "failed to unprepare transfer hardware\n");
return;
}
acpi_status status;
acpi_handle handle;
- handle = ACPI_HANDLE(&master->dev);
+ handle = ACPI_HANDLE(master->dev.parent);
if (!handle)
return;
return 0;
}
}
+
+void ssb_pmu_spuravoid_pllupdate(struct ssb_chipcommon *cc, int spuravoid)
+{
+ u32 pmu_ctl = 0;
+
+ switch (cc->dev->bus->chip_id) {
+ case 0x4322:
+ ssb_chipco_pll_write(cc, SSB_PMU1_PLLCTL0, 0x11100070);
+ ssb_chipco_pll_write(cc, SSB_PMU1_PLLCTL1, 0x1014140a);
+ ssb_chipco_pll_write(cc, SSB_PMU1_PLLCTL5, 0x88888854);
+ if (spuravoid == 1)
+ ssb_chipco_pll_write(cc, SSB_PMU1_PLLCTL2, 0x05201828);
+ else
+ ssb_chipco_pll_write(cc, SSB_PMU1_PLLCTL2, 0x05001828);
+ pmu_ctl = SSB_CHIPCO_PMU_CTL_PLL_UPD;
+ break;
+ case 43222:
+ /* TODO: BCM43222 requires updating PLLs too */
+ return;
+ default:
+ ssb_printk(KERN_ERR PFX
+ "Unknown spuravoidance settings for chip 0x%04X, not changing PLL\n",
+ cc->dev->bus->chip_id);
+ return;
+ }
+
+ chipco_set32(cc, SSB_CHIPCO_PMU_CTL, pmu_ctl);
+}
+EXPORT_SYMBOL_GPL(ssb_pmu_spuravoid_pllupdate);
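
A WLAN driver sitting on an SSB bus is the intended consumer of the new export; it passes the chipcommon core of its bus together with the spur-avoidance mode it wants. A hedged sketch (the caller and mode value are illustrative, not taken from this series):

static void example_request_spuravoid(struct ssb_device *dev, int mode)
{
	/* Reprograms the PMU PLL; unknown chips only log an error. */
	ssb_pmu_spuravoid_pllupdate(&dev->bus->chipco, mode);
}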
case TRIG_NONE:
/* continuous acquisition */
devpriv->ai_continous = 1;
- devpriv->ai_sample_count = 0;
+ devpriv->ai_sample_count = 1;
break;
}
depends on CONFIGFS_FS=y && SYSFS=y && !HIGHMEM && ZCACHE=y
depends on NET
# must ensure struct page is 8-byte aligned
- select HAVE_ALIGNED_STRUCT_PAGE if !64_BIT
+ select HAVE_ALIGNED_STRUCT_PAGE if !64BIT
default n
help
RAMster allows RAM on other machines in a cluster to be utilized
{
char *endptr;
unsigned long id;
+ unsigned char id_as_uchar;
unsigned char digest[MD5_SIGNATURE_SIZE];
unsigned char type, response[MD5_SIGNATURE_SIZE * 2 + 2];
unsigned char identifier[10], *challenge = NULL;
goto out;
}
- sg_init_one(&sg, &id, 1);
+ /* To handle both endiannesses */
+ id_as_uchar = id;
+ sg_init_one(&sg, &id_as_uchar, 1);
ret = crypto_hash_update(&desc, &sg, 1);
if (ret < 0) {
pr_err("crypto_hash_update() failed for id\n");
case REPORT_LUNS:
case RECEIVE_DIAGNOSTIC:
case SEND_DIAGNOSTIC:
+ return 0;
case MAINTENANCE_IN:
switch (cdb[1] & 0x1f) {
case MI_REPORT_TARGET_PGS:
switch (cdb[0]) {
case INQUIRY:
case REPORT_LUNS:
+ return 0;
case MAINTENANCE_IN:
switch (cdb[1] & 0x1f) {
case MI_REPORT_TARGET_PGS:
switch (cdb[0]) {
case INQUIRY:
case REPORT_LUNS:
+ return 0;
case MAINTENANCE_IN:
switch (cdb[1] & 0x1f) {
case MI_REPORT_TARGET_PGS:
#define FD_DEVICE_QUEUE_DEPTH 32
#define FD_MAX_DEVICE_QUEUE_DEPTH 128
#define FD_BLOCKSIZE 512
-#define FD_MAX_SECTORS 1024
+#define FD_MAX_SECTORS 2048
#define RRF_EMULATE_CDB 0x01
#define RRF_GOT_LBA 0x02
pr_debug("PSCSI: i: %d page: %p len: %d off: %d\n", i,
page, len, off);
- while (len > 0 && data_len > 0) {
+	/*
+	 * Each sg element holds at most one page of data,
+	 * so we cannot cross a page boundary.
+	 */
+ if (off + len > PAGE_SIZE)
+ goto fail;
+
+ if (len > 0 && data_len > 0) {
bytes = min_t(unsigned int, len, PAGE_SIZE - off);
bytes = min(bytes, data_len);
bio = NULL;
}
- len -= bytes;
data_len -= bytes;
- off = 0;
}
}
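
Since each sg element here carries at most one page, the old while loop could only ever run once, and its bookkeeping (len -= bytes; off = 0;) was dead code. The fix makes the single-iteration invariant explicit and rejects elements that would straddle a page. A sketch of the invariant (the error value is illustrative, not the driver's):

	if (off + len > PAGE_SIZE)
		return -EINVAL;		/* sg element must fit within its page */
	bytes = min_t(unsigned int, len, PAGE_SIZE - off);
	bytes = min(bytes, data_len);	/* never copy past the remaining data */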
break;
case SYNCHRONIZE_CACHE:
case SYNCHRONIZE_CACHE_16:
- if (!ops->execute_sync_cache)
- return TCM_UNSUPPORTED_SCSI_OPCODE;
+ if (!ops->execute_sync_cache) {
+ size = 0;
+ cmd->execute_cmd = sbc_emulate_noop;
+ break;
+ }
/*
* Extract LBA and range to be flushed for emulated SYNCHRONIZE_CACHE
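
With this change, a backend that provides no execute_sync_cache callback completes SYNCHRONIZE_CACHE successfully as a no-op instead of failing with TCM_UNSUPPORTED_SCSI_OPCODE. The emulation plausibly reduces to completing the command with GOOD status, along these lines:

	static sense_reason_t sbc_emulate_noop(struct se_cmd *cmd)
	{
		target_complete_cmd(cmd, GOOD);	/* nothing to flush: report success */
		return 0;
	}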
if (se_tpg->se_tpg_type == TRANSPORT_TPG_TYPE_NORMAL) {
if (core_tpg_setup_virtual_lun0(se_tpg) < 0) {
- kfree(se_tpg);
+ array_free(se_tpg->tpg_lun_list,
+ TRANSPORT_MAX_LUNS_PER_TPG);
return -ENOMEM;
}
}
return ret;
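
The error-path fix above follows the usual ownership rule: an init function unwinds only what it allocated itself. Freeing se_tpg here was wrong (the caller owns it) and leaked tpg_lun_list. A sketch of the corrected unwind, assuming the matching array_zalloc() helper used by this file:

	se_tpg->tpg_lun_list = array_zalloc(TRANSPORT_MAX_LUNS_PER_TPG,
					    sizeof(struct se_lun *), GFP_KERNEL);
	if (!se_tpg->tpg_lun_list)
		return -ENOMEM;

	if (core_tpg_setup_virtual_lun0(se_tpg) < 0) {
		array_free(se_tpg->tpg_lun_list, TRANSPORT_MAX_LUNS_PER_TPG);
		return -ENOMEM;		/* caller still owns and frees se_tpg */
	}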
ret = target_check_reservation(cmd);
- if (ret)
+ if (ret) {
+ cmd->scsi_status = SAM_STAT_RESERVATION_CONFLICT;
return ret;
+ }
ret = dev->transport->parse_cdb(cmd);
if (ret)
mxvar_sdriver, brd->idx + i, &pdev->dev);
if (IS_ERR(tty_dev)) {
retval = PTR_ERR(tty_dev);
- for (i--; i >= 0; i--)
+ for (; i > 0; i--)
tty_unregister_device(mxvar_sdriver,
- brd->idx + i);
+ brd->idx + i - 1);
goto err_relbrd;
}
}
tty_dev = tty_port_register_device(&brd->ports[i].port,
mxvar_sdriver, brd->idx + i, NULL);
if (IS_ERR(tty_dev)) {
- for (i--; i >= 0; i--)
+ for (; i > 0; i--)
tty_unregister_device(mxvar_sdriver,
- brd->idx + i);
+ brd->idx + i - 1);
for (i = 0; i < brd->info->nports; i++)
tty_port_destroy(&brd->ports[i].port);
free_irq(brd->irq, brd);
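
Both moxa unwind loops above unregister exactly the devices created so far, indices 0 through i-1. The rewritten form counts i down to 1 and unregisters brd->idx + i - 1, so i ends at 0 rather than -1 and can be reused directly by the tty_port_destroy() loop that follows. The generic idiom:

	/* undo entries [0, i) without driving i negative */
	for (; i > 0; i--)
		tty_unregister_device(mxvar_sdriver, brd->idx + i - 1);
	/* i == 0 here: safe starting point for the next cleanup loop */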
--- a/drivers/tty/serial/8250/8250.c
+++ /dev/null
-/*
- * Driver for 8250/16550-type serial ports
- *
- * Based on drivers/char/serial.c, by Linus Torvalds, Theodore Ts'o.
- *
- * Copyright (C) 2001 Russell King.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * A note about mapbase / membase
- *
- * mapbase is the physical address of the IO port.
- * membase is an 'ioremapped' cookie.
- */
-
-#if defined(CONFIG_SERIAL_8250_CONSOLE) && defined(CONFIG_MAGIC_SYSRQ)
-#define SUPPORT_SYSRQ
-#endif
-
-#include <linux/module.h>
-#include <linux/moduleparam.h>
-#include <linux/ioport.h>
-#include <linux/init.h>
-#include <linux/console.h>
-#include <linux/sysrq.h>
-#include <linux/delay.h>
-#include <linux/platform_device.h>
-#include <linux/tty.h>
-#include <linux/ratelimit.h>
-#include <linux/tty_flip.h>
-#include <linux/serial_reg.h>
-#include <linux/serial_core.h>
-#include <linux/serial.h>
-#include <linux/serial_8250.h>
-#include <linux/nmi.h>
-#include <linux/mutex.h>
-#include <linux/slab.h>
-#ifdef CONFIG_SPARC
-#include <linux/sunserialcore.h>
-#endif
-
-#include <asm/io.h>
-#include <asm/irq.h>
-
-#include "8250.h"
-
-/*
- * Configuration:
- * share_irqs - whether we pass IRQF_SHARED to request_irq(). This option
- * is unsafe when used on edge-triggered interrupts.
- */
-static unsigned int share_irqs = SERIAL8250_SHARE_IRQS;
-
-static unsigned int nr_uarts = CONFIG_SERIAL_8250_RUNTIME_UARTS;
-
-static struct uart_driver serial8250_reg;
-
-static int serial_index(struct uart_port *port)
-{
- return (serial8250_reg.minor - 64) + port->line;
-}
-
-static unsigned int skip_txen_test; /* force skip of txen test at init time */
-
-/*
- * Debugging.
- */
-#if 0
-#define DEBUG_AUTOCONF(fmt...) printk(fmt)
-#else
-#define DEBUG_AUTOCONF(fmt...) do { } while (0)
-#endif
-
-#if 0
-#define DEBUG_INTR(fmt...) printk(fmt)
-#else
-#define DEBUG_INTR(fmt...) do { } while (0)
-#endif
-
-#define PASS_LIMIT 512
-
-#define BOTH_EMPTY (UART_LSR_TEMT | UART_LSR_THRE)
-
-
-#ifdef CONFIG_SERIAL_8250_DETECT_IRQ
-#define CONFIG_SERIAL_DETECT_IRQ 1
-#endif
-#ifdef CONFIG_SERIAL_8250_MANY_PORTS
-#define CONFIG_SERIAL_MANY_PORTS 1
-#endif
-
-/*
- * HUB6 is always on. This will be removed once the header
- * files have been cleaned.
- */
-#define CONFIG_HUB6 1
-
-#include <asm/serial.h>
-/*
- * SERIAL_PORT_DFNS tells us about built-in ports that have no
- * standard enumeration mechanism. Platforms that can find all
- * serial ports via mechanisms like ACPI or PCI need not supply it.
- */
-#ifndef SERIAL_PORT_DFNS
-#define SERIAL_PORT_DFNS
-#endif
-
-static const struct old_serial_port old_serial_port[] = {
- SERIAL_PORT_DFNS /* defined in asm/serial.h */
-};
-
-#define UART_NR CONFIG_SERIAL_8250_NR_UARTS
-
-#ifdef CONFIG_SERIAL_8250_RSA
-
-#define PORT_RSA_MAX 4
-static unsigned long probe_rsa[PORT_RSA_MAX];
-static unsigned int probe_rsa_count;
-#endif /* CONFIG_SERIAL_8250_RSA */
-
-struct irq_info {
- struct hlist_node node;
- int irq;
- spinlock_t lock; /* Protects list not the hash */
- struct list_head *head;
-};
-
-#define NR_IRQ_HASH 32 /* Can be adjusted later */
-static struct hlist_head irq_lists[NR_IRQ_HASH];
-static DEFINE_MUTEX(hash_mutex); /* Used to walk the hash */
-
-/*
- * Here we define the default xmit fifo size used for each type of UART.
- */
-static const struct serial8250_config uart_config[] = {
- [PORT_UNKNOWN] = {
- .name = "unknown",
- .fifo_size = 1,
- .tx_loadsz = 1,
- },
- [PORT_8250] = {
- .name = "8250",
- .fifo_size = 1,
- .tx_loadsz = 1,
- },
- [PORT_16450] = {
- .name = "16450",
- .fifo_size = 1,
- .tx_loadsz = 1,
- },
- [PORT_16550] = {
- .name = "16550",
- .fifo_size = 1,
- .tx_loadsz = 1,
- },
- [PORT_16550A] = {
- .name = "16550A",
- .fifo_size = 16,
- .tx_loadsz = 16,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
- .flags = UART_CAP_FIFO,
- },
- [PORT_CIRRUS] = {
- .name = "Cirrus",
- .fifo_size = 1,
- .tx_loadsz = 1,
- },
- [PORT_16650] = {
- .name = "ST16650",
- .fifo_size = 1,
- .tx_loadsz = 1,
- .flags = UART_CAP_FIFO | UART_CAP_EFR | UART_CAP_SLEEP,
- },
- [PORT_16650V2] = {
- .name = "ST16650V2",
- .fifo_size = 32,
- .tx_loadsz = 16,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_01 |
- UART_FCR_T_TRIG_00,
- .flags = UART_CAP_FIFO | UART_CAP_EFR | UART_CAP_SLEEP,
- },
- [PORT_16750] = {
- .name = "TI16750",
- .fifo_size = 64,
- .tx_loadsz = 64,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10 |
- UART_FCR7_64BYTE,
- .flags = UART_CAP_FIFO | UART_CAP_SLEEP | UART_CAP_AFE,
- },
- [PORT_STARTECH] = {
- .name = "Startech",
- .fifo_size = 1,
- .tx_loadsz = 1,
- },
- [PORT_16C950] = {
- .name = "16C950/954",
- .fifo_size = 128,
- .tx_loadsz = 128,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
- /* UART_CAP_EFR breaks billionon CF bluetooth card. */
- .flags = UART_CAP_FIFO | UART_CAP_SLEEP,
- },
- [PORT_16654] = {
- .name = "ST16654",
- .fifo_size = 64,
- .tx_loadsz = 32,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_01 |
- UART_FCR_T_TRIG_10,
- .flags = UART_CAP_FIFO | UART_CAP_EFR | UART_CAP_SLEEP,
- },
- [PORT_16850] = {
- .name = "XR16850",
- .fifo_size = 128,
- .tx_loadsz = 128,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
- .flags = UART_CAP_FIFO | UART_CAP_EFR | UART_CAP_SLEEP,
- },
- [PORT_RSA] = {
- .name = "RSA",
- .fifo_size = 2048,
- .tx_loadsz = 2048,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_11,
- .flags = UART_CAP_FIFO,
- },
- [PORT_NS16550A] = {
- .name = "NS16550A",
- .fifo_size = 16,
- .tx_loadsz = 16,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
- .flags = UART_CAP_FIFO | UART_NATSEMI,
- },
- [PORT_XSCALE] = {
- .name = "XScale",
- .fifo_size = 32,
- .tx_loadsz = 32,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
- .flags = UART_CAP_FIFO | UART_CAP_UUE | UART_CAP_RTOIE,
- },
- [PORT_OCTEON] = {
- .name = "OCTEON",
- .fifo_size = 64,
- .tx_loadsz = 64,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
- .flags = UART_CAP_FIFO,
- },
- [PORT_AR7] = {
- .name = "AR7",
- .fifo_size = 16,
- .tx_loadsz = 16,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_00,
- .flags = UART_CAP_FIFO | UART_CAP_AFE,
- },
- [PORT_U6_16550A] = {
- .name = "U6_16550A",
- .fifo_size = 64,
- .tx_loadsz = 64,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
- .flags = UART_CAP_FIFO | UART_CAP_AFE,
- },
- [PORT_TEGRA] = {
- .name = "Tegra",
- .fifo_size = 32,
- .tx_loadsz = 8,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_01 |
- UART_FCR_T_TRIG_01,
- .flags = UART_CAP_FIFO | UART_CAP_RTOIE,
- },
- [PORT_XR17D15X] = {
- .name = "XR17D15X",
- .fifo_size = 64,
- .tx_loadsz = 64,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
- .flags = UART_CAP_FIFO | UART_CAP_AFE | UART_CAP_EFR |
- UART_CAP_SLEEP,
- },
- [PORT_XR17V35X] = {
- .name = "XR17V35X",
- .fifo_size = 256,
- .tx_loadsz = 256,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_11 |
- UART_FCR_T_TRIG_11,
- .flags = UART_CAP_FIFO | UART_CAP_AFE | UART_CAP_EFR |
- UART_CAP_SLEEP,
- },
- [PORT_LPC3220] = {
- .name = "LPC3220",
- .fifo_size = 64,
- .tx_loadsz = 32,
- .fcr = UART_FCR_DMA_SELECT | UART_FCR_ENABLE_FIFO |
- UART_FCR_R_TRIG_00 | UART_FCR_T_TRIG_00,
- .flags = UART_CAP_FIFO,
- },
- [PORT_BRCM_TRUMANAGE] = {
- .name = "TruManage",
- .fifo_size = 1,
- .tx_loadsz = 1024,
- .flags = UART_CAP_HFIFO,
- },
- [PORT_8250_CIR] = {
- .name = "CIR port"
- },
- [PORT_ALTR_16550_F32] = {
- .name = "Altera 16550 FIFO32",
- .fifo_size = 32,
- .tx_loadsz = 32,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
- .flags = UART_CAP_FIFO | UART_CAP_AFE,
- },
- [PORT_ALTR_16550_F64] = {
- .name = "Altera 16550 FIFO64",
- .fifo_size = 64,
- .tx_loadsz = 64,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
- .flags = UART_CAP_FIFO | UART_CAP_AFE,
- },
- [PORT_ALTR_16550_F128] = {
- .name = "Altera 16550 FIFO128",
- .fifo_size = 128,
- .tx_loadsz = 128,
- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
- .flags = UART_CAP_FIFO | UART_CAP_AFE,
- },
-};
-
-/* Uart divisor latch read */
-static int default_serial_dl_read(struct uart_8250_port *up)
-{
- return serial_in(up, UART_DLL) | serial_in(up, UART_DLM) << 8;
-}
-
-/* Uart divisor latch write */
-static void default_serial_dl_write(struct uart_8250_port *up, int value)
-{
- serial_out(up, UART_DLL, value & 0xff);
- serial_out(up, UART_DLM, value >> 8 & 0xff);
-}
-
-#if defined(CONFIG_MIPS_ALCHEMY) || defined(CONFIG_SERIAL_8250_RT288X)
-
-/* Au1x00/RT288x UART hardware has a weird register layout */
-static const u8 au_io_in_map[] = {
- [UART_RX] = 0,
- [UART_IER] = 2,
- [UART_IIR] = 3,
- [UART_LCR] = 5,
- [UART_MCR] = 6,
- [UART_LSR] = 7,
- [UART_MSR] = 8,
-};
-
-static const u8 au_io_out_map[] = {
- [UART_TX] = 1,
- [UART_IER] = 2,
- [UART_FCR] = 4,
- [UART_LCR] = 5,
- [UART_MCR] = 6,
-};
-
-static unsigned int au_serial_in(struct uart_port *p, int offset)
-{
- offset = au_io_in_map[offset] << p->regshift;
- return __raw_readl(p->membase + offset);
-}
-
-static void au_serial_out(struct uart_port *p, int offset, int value)
-{
- offset = au_io_out_map[offset] << p->regshift;
- __raw_writel(value, p->membase + offset);
-}
-
-/* Au1x00 haven't got a standard divisor latch */
-static int au_serial_dl_read(struct uart_8250_port *up)
-{
- return __raw_readl(up->port.membase + 0x28);
-}
-
-static void au_serial_dl_write(struct uart_8250_port *up, int value)
-{
- __raw_writel(value, up->port.membase + 0x28);
-}
-
-#endif
-
-static unsigned int hub6_serial_in(struct uart_port *p, int offset)
-{
- offset = offset << p->regshift;
- outb(p->hub6 - 1 + offset, p->iobase);
- return inb(p->iobase + 1);
-}
-
-static void hub6_serial_out(struct uart_port *p, int offset, int value)
-{
- offset = offset << p->regshift;
- outb(p->hub6 - 1 + offset, p->iobase);
- outb(value, p->iobase + 1);
-}
-
-static unsigned int mem_serial_in(struct uart_port *p, int offset)
-{
- offset = offset << p->regshift;
- return readb(p->membase + offset);
-}
-
-static void mem_serial_out(struct uart_port *p, int offset, int value)
-{
- offset = offset << p->regshift;
- writeb(value, p->membase + offset);
-}
-
-static void mem32_serial_out(struct uart_port *p, int offset, int value)
-{
- offset = offset << p->regshift;
- writel(value, p->membase + offset);
-}
-
-static unsigned int mem32_serial_in(struct uart_port *p, int offset)
-{
- offset = offset << p->regshift;
- return readl(p->membase + offset);
-}
-
-static unsigned int io_serial_in(struct uart_port *p, int offset)
-{
- offset = offset << p->regshift;
- return inb(p->iobase + offset);
-}
-
-static void io_serial_out(struct uart_port *p, int offset, int value)
-{
- offset = offset << p->regshift;
- outb(value, p->iobase + offset);
-}
-
-static int serial8250_default_handle_irq(struct uart_port *port);
-static int exar_handle_irq(struct uart_port *port);
-
-static void set_io_from_upio(struct uart_port *p)
-{
- struct uart_8250_port *up =
- container_of(p, struct uart_8250_port, port);
-
- up->dl_read = default_serial_dl_read;
- up->dl_write = default_serial_dl_write;
-
- switch (p->iotype) {
- case UPIO_HUB6:
- p->serial_in = hub6_serial_in;
- p->serial_out = hub6_serial_out;
- break;
-
- case UPIO_MEM:
- p->serial_in = mem_serial_in;
- p->serial_out = mem_serial_out;
- break;
-
- case UPIO_MEM32:
- p->serial_in = mem32_serial_in;
- p->serial_out = mem32_serial_out;
- break;
-
-#if defined(CONFIG_MIPS_ALCHEMY) || defined(CONFIG_SERIAL_8250_RT288X)
- case UPIO_AU:
- p->serial_in = au_serial_in;
- p->serial_out = au_serial_out;
- up->dl_read = au_serial_dl_read;
- up->dl_write = au_serial_dl_write;
- break;
-#endif
-
- default:
- p->serial_in = io_serial_in;
- p->serial_out = io_serial_out;
- break;
- }
- /* Remember loaded iotype */
- up->cur_iotype = p->iotype;
- p->handle_irq = serial8250_default_handle_irq;
-}
-
-static void
-serial_port_out_sync(struct uart_port *p, int offset, int value)
-{
- switch (p->iotype) {
- case UPIO_MEM:
- case UPIO_MEM32:
- case UPIO_AU:
- p->serial_out(p, offset, value);
- p->serial_in(p, UART_LCR); /* safe, no side-effects */
- break;
- default:
- p->serial_out(p, offset, value);
- }
-}
-
-/*
- * For the 16C950
- */
-static void serial_icr_write(struct uart_8250_port *up, int offset, int value)
-{
- serial_out(up, UART_SCR, offset);
- serial_out(up, UART_ICR, value);
-}
-
-static unsigned int serial_icr_read(struct uart_8250_port *up, int offset)
-{
- unsigned int value;
-
- serial_icr_write(up, UART_ACR, up->acr | UART_ACR_ICRRD);
- serial_out(up, UART_SCR, offset);
- value = serial_in(up, UART_ICR);
- serial_icr_write(up, UART_ACR, up->acr);
-
- return value;
-}
-
-/*
- * FIFO support.
- */
-static void serial8250_clear_fifos(struct uart_8250_port *p)
-{
- if (p->capabilities & UART_CAP_FIFO) {
- serial_out(p, UART_FCR, UART_FCR_ENABLE_FIFO);
- serial_out(p, UART_FCR, UART_FCR_ENABLE_FIFO |
- UART_FCR_CLEAR_RCVR | UART_FCR_CLEAR_XMIT);
- serial_out(p, UART_FCR, 0);
- }
-}
-
-void serial8250_clear_and_reinit_fifos(struct uart_8250_port *p)
-{
- unsigned char fcr;
-
- serial8250_clear_fifos(p);
- fcr = uart_config[p->port.type].fcr;
- serial_out(p, UART_FCR, fcr);
-}
-EXPORT_SYMBOL_GPL(serial8250_clear_and_reinit_fifos);
-
-/*
- * IER sleep support. UARTs which have EFRs need the "extended
- * capability" bit enabled. Note that on XR16C850s, we need to
- * reset LCR to write to IER.
- */
-static void serial8250_set_sleep(struct uart_8250_port *p, int sleep)
-{
- /*
- * Exar UARTs have a SLEEP register that enables or disables
- * each UART to enter sleep mode separately. On the XR17V35x the
- * register is accessible to each UART at the UART_EXAR_SLEEP
- * offset but the UART channel may only write to the corresponding
- * bit.
- */
- if ((p->port.type == PORT_XR17V35X) ||
- (p->port.type == PORT_XR17D15X)) {
- serial_out(p, UART_EXAR_SLEEP, 0xff);
- return;
- }
-
- if (p->capabilities & UART_CAP_SLEEP) {
- if (p->capabilities & UART_CAP_EFR) {
- serial_out(p, UART_LCR, UART_LCR_CONF_MODE_B);
- serial_out(p, UART_EFR, UART_EFR_ECB);
- serial_out(p, UART_LCR, 0);
- }
- serial_out(p, UART_IER, sleep ? UART_IERX_SLEEP : 0);
- if (p->capabilities & UART_CAP_EFR) {
- serial_out(p, UART_LCR, UART_LCR_CONF_MODE_B);
- serial_out(p, UART_EFR, 0);
- serial_out(p, UART_LCR, 0);
- }
- }
-}
-
-#ifdef CONFIG_SERIAL_8250_RSA
-/*
- * Attempts to turn on the RSA FIFO. Returns zero on failure.
- * We set the port uart clock rate if we succeed.
- */
-static int __enable_rsa(struct uart_8250_port *up)
-{
- unsigned char mode;
- int result;
-
- mode = serial_in(up, UART_RSA_MSR);
- result = mode & UART_RSA_MSR_FIFO;
-
- if (!result) {
- serial_out(up, UART_RSA_MSR, mode | UART_RSA_MSR_FIFO);
- mode = serial_in(up, UART_RSA_MSR);
- result = mode & UART_RSA_MSR_FIFO;
- }
-
- if (result)
- up->port.uartclk = SERIAL_RSA_BAUD_BASE * 16;
-
- return result;
-}
-
-static void enable_rsa(struct uart_8250_port *up)
-{
- if (up->port.type == PORT_RSA) {
- if (up->port.uartclk != SERIAL_RSA_BAUD_BASE * 16) {
- spin_lock_irq(&up->port.lock);
- __enable_rsa(up);
- spin_unlock_irq(&up->port.lock);
- }
- if (up->port.uartclk == SERIAL_RSA_BAUD_BASE * 16)
- serial_out(up, UART_RSA_FRR, 0);
- }
-}
-
-/*
- * Attempts to turn off the RSA FIFO. Returns zero on failure.
- * It is unknown why interrupts were disabled in here. However,
- * the caller is expected to preserve this behaviour by grabbing
- * the spinlock before calling this function.
- */
-static void disable_rsa(struct uart_8250_port *up)
-{
- unsigned char mode;
- int result;
-
- if (up->port.type == PORT_RSA &&
- up->port.uartclk == SERIAL_RSA_BAUD_BASE * 16) {
- spin_lock_irq(&up->port.lock);
-
- mode = serial_in(up, UART_RSA_MSR);
- result = !(mode & UART_RSA_MSR_FIFO);
-
- if (!result) {
- serial_out(up, UART_RSA_MSR, mode & ~UART_RSA_MSR_FIFO);
- mode = serial_in(up, UART_RSA_MSR);
- result = !(mode & UART_RSA_MSR_FIFO);
- }
-
- if (result)
- up->port.uartclk = SERIAL_RSA_BAUD_BASE_LO * 16;
- spin_unlock_irq(&up->port.lock);
- }
-}
-#endif /* CONFIG_SERIAL_8250_RSA */
-
-/*
- * This is a quickie test to see how big the FIFO is.
-	 * It doesn't work all the time, more's the pity.
- */
-static int size_fifo(struct uart_8250_port *up)
-{
- unsigned char old_fcr, old_mcr, old_lcr;
- unsigned short old_dl;
- int count;
-
- old_lcr = serial_in(up, UART_LCR);
- serial_out(up, UART_LCR, 0);
- old_fcr = serial_in(up, UART_FCR);
- old_mcr = serial_in(up, UART_MCR);
- serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO |
- UART_FCR_CLEAR_RCVR | UART_FCR_CLEAR_XMIT);
- serial_out(up, UART_MCR, UART_MCR_LOOP);
- serial_out(up, UART_LCR, UART_LCR_CONF_MODE_A);
- old_dl = serial_dl_read(up);
- serial_dl_write(up, 0x0001);
- serial_out(up, UART_LCR, 0x03);
- for (count = 0; count < 256; count++)
- serial_out(up, UART_TX, count);
- mdelay(20);/* FIXME - schedule_timeout */
- for (count = 0; (serial_in(up, UART_LSR) & UART_LSR_DR) &&
- (count < 256); count++)
- serial_in(up, UART_RX);
- serial_out(up, UART_FCR, old_fcr);
- serial_out(up, UART_MCR, old_mcr);
- serial_out(up, UART_LCR, UART_LCR_CONF_MODE_A);
- serial_dl_write(up, old_dl);
- serial_out(up, UART_LCR, old_lcr);
-
- return count;
-}
-
-/*
- * Read UART ID using the divisor method - set DLL and DLM to zero
- * and the revision will be in DLL and device type in DLM. We
- * preserve the device state across this.
- */
-static unsigned int autoconfig_read_divisor_id(struct uart_8250_port *p)
-{
- unsigned char old_dll, old_dlm, old_lcr;
- unsigned int id;
-
- old_lcr = serial_in(p, UART_LCR);
- serial_out(p, UART_LCR, UART_LCR_CONF_MODE_A);
-
- old_dll = serial_in(p, UART_DLL);
- old_dlm = serial_in(p, UART_DLM);
-
- serial_out(p, UART_DLL, 0);
- serial_out(p, UART_DLM, 0);
-
- id = serial_in(p, UART_DLL) | serial_in(p, UART_DLM) << 8;
-
- serial_out(p, UART_DLL, old_dll);
- serial_out(p, UART_DLM, old_dlm);
- serial_out(p, UART_LCR, old_lcr);
-
- return id;
-}
-
-/*
- * This is a helper routine to autodetect StarTech/Exar/Oxsemi UART's.
- * When this function is called we know it is at least a StarTech
- * 16650 V2, but it might be one of several StarTech UARTs, or one of
- * its clones. (We treat the broken original StarTech 16650 V1 as a
- * 16550, and why not? Startech doesn't seem to even acknowledge its
- * existence.)
- *
- * What evil have men's minds wrought...
- */
-static void autoconfig_has_efr(struct uart_8250_port *up)
-{
- unsigned int id1, id2, id3, rev;
-
- /*
- * Everything with an EFR has SLEEP
- */
- up->capabilities |= UART_CAP_EFR | UART_CAP_SLEEP;
-
- /*
- * First we check to see if it's an Oxford Semiconductor UART.
- *
-	 * We have to do this here because some non-National
- * Semiconductor clone chips lock up if you try writing to the
- * LSR register (which serial_icr_read does)
- */
-
- /*
- * Check for Oxford Semiconductor 16C950.
- *
- * EFR [4] must be set else this test fails.
- *
- * This shouldn't be necessary, but Mike Hudson (Exoray@isys.ca)
- * claims that it's needed for 952 dual UART's (which are not
- * recommended for new designs).
- */
- up->acr = 0;
- serial_out(up, UART_LCR, UART_LCR_CONF_MODE_B);
- serial_out(up, UART_EFR, UART_EFR_ECB);
- serial_out(up, UART_LCR, 0x00);
- id1 = serial_icr_read(up, UART_ID1);
- id2 = serial_icr_read(up, UART_ID2);
- id3 = serial_icr_read(up, UART_ID3);
- rev = serial_icr_read(up, UART_REV);
-
- DEBUG_AUTOCONF("950id=%02x:%02x:%02x:%02x ", id1, id2, id3, rev);
-
- if (id1 == 0x16 && id2 == 0xC9 &&
- (id3 == 0x50 || id3 == 0x52 || id3 == 0x54)) {
- up->port.type = PORT_16C950;
-
- /*
- * Enable work around for the Oxford Semiconductor 952 rev B
- * chip which causes it to seriously miscalculate baud rates
- * when DLL is 0.
- */
- if (id3 == 0x52 && rev == 0x01)
- up->bugs |= UART_BUG_QUOT;
- return;
- }
-
- /*
- * We check for a XR16C850 by setting DLL and DLM to 0, and then
- * reading back DLL and DLM. The chip type depends on the DLM
- * value read back:
- * 0x10 - XR16C850 and the DLL contains the chip revision.
- * 0x12 - XR16C2850.
- * 0x14 - XR16C854.
- */
- id1 = autoconfig_read_divisor_id(up);
- DEBUG_AUTOCONF("850id=%04x ", id1);
-
- id2 = id1 >> 8;
- if (id2 == 0x10 || id2 == 0x12 || id2 == 0x14) {
- up->port.type = PORT_16850;
- return;
- }
-
- /*
- * It wasn't an XR16C850.
- *
- * We distinguish between the '654 and the '650 by counting
- * how many bytes are in the FIFO. I'm using this for now,
- * since that's the technique that was sent to me in the
- * serial driver update, but I'm not convinced this works.
- * I've had problems doing this in the past. -TYT
- */
- if (size_fifo(up) == 64)
- up->port.type = PORT_16654;
- else
- up->port.type = PORT_16650V2;
-}
-
-/*
- * We detected a chip without a FIFO. Only two fall into
- * this category - the original 8250 and the 16450. The
- * 16450 has a scratch register (accessible with LCR=0)
- */
-static void autoconfig_8250(struct uart_8250_port *up)
-{
- unsigned char scratch, status1, status2;
-
- up->port.type = PORT_8250;
-
- scratch = serial_in(up, UART_SCR);
- serial_out(up, UART_SCR, 0xa5);
- status1 = serial_in(up, UART_SCR);
- serial_out(up, UART_SCR, 0x5a);
- status2 = serial_in(up, UART_SCR);
- serial_out(up, UART_SCR, scratch);
-
- if (status1 == 0xa5 && status2 == 0x5a)
- up->port.type = PORT_16450;
-}
-
-static int broken_efr(struct uart_8250_port *up)
-{
- /*
- * Exar ST16C2550 "A2" devices incorrectly detect as
- * having an EFR, and report an ID of 0x0201. See
- * http://linux.derkeiler.com/Mailing-Lists/Kernel/2004-11/4812.html
- */
- if (autoconfig_read_divisor_id(up) == 0x0201 && size_fifo(up) == 16)
- return 1;
-
- return 0;
-}
-
-static inline int ns16550a_goto_highspeed(struct uart_8250_port *up)
-{
- unsigned char status;
-
- status = serial_in(up, 0x04); /* EXCR2 */
-#define PRESL(x) ((x) & 0x30)
- if (PRESL(status) == 0x10) {
- /* already in high speed mode */
- return 0;
- } else {
- status &= ~0xB0; /* Disable LOCK, mask out PRESL[01] */
- status |= 0x10; /* 1.625 divisor for baud_base --> 921600 */
- serial_out(up, 0x04, status);
- }
- return 1;
-}
-
-/*
- * We know that the chip has FIFOs. Does it have an EFR? The
- * EFR is located in the same register position as the IIR and
- * we know the top two bits of the IIR are currently set. The
- * EFR should contain zero. Try to read the EFR.
- */
-static void autoconfig_16550a(struct uart_8250_port *up)
-{
- unsigned char status1, status2;
- unsigned int iersave;
-
- up->port.type = PORT_16550A;
- up->capabilities |= UART_CAP_FIFO;
-
- /*
-	 * XR17V35x UARTs have an extra divisor register, DLD,
-	 * that gets enabled when DLAB is set, which will
-	 * cause the device to incorrectly match and assign
-	 * port type to PORT_16650. The EFR for this UART is
-	 * found at offset 0x09. Instead check the Device ID (DVID)
-	 * register for a 2, 4 or 8 port UART.
- */
- if (up->port.flags & UPF_EXAR_EFR) {
- status1 = serial_in(up, UART_EXAR_DVID);
- if (status1 == 0x82 || status1 == 0x84 || status1 == 0x88) {
- DEBUG_AUTOCONF("Exar XR17V35x ");
- up->port.type = PORT_XR17V35X;
- up->capabilities |= UART_CAP_AFE | UART_CAP_EFR |
- UART_CAP_SLEEP;
-
- return;
- }
-
- }
-
- /*
- * Check for presence of the EFR when DLAB is set.
- * Only ST16C650V1 UARTs pass this test.
- */
- serial_out(up, UART_LCR, UART_LCR_CONF_MODE_A);
- if (serial_in(up, UART_EFR) == 0) {
- serial_out(up, UART_EFR, 0xA8);
- if (serial_in(up, UART_EFR) != 0) {
- DEBUG_AUTOCONF("EFRv1 ");
- up->port.type = PORT_16650;
- up->capabilities |= UART_CAP_EFR | UART_CAP_SLEEP;
- } else {
- DEBUG_AUTOCONF("Motorola 8xxx DUART ");
- }
- serial_out(up, UART_EFR, 0);
- return;
- }
-
- /*
- * Maybe it requires 0xbf to be written to the LCR.
- * (other ST16C650V2 UARTs, TI16C752A, etc)
- */
- serial_out(up, UART_LCR, UART_LCR_CONF_MODE_B);
- if (serial_in(up, UART_EFR) == 0 && !broken_efr(up)) {
- DEBUG_AUTOCONF("EFRv2 ");
- autoconfig_has_efr(up);
- return;
- }
-
- /*
- * Check for a National Semiconductor SuperIO chip.
- * Attempt to switch to bank 2, read the value of the LOOP bit
- * from EXCR1. Switch back to bank 0, change it in MCR. Then
- * switch back to bank 2, read it from EXCR1 again and check
- * it's changed. If so, set baud_base in EXCR2 to 921600. -- dwmw2
- */
- serial_out(up, UART_LCR, 0);
- status1 = serial_in(up, UART_MCR);
- serial_out(up, UART_LCR, 0xE0);
- status2 = serial_in(up, 0x02); /* EXCR1 */
-
- if (!((status2 ^ status1) & UART_MCR_LOOP)) {
- serial_out(up, UART_LCR, 0);
- serial_out(up, UART_MCR, status1 ^ UART_MCR_LOOP);
- serial_out(up, UART_LCR, 0xE0);
- status2 = serial_in(up, 0x02); /* EXCR1 */
- serial_out(up, UART_LCR, 0);
- serial_out(up, UART_MCR, status1);
-
- if ((status2 ^ status1) & UART_MCR_LOOP) {
- unsigned short quot;
-
- serial_out(up, UART_LCR, 0xE0);
-
- quot = serial_dl_read(up);
- quot <<= 3;
-
- if (ns16550a_goto_highspeed(up))
- serial_dl_write(up, quot);
-
- serial_out(up, UART_LCR, 0);
-
- up->port.uartclk = 921600*16;
- up->port.type = PORT_NS16550A;
- up->capabilities |= UART_NATSEMI;
- return;
- }
- }
-
- /*
- * No EFR. Try to detect a TI16750, which only sets bit 5 of
- * the IIR when 64 byte FIFO mode is enabled when DLAB is set.
- * Try setting it with and without DLAB set. Cheap clones
- * set bit 5 without DLAB set.
- */
- serial_out(up, UART_LCR, 0);
- serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO | UART_FCR7_64BYTE);
- status1 = serial_in(up, UART_IIR) >> 5;
- serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO);
- serial_out(up, UART_LCR, UART_LCR_CONF_MODE_A);
- serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO | UART_FCR7_64BYTE);
- status2 = serial_in(up, UART_IIR) >> 5;
- serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO);
- serial_out(up, UART_LCR, 0);
-
- DEBUG_AUTOCONF("iir1=%d iir2=%d ", status1, status2);
-
- if (status1 == 6 && status2 == 7) {
- up->port.type = PORT_16750;
- up->capabilities |= UART_CAP_AFE | UART_CAP_SLEEP;
- return;
- }
-
- /*
- * Try writing and reading the UART_IER_UUE bit (b6).
- * If it works, this is probably one of the Xscale platform's
- * internal UARTs.
- * We're going to explicitly set the UUE bit to 0 before
- * trying to write and read a 1 just to make sure it's not
-	 * already a 1 and maybe locked there before we even start.
- */
- iersave = serial_in(up, UART_IER);
- serial_out(up, UART_IER, iersave & ~UART_IER_UUE);
- if (!(serial_in(up, UART_IER) & UART_IER_UUE)) {
- /*
- * OK it's in a known zero state, try writing and reading
- * without disturbing the current state of the other bits.
- */
- serial_out(up, UART_IER, iersave | UART_IER_UUE);
- if (serial_in(up, UART_IER) & UART_IER_UUE) {
- /*
- * It's an Xscale.
- * We'll leave the UART_IER_UUE bit set to 1 (enabled).
- */
- DEBUG_AUTOCONF("Xscale ");
- up->port.type = PORT_XSCALE;
- up->capabilities |= UART_CAP_UUE | UART_CAP_RTOIE;
- return;
- }
- } else {
- /*
- * If we got here we couldn't force the IER_UUE bit to 0.
- * Log it and continue.
- */
- DEBUG_AUTOCONF("Couldn't force IER_UUE to 0 ");
- }
- serial_out(up, UART_IER, iersave);
-
- /*
- * Exar uarts have EFR in a weird location
- */
- if (up->port.flags & UPF_EXAR_EFR) {
- DEBUG_AUTOCONF("Exar XR17D15x ");
- up->port.type = PORT_XR17D15X;
- up->capabilities |= UART_CAP_AFE | UART_CAP_EFR |
- UART_CAP_SLEEP;
-
- return;
- }
-
- /*
- * We distinguish between 16550A and U6 16550A by counting
- * how many bytes are in the FIFO.
- */
- if (up->port.type == PORT_16550A && size_fifo(up) == 64) {
- up->port.type = PORT_U6_16550A;
- up->capabilities |= UART_CAP_AFE;
- }
-}
-
-/*
- * This routine is called by rs_init() to initialize a specific serial
- * port. It determines what type of UART chip this serial port is
- * using: 8250, 16450, 16550, 16550A. The important question is
- * whether or not this UART is a 16550A or not, since this will
- * determine whether or not we can use its FIFO features or not.
- */
-static void autoconfig(struct uart_8250_port *up, unsigned int probeflags)
-{
- unsigned char status1, scratch, scratch2, scratch3;
- unsigned char save_lcr, save_mcr;
- struct uart_port *port = &up->port;
- unsigned long flags;
- unsigned int old_capabilities;
-
- if (!port->iobase && !port->mapbase && !port->membase)
- return;
-
- DEBUG_AUTOCONF("ttyS%d: autoconf (0x%04lx, 0x%p): ",
- serial_index(port), port->iobase, port->membase);
-
- /*
- * We really do need global IRQs disabled here - we're going to
- * be frobbing the chips IRQ enable register to see if it exists.
- */
- spin_lock_irqsave(&port->lock, flags);
-
- up->capabilities = 0;
- up->bugs = 0;
-
- if (!(port->flags & UPF_BUGGY_UART)) {
- /*
- * Do a simple existence test first; if we fail this,
- * there's no point trying anything else.
- *
- * 0x80 is used as a nonsense port to prevent against
- * false positives due to ISA bus float. The
- * assumption is that 0x80 is a non-existent port;
- * which should be safe since include/asm/io.h also
- * makes this assumption.
- *
- * Note: this is safe as long as MCR bit 4 is clear
- * and the device is in "PC" mode.
- */
- scratch = serial_in(up, UART_IER);
- serial_out(up, UART_IER, 0);
-#ifdef __i386__
- outb(0xff, 0x080);
-#endif
- /*
- * Mask out IER[7:4] bits for test as some UARTs (e.g. TL
- * 16C754B) allow only to modify them if an EFR bit is set.
- */
- scratch2 = serial_in(up, UART_IER) & 0x0f;
- serial_out(up, UART_IER, 0x0F);
-#ifdef __i386__
- outb(0, 0x080);
-#endif
- scratch3 = serial_in(up, UART_IER) & 0x0f;
- serial_out(up, UART_IER, scratch);
- if (scratch2 != 0 || scratch3 != 0x0F) {
- /*
- * We failed; there's nothing here
- */
- spin_unlock_irqrestore(&port->lock, flags);
- DEBUG_AUTOCONF("IER test failed (%02x, %02x) ",
- scratch2, scratch3);
- goto out;
- }
- }
-
- save_mcr = serial_in(up, UART_MCR);
- save_lcr = serial_in(up, UART_LCR);
-
- /*
- * Check to see if a UART is really there. Certain broken
- * internal modems based on the Rockwell chipset fail this
- * test, because they apparently don't implement the loopback
- * test mode. So this test is skipped on the COM 1 through
- * COM 4 ports. This *should* be safe, since no board
- * manufacturer would be stupid enough to design a board
- * that conflicts with COM 1-4 --- we hope!
- */
- if (!(port->flags & UPF_SKIP_TEST)) {
- serial_out(up, UART_MCR, UART_MCR_LOOP | 0x0A);
- status1 = serial_in(up, UART_MSR) & 0xF0;
- serial_out(up, UART_MCR, save_mcr);
- if (status1 != 0x90) {
- spin_unlock_irqrestore(&port->lock, flags);
- DEBUG_AUTOCONF("LOOP test failed (%02x) ",
- status1);
- goto out;
- }
- }
-
- /*
- * We're pretty sure there's a port here. Lets find out what
- * type of port it is. The IIR top two bits allows us to find
- * out if it's 8250 or 16450, 16550, 16550A or later. This
- * determines what we test for next.
- *
- * We also initialise the EFR (if any) to zero for later. The
- * EFR occupies the same register location as the FCR and IIR.
- */
- serial_out(up, UART_LCR, UART_LCR_CONF_MODE_B);
- serial_out(up, UART_EFR, 0);
- serial_out(up, UART_LCR, 0);
-
- serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO);
- scratch = serial_in(up, UART_IIR) >> 6;
-
- switch (scratch) {
- case 0:
- autoconfig_8250(up);
- break;
- case 1:
- port->type = PORT_UNKNOWN;
- break;
- case 2:
- port->type = PORT_16550;
- break;
- case 3:
- autoconfig_16550a(up);
- break;
- }
-
-#ifdef CONFIG_SERIAL_8250_RSA
- /*
- * Only probe for RSA ports if we got the region.
- */
- if (port->type == PORT_16550A && probeflags & PROBE_RSA) {
- int i;
-
- for (i = 0 ; i < probe_rsa_count; ++i) {
- if (probe_rsa[i] == port->iobase && __enable_rsa(up)) {
- port->type = PORT_RSA;
- break;
- }
- }
- }
-#endif
-
- serial_out(up, UART_LCR, save_lcr);
-
- port->fifosize = uart_config[up->port.type].fifo_size;
- old_capabilities = up->capabilities;
- up->capabilities = uart_config[port->type].flags;
- up->tx_loadsz = uart_config[port->type].tx_loadsz;
-
- if (port->type == PORT_UNKNOWN)
- goto out_lock;
-
- /*
- * Reset the UART.
- */
-#ifdef CONFIG_SERIAL_8250_RSA
- if (port->type == PORT_RSA)
- serial_out(up, UART_RSA_FRR, 0);
-#endif
- serial_out(up, UART_MCR, save_mcr);
- serial8250_clear_fifos(up);
- serial_in(up, UART_RX);
- if (up->capabilities & UART_CAP_UUE)
- serial_out(up, UART_IER, UART_IER_UUE);
- else
- serial_out(up, UART_IER, 0);
-
-out_lock:
- spin_unlock_irqrestore(&port->lock, flags);
- if (up->capabilities != old_capabilities) {
- printk(KERN_WARNING
- "ttyS%d: detected caps %08x should be %08x\n",
- serial_index(port), old_capabilities,
- up->capabilities);
- }
-out:
- DEBUG_AUTOCONF("iir=%d ", scratch);
- DEBUG_AUTOCONF("type=%s\n", uart_config[port->type].name);
-}
-
-static void autoconfig_irq(struct uart_8250_port *up)
-{
- struct uart_port *port = &up->port;
- unsigned char save_mcr, save_ier;
- unsigned char save_ICP = 0;
- unsigned int ICP = 0;
- unsigned long irqs;
- int irq;
-
- if (port->flags & UPF_FOURPORT) {
- ICP = (port->iobase & 0xfe0) | 0x1f;
- save_ICP = inb_p(ICP);
- outb_p(0x80, ICP);
- inb_p(ICP);
- }
-
- /* forget possible initially masked and pending IRQ */
- probe_irq_off(probe_irq_on());
- save_mcr = serial_in(up, UART_MCR);
- save_ier = serial_in(up, UART_IER);
- serial_out(up, UART_MCR, UART_MCR_OUT1 | UART_MCR_OUT2);
-
- irqs = probe_irq_on();
- serial_out(up, UART_MCR, 0);
- udelay(10);
- if (port->flags & UPF_FOURPORT) {
- serial_out(up, UART_MCR,
- UART_MCR_DTR | UART_MCR_RTS);
- } else {
- serial_out(up, UART_MCR,
- UART_MCR_DTR | UART_MCR_RTS | UART_MCR_OUT2);
- }
- serial_out(up, UART_IER, 0x0f); /* enable all intrs */
- serial_in(up, UART_LSR);
- serial_in(up, UART_RX);
- serial_in(up, UART_IIR);
- serial_in(up, UART_MSR);
- serial_out(up, UART_TX, 0xFF);
- udelay(20);
- irq = probe_irq_off(irqs);
-
- serial_out(up, UART_MCR, save_mcr);
- serial_out(up, UART_IER, save_ier);
-
- if (port->flags & UPF_FOURPORT)
- outb_p(save_ICP, ICP);
-
- port->irq = (irq > 0) ? irq : 0;
-}
-
-static inline void __stop_tx(struct uart_8250_port *p)
-{
- if (p->ier & UART_IER_THRI) {
- p->ier &= ~UART_IER_THRI;
- serial_out(p, UART_IER, p->ier);
- }
-}
-
-static void serial8250_stop_tx(struct uart_port *port)
-{
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
-
- __stop_tx(up);
-
- /*
- * We really want to stop the transmitter from sending.
- */
- if (port->type == PORT_16C950) {
- up->acr |= UART_ACR_TXDIS;
- serial_icr_write(up, UART_ACR, up->acr);
- }
-}
-
-static void serial8250_start_tx(struct uart_port *port)
-{
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
-
- if (up->dma && !serial8250_tx_dma(up)) {
- return;
- } else if (!(up->ier & UART_IER_THRI)) {
- up->ier |= UART_IER_THRI;
- serial_port_out(port, UART_IER, up->ier);
-
- if (up->bugs & UART_BUG_TXEN) {
- unsigned char lsr;
- lsr = serial_in(up, UART_LSR);
- up->lsr_saved_flags |= lsr & LSR_SAVE_FLAGS;
- if (lsr & UART_LSR_TEMT)
- serial8250_tx_chars(up);
- }
- }
-
- /*
- * Re-enable the transmitter if we disabled it.
- */
- if (port->type == PORT_16C950 && up->acr & UART_ACR_TXDIS) {
- up->acr &= ~UART_ACR_TXDIS;
- serial_icr_write(up, UART_ACR, up->acr);
- }
-}
-
-static void serial8250_stop_rx(struct uart_port *port)
-{
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
-
- up->ier &= ~UART_IER_RLSI;
- up->port.read_status_mask &= ~UART_LSR_DR;
- serial_port_out(port, UART_IER, up->ier);
-}
-
-static void serial8250_enable_ms(struct uart_port *port)
-{
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
-
- /* no MSR capabilities */
- if (up->bugs & UART_BUG_NOMSR)
- return;
-
- up->ier |= UART_IER_MSI;
- serial_port_out(port, UART_IER, up->ier);
-}
-
-/*
- * serial8250_rx_chars: processes according to the passed in LSR
- * value, and returns the remaining LSR bits not handled
- * by this Rx routine.
- */
-unsigned char
-serial8250_rx_chars(struct uart_8250_port *up, unsigned char lsr)
-{
- struct uart_port *port = &up->port;
- unsigned char ch;
- int max_count = 256;
- char flag;
-
- do {
- if (likely(lsr & UART_LSR_DR))
- ch = serial_in(up, UART_RX);
- else
- /*
- * Intel 82571 has a Serial Over Lan device that will
- * set UART_LSR_BI without setting UART_LSR_DR when
- * it receives a break. To avoid reading from the
- * receive buffer without UART_LSR_DR bit set, we
- * just force the read character to be 0
- */
- ch = 0;
-
- flag = TTY_NORMAL;
- port->icount.rx++;
-
- lsr |= up->lsr_saved_flags;
- up->lsr_saved_flags = 0;
-
- if (unlikely(lsr & UART_LSR_BRK_ERROR_BITS)) {
- if (lsr & UART_LSR_BI) {
- lsr &= ~(UART_LSR_FE | UART_LSR_PE);
- port->icount.brk++;
- /*
- * We do the SysRQ and SAK checking
- * here because otherwise the break
- * may get masked by ignore_status_mask
- * or read_status_mask.
- */
- if (uart_handle_break(port))
- goto ignore_char;
- } else if (lsr & UART_LSR_PE)
- port->icount.parity++;
- else if (lsr & UART_LSR_FE)
- port->icount.frame++;
- if (lsr & UART_LSR_OE)
- port->icount.overrun++;
-
- /*
- * Mask off conditions which should be ignored.
- */
- lsr &= port->read_status_mask;
-
- if (lsr & UART_LSR_BI) {
- DEBUG_INTR("handling break....");
- flag = TTY_BREAK;
- } else if (lsr & UART_LSR_PE)
- flag = TTY_PARITY;
- else if (lsr & UART_LSR_FE)
- flag = TTY_FRAME;
- }
- if (uart_handle_sysrq_char(port, ch))
- goto ignore_char;
-
- uart_insert_char(port, lsr, UART_LSR_OE, ch, flag);
-
-ignore_char:
- lsr = serial_in(up, UART_LSR);
- } while ((lsr & (UART_LSR_DR | UART_LSR_BI)) && (max_count-- > 0));
- spin_unlock(&port->lock);
- tty_flip_buffer_push(&port->state->port);
- spin_lock(&port->lock);
- return lsr;
-}
-EXPORT_SYMBOL_GPL(serial8250_rx_chars);
-
-void serial8250_tx_chars(struct uart_8250_port *up)
-{
- struct uart_port *port = &up->port;
- struct circ_buf *xmit = &port->state->xmit;
- int count;
-
- if (port->x_char) {
- serial_out(up, UART_TX, port->x_char);
- port->icount.tx++;
- port->x_char = 0;
- return;
- }
- if (uart_tx_stopped(port)) {
- serial8250_stop_tx(port);
- return;
- }
- if (uart_circ_empty(xmit)) {
- __stop_tx(up);
- return;
- }
-
- count = up->tx_loadsz;
- do {
- serial_out(up, UART_TX, xmit->buf[xmit->tail]);
- xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1);
- port->icount.tx++;
- if (uart_circ_empty(xmit))
- break;
- if (up->capabilities & UART_CAP_HFIFO) {
- if ((serial_port_in(port, UART_LSR) & BOTH_EMPTY) !=
- BOTH_EMPTY)
- break;
- }
- } while (--count > 0);
-
- if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
- uart_write_wakeup(port);
-
- DEBUG_INTR("THRE...");
-
- if (uart_circ_empty(xmit))
- __stop_tx(up);
-}
-EXPORT_SYMBOL_GPL(serial8250_tx_chars);
-
-unsigned int serial8250_modem_status(struct uart_8250_port *up)
-{
- struct uart_port *port = &up->port;
- unsigned int status = serial_in(up, UART_MSR);
-
- status |= up->msr_saved_flags;
- up->msr_saved_flags = 0;
- if (status & UART_MSR_ANY_DELTA && up->ier & UART_IER_MSI &&
- port->state != NULL) {
- if (status & UART_MSR_TERI)
- port->icount.rng++;
- if (status & UART_MSR_DDSR)
- port->icount.dsr++;
- if (status & UART_MSR_DDCD)
- uart_handle_dcd_change(port, status & UART_MSR_DCD);
- if (status & UART_MSR_DCTS)
- uart_handle_cts_change(port, status & UART_MSR_CTS);
-
- wake_up_interruptible(&port->state->port.delta_msr_wait);
- }
-
- return status;
-}
-EXPORT_SYMBOL_GPL(serial8250_modem_status);
-
-/*
- * This handles the interrupt from one port.
- */
-int serial8250_handle_irq(struct uart_port *port, unsigned int iir)
-{
- unsigned char status;
- unsigned long flags;
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
- int dma_err = 0;
-
- if (iir & UART_IIR_NO_INT)
- return 0;
-
- spin_lock_irqsave(&port->lock, flags);
-
- status = serial_port_in(port, UART_LSR);
-
- DEBUG_INTR("status = %x...", status);
-
- if (status & (UART_LSR_DR | UART_LSR_BI)) {
- if (up->dma)
- dma_err = serial8250_rx_dma(up, iir);
-
- if (!up->dma || dma_err)
- status = serial8250_rx_chars(up, status);
- }
- serial8250_modem_status(up);
- if (status & UART_LSR_THRE)
- serial8250_tx_chars(up);
-
- spin_unlock_irqrestore(&port->lock, flags);
- return 1;
-}
-EXPORT_SYMBOL_GPL(serial8250_handle_irq);
-
-static int serial8250_default_handle_irq(struct uart_port *port)
-{
- unsigned int iir = serial_port_in(port, UART_IIR);
-
- return serial8250_handle_irq(port, iir);
-}
-
-/*
- * These Exar UARTs have an extra interrupt indicator that could
- * fire for a few unimplemented interrupts. One of which is a
- * wakeup event when coming out of sleep. Put this here just
- * to be on the safe side that these interrupts don't go unhandled.
- */
-static int exar_handle_irq(struct uart_port *port)
-{
- unsigned char int0, int1, int2, int3;
- unsigned int iir = serial_port_in(port, UART_IIR);
- int ret;
-
- ret = serial8250_handle_irq(port, iir);
-
- if ((port->type == PORT_XR17V35X) ||
- (port->type == PORT_XR17D15X)) {
- int0 = serial_port_in(port, 0x80);
- int1 = serial_port_in(port, 0x81);
- int2 = serial_port_in(port, 0x82);
- int3 = serial_port_in(port, 0x83);
- }
-
- return ret;
-}
-
-/*
- * This is the serial driver's interrupt routine.
- *
- * Arjan thinks the old way was overly complex, so it got simplified.
-	 * Alan disagrees, saying that we need the complexity to handle the weird
- * nature of ISA shared interrupts. (This is a special exception.)
- *
- * In order to handle ISA shared interrupts properly, we need to check
- * that all ports have been serviced, and therefore the ISA interrupt
- * line has been de-asserted.
- *
-	 * This means we need to loop through all ports, checking that they
- * don't have an interrupt pending.
- */
-static irqreturn_t serial8250_interrupt(int irq, void *dev_id)
-{
- struct irq_info *i = dev_id;
- struct list_head *l, *end = NULL;
- int pass_counter = 0, handled = 0;
-
- DEBUG_INTR("serial8250_interrupt(%d)...", irq);
-
- spin_lock(&i->lock);
-
- l = i->head;
- do {
- struct uart_8250_port *up;
- struct uart_port *port;
-
- up = list_entry(l, struct uart_8250_port, list);
- port = &up->port;
-
- if (port->handle_irq(port)) {
- handled = 1;
- end = NULL;
- } else if (end == NULL)
- end = l;
-
- l = l->next;
-
- if (l == i->head && pass_counter++ > PASS_LIMIT) {
- /* If we hit this, we're dead. */
- printk_ratelimited(KERN_ERR
- "serial8250: too much work for irq%d\n", irq);
- break;
- }
- } while (l != end);
-
- spin_unlock(&i->lock);
-
- DEBUG_INTR("end.\n");
-
- return IRQ_RETVAL(handled);
-}
-
-/*
- * To support ISA shared interrupts, we need to have one interrupt
- * handler that ensures that the IRQ line has been deasserted
- * before returning. Failing to do this will result in the IRQ
- * line being stuck active, and, since ISA irqs are edge triggered,
- * no more IRQs will be seen.
- */
-static void serial_do_unlink(struct irq_info *i, struct uart_8250_port *up)
-{
- spin_lock_irq(&i->lock);
-
- if (!list_empty(i->head)) {
- if (i->head == &up->list)
- i->head = i->head->next;
- list_del(&up->list);
- } else {
- BUG_ON(i->head != &up->list);
- i->head = NULL;
- }
- spin_unlock_irq(&i->lock);
- /* List empty so throw away the hash node */
- if (i->head == NULL) {
- hlist_del(&i->node);
- kfree(i);
- }
-}
-
-static int serial_link_irq_chain(struct uart_8250_port *up)
-{
- struct hlist_head *h;
- struct hlist_node *n;
- struct irq_info *i;
- int ret, irq_flags = up->port.flags & UPF_SHARE_IRQ ? IRQF_SHARED : 0;
-
- mutex_lock(&hash_mutex);
-
- h = &irq_lists[up->port.irq % NR_IRQ_HASH];
-
- hlist_for_each(n, h) {
- i = hlist_entry(n, struct irq_info, node);
- if (i->irq == up->port.irq)
- break;
- }
-
- if (n == NULL) {
- i = kzalloc(sizeof(struct irq_info), GFP_KERNEL);
- if (i == NULL) {
- mutex_unlock(&hash_mutex);
- return -ENOMEM;
- }
- spin_lock_init(&i->lock);
- i->irq = up->port.irq;
- hlist_add_head(&i->node, h);
- }
- mutex_unlock(&hash_mutex);
-
- spin_lock_irq(&i->lock);
-
- if (i->head) {
- list_add(&up->list, i->head);
- spin_unlock_irq(&i->lock);
-
- ret = 0;
- } else {
- INIT_LIST_HEAD(&up->list);
- i->head = &up->list;
- spin_unlock_irq(&i->lock);
- irq_flags |= up->port.irqflags;
- ret = request_irq(up->port.irq, serial8250_interrupt,
- irq_flags, "serial", i);
- if (ret < 0)
- serial_do_unlink(i, up);
- }
-
- return ret;
-}
-
-static void serial_unlink_irq_chain(struct uart_8250_port *up)
-{
- struct irq_info *i;
- struct hlist_node *n;
- struct hlist_head *h;
-
- mutex_lock(&hash_mutex);
-
- h = &irq_lists[up->port.irq % NR_IRQ_HASH];
-
- hlist_for_each(n, h) {
- i = hlist_entry(n, struct irq_info, node);
- if (i->irq == up->port.irq)
- break;
- }
-
- BUG_ON(n == NULL);
- BUG_ON(i->head == NULL);
-
- if (list_empty(i->head))
- free_irq(up->port.irq, i);
-
- serial_do_unlink(i, up);
- mutex_unlock(&hash_mutex);
-}
-
-/*
- * This function is used to handle ports that do not have an
- * interrupt. This doesn't work very well for 16450's, but gives
- * barely passable results for a 16550A. (Although at the expense
- * of much CPU overhead).
- */
-static void serial8250_timeout(unsigned long data)
-{
- struct uart_8250_port *up = (struct uart_8250_port *)data;
-
- up->port.handle_irq(&up->port);
- mod_timer(&up->timer, jiffies + uart_poll_timeout(&up->port));
-}
-
-static void serial8250_backup_timeout(unsigned long data)
-{
- struct uart_8250_port *up = (struct uart_8250_port *)data;
- unsigned int iir, ier = 0, lsr;
- unsigned long flags;
-
- spin_lock_irqsave(&up->port.lock, flags);
-
- /*
- * Must disable interrupts or else we risk racing with the interrupt
- * based handler.
- */
- if (up->port.irq) {
- ier = serial_in(up, UART_IER);
- serial_out(up, UART_IER, 0);
- }
-
- iir = serial_in(up, UART_IIR);
-
- /*
- * This should be a safe test for anyone who doesn't trust the
- * IIR bits on their UART, but it's specifically designed for
- * the "Diva" UART used on the management processor on many HP
- * ia64 and parisc boxes.
- */
- lsr = serial_in(up, UART_LSR);
- up->lsr_saved_flags |= lsr & LSR_SAVE_FLAGS;
- if ((iir & UART_IIR_NO_INT) && (up->ier & UART_IER_THRI) &&
- (!uart_circ_empty(&up->port.state->xmit) || up->port.x_char) &&
- (lsr & UART_LSR_THRE)) {
- iir &= ~(UART_IIR_ID | UART_IIR_NO_INT);
- iir |= UART_IIR_THRI;
- }
-
- if (!(iir & UART_IIR_NO_INT))
- serial8250_tx_chars(up);
-
- if (up->port.irq)
- serial_out(up, UART_IER, ier);
-
- spin_unlock_irqrestore(&up->port.lock, flags);
-
- /* Standard timer interval plus 0.2s to keep the port running */
- mod_timer(&up->timer,
- jiffies + uart_poll_timeout(&up->port) + HZ / 5);
-}
-
-static unsigned int serial8250_tx_empty(struct uart_port *port)
-{
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
- unsigned long flags;
- unsigned int lsr;
-
- spin_lock_irqsave(&port->lock, flags);
- lsr = serial_port_in(port, UART_LSR);
- up->lsr_saved_flags |= lsr & LSR_SAVE_FLAGS;
- spin_unlock_irqrestore(&port->lock, flags);
-
- return (lsr & BOTH_EMPTY) == BOTH_EMPTY ? TIOCSER_TEMT : 0;
-}
-
-static unsigned int serial8250_get_mctrl(struct uart_port *port)
-{
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
- unsigned int status;
- unsigned int ret;
-
- status = serial8250_modem_status(up);
-
- ret = 0;
- if (status & UART_MSR_DCD)
- ret |= TIOCM_CAR;
- if (status & UART_MSR_RI)
- ret |= TIOCM_RNG;
- if (status & UART_MSR_DSR)
- ret |= TIOCM_DSR;
- if (status & UART_MSR_CTS)
- ret |= TIOCM_CTS;
- return ret;
-}
-
-static void serial8250_set_mctrl(struct uart_port *port, unsigned int mctrl)
-{
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
- unsigned char mcr = 0;
-
- if (mctrl & TIOCM_RTS)
- mcr |= UART_MCR_RTS;
- if (mctrl & TIOCM_DTR)
- mcr |= UART_MCR_DTR;
- if (mctrl & TIOCM_OUT1)
- mcr |= UART_MCR_OUT1;
- if (mctrl & TIOCM_OUT2)
- mcr |= UART_MCR_OUT2;
- if (mctrl & TIOCM_LOOP)
- mcr |= UART_MCR_LOOP;
-
- mcr = (mcr & up->mcr_mask) | up->mcr_force | up->mcr;
-
- serial_port_out(port, UART_MCR, mcr);
-}
-
-static void serial8250_break_ctl(struct uart_port *port, int break_state)
-{
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
- unsigned long flags;
-
- spin_lock_irqsave(&port->lock, flags);
- if (break_state == -1)
- up->lcr |= UART_LCR_SBC;
- else
- up->lcr &= ~UART_LCR_SBC;
- serial_port_out(port, UART_LCR, up->lcr);
- spin_unlock_irqrestore(&port->lock, flags);
-}
-
-/*
- * Wait for transmitter & holding register to empty
- */
-static void wait_for_xmitr(struct uart_8250_port *up, int bits)
-{
- unsigned int status, tmout = 10000;
-
- /* Wait up to 10ms for the character(s) to be sent. */
- for (;;) {
- status = serial_in(up, UART_LSR);
-
- up->lsr_saved_flags |= status & LSR_SAVE_FLAGS;
-
- if ((status & bits) == bits)
- break;
- if (--tmout == 0)
- break;
- udelay(1);
- }
-
- /* Wait up to 1s for flow control if necessary */
- if (up->port.flags & UPF_CONS_FLOW) {
- unsigned int tmout;
- for (tmout = 1000000; tmout; tmout--) {
- unsigned int msr = serial_in(up, UART_MSR);
- up->msr_saved_flags |= msr & MSR_SAVE_FLAGS;
- if (msr & UART_MSR_CTS)
- break;
- udelay(1);
- touch_nmi_watchdog();
- }
- }
-}
-
-#ifdef CONFIG_CONSOLE_POLL
-/*
- * Console polling routines for writing and reading from the uart while
- * in an interrupt or debug context.
- */
-
-static int serial8250_get_poll_char(struct uart_port *port)
-{
- unsigned char lsr = serial_port_in(port, UART_LSR);
-
- if (!(lsr & UART_LSR_DR))
- return NO_POLL_CHAR;
-
- return serial_port_in(port, UART_RX);
-}
-
-
-static void serial8250_put_poll_char(struct uart_port *port,
- unsigned char c)
-{
- unsigned int ier;
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
-
- /*
- * First save the IER then disable the interrupts
- */
- ier = serial_port_in(port, UART_IER);
- if (up->capabilities & UART_CAP_UUE)
- serial_port_out(port, UART_IER, UART_IER_UUE);
- else
- serial_port_out(port, UART_IER, 0);
-
- wait_for_xmitr(up, BOTH_EMPTY);
- /*
- * Send the character out.
- * If a LF, also do CR...
- */
- serial_port_out(port, UART_TX, c);
- if (c == 10) {
- wait_for_xmitr(up, BOTH_EMPTY);
- serial_port_out(port, UART_TX, 13);
- }
-
- /*
- * Finally, wait for transmitter to become empty
- * and restore the IER
- */
- wait_for_xmitr(up, BOTH_EMPTY);
- serial_port_out(port, UART_IER, ier);
-}
-
-#endif /* CONFIG_CONSOLE_POLL */
-
-static int serial8250_startup(struct uart_port *port)
-{
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
- unsigned long flags;
- unsigned char lsr, iir;
- int retval;
-
- if (port->type == PORT_8250_CIR)
- return -ENODEV;
-
- if (!port->fifosize)
- port->fifosize = uart_config[port->type].fifo_size;
- if (!up->tx_loadsz)
- up->tx_loadsz = uart_config[port->type].tx_loadsz;
- if (!up->capabilities)
- up->capabilities = uart_config[port->type].flags;
- up->mcr = 0;
-
- if (port->iotype != up->cur_iotype)
- set_io_from_upio(port);
-
- if (port->type == PORT_16C950) {
- /* Wake up and initialize UART */
- up->acr = 0;
- serial_port_out(port, UART_LCR, UART_LCR_CONF_MODE_B);
- serial_port_out(port, UART_EFR, UART_EFR_ECB);
- serial_port_out(port, UART_IER, 0);
- serial_port_out(port, UART_LCR, 0);
- serial_icr_write(up, UART_CSR, 0); /* Reset the UART */
- serial_port_out(port, UART_LCR, UART_LCR_CONF_MODE_B);
- serial_port_out(port, UART_EFR, UART_EFR_ECB);
- serial_port_out(port, UART_LCR, 0);
- }
-
-#ifdef CONFIG_SERIAL_8250_RSA
- /*
- * If this is an RSA port, see if we can kick it up to the
- * higher speed clock.
- */
- enable_rsa(up);
-#endif
-
- /*
- * Clear the FIFO buffers and disable them.
- * (they will be reenabled in set_termios())
- */
- serial8250_clear_fifos(up);
-
- /*
- * Clear the interrupt registers.
- */
- serial_port_in(port, UART_LSR);
- serial_port_in(port, UART_RX);
- serial_port_in(port, UART_IIR);
- serial_port_in(port, UART_MSR);
-
- /*
- * At this point, there's no way the LSR could still be 0xff;
- * if it is, then bail out, because there's likely no UART
- * here.
- */
- if (!(port->flags & UPF_BUGGY_UART) &&
- (serial_port_in(port, UART_LSR) == 0xff)) {
- printk_ratelimited(KERN_INFO "ttyS%d: LSR safety check engaged!\n",
- serial_index(port));
- return -ENODEV;
- }
-
- /*
- * For a XR16C850, we need to set the trigger levels
- */
- if (port->type == PORT_16850) {
- unsigned char fctr;
-
- serial_out(up, UART_LCR, UART_LCR_CONF_MODE_B);
-
- fctr = serial_in(up, UART_FCTR) & ~(UART_FCTR_RX|UART_FCTR_TX);
- serial_port_out(port, UART_FCTR,
- fctr | UART_FCTR_TRGD | UART_FCTR_RX);
- serial_port_out(port, UART_TRG, UART_TRG_96);
- serial_port_out(port, UART_FCTR,
- fctr | UART_FCTR_TRGD | UART_FCTR_TX);
- serial_port_out(port, UART_TRG, UART_TRG_96);
-
- serial_port_out(port, UART_LCR, 0);
- }
-
- if (port->irq) {
- unsigned char iir1;
- /*
- * Test for UARTs that do not reassert THRE when the
- * transmitter is idle and the interrupt has already
- * been cleared. Real 16550s should always reassert
- * this interrupt whenever the transmitter is idle and
- * the interrupt is enabled. Delays are necessary to
- * allow register changes to become visible.
- */
- spin_lock_irqsave(&port->lock, flags);
- if (up->port.irqflags & IRQF_SHARED)
- disable_irq_nosync(port->irq);
-
- wait_for_xmitr(up, UART_LSR_THRE);
- serial_port_out_sync(port, UART_IER, UART_IER_THRI);
- udelay(1); /* allow THRE to set */
- iir1 = serial_port_in(port, UART_IIR);
- serial_port_out(port, UART_IER, 0);
- serial_port_out_sync(port, UART_IER, UART_IER_THRI);
- udelay(1); /* allow a working UART time to re-assert THRE */
- iir = serial_port_in(port, UART_IIR);
- serial_port_out(port, UART_IER, 0);
-
- if (port->irqflags & IRQF_SHARED)
- enable_irq(port->irq);
- spin_unlock_irqrestore(&port->lock, flags);
-
- /*
- * If the interrupt is not reasserted, or we otherwise
- * don't trust the iir, setup a timer to kick the UART
- * on a regular basis.
- */
- if ((!(iir1 & UART_IIR_NO_INT) && (iir & UART_IIR_NO_INT)) ||
- up->port.flags & UPF_BUG_THRE) {
- up->bugs |= UART_BUG_THRE;
- pr_debug("ttyS%d - using backup timer\n",
- serial_index(port));
- }
- }
-
- /*
- * The above check will only give an accurate result the first time
- * the port is opened so this value needs to be preserved.
- */
- if (up->bugs & UART_BUG_THRE) {
- up->timer.function = serial8250_backup_timeout;
- up->timer.data = (unsigned long)up;
- mod_timer(&up->timer, jiffies +
- uart_poll_timeout(port) + HZ / 5);
- }
-
- /*
- * If the "interrupt" for this port doesn't correspond with any
- * hardware interrupt, we use a timer-based system. The original
- * driver used to do this with IRQ0.
- */
- if (!port->irq) {
- up->timer.data = (unsigned long)up;
- mod_timer(&up->timer, jiffies + uart_poll_timeout(port));
- } else {
- retval = serial_link_irq_chain(up);
- if (retval)
- return retval;
- }
-
- /*
- * Now, initialize the UART
- */
- serial_port_out(port, UART_LCR, UART_LCR_WLEN8);
-
- spin_lock_irqsave(&port->lock, flags);
- if (up->port.flags & UPF_FOURPORT) {
- if (!up->port.irq)
- up->port.mctrl |= TIOCM_OUT1;
- } else
- /*
- * Most PC uarts need OUT2 raised to enable interrupts.
- */
- if (port->irq)
- up->port.mctrl |= TIOCM_OUT2;
-
- serial8250_set_mctrl(port, port->mctrl);
-
-	/*
-	 * Serial over LAN (SoL) hack:
-	 * Intel 8257x Gigabit ethernet chips have a 16550 emulation, to
-	 * be used for Serial Over LAN.  Those chips take longer than a
-	 * normal serial device to signal that transmit data has been
-	 * queued, so the test below generally fails for them.  One
-	 * solution would be to delay reading the IIR, but the required
-	 * timeout is variable and therefore unreliable.  So just skip
-	 * the TX irq test for these chips; that way UART_BUG_TXEN is
-	 * never enabled for them.
-	 */
- if (skip_txen_test || up->port.flags & UPF_NO_TXEN_TEST)
- goto dont_test_tx_en;
-
- /*
- * Do a quick test to see if we receive an
- * interrupt when we enable the TX irq.
- */
- serial_port_out(port, UART_IER, UART_IER_THRI);
- lsr = serial_port_in(port, UART_LSR);
- iir = serial_port_in(port, UART_IIR);
- serial_port_out(port, UART_IER, 0);
-
- if (lsr & UART_LSR_TEMT && iir & UART_IIR_NO_INT) {
- if (!(up->bugs & UART_BUG_TXEN)) {
- up->bugs |= UART_BUG_TXEN;
- pr_debug("ttyS%d - enabling bad tx status workarounds\n",
- serial_index(port));
- }
- } else {
- up->bugs &= ~UART_BUG_TXEN;
- }
-
-dont_test_tx_en:
- spin_unlock_irqrestore(&port->lock, flags);
-
- /*
- * Clear the interrupt registers again for luck, and clear the
- * saved flags to avoid getting false values from polling
- * routines or the previous session.
- */
- serial_port_in(port, UART_LSR);
- serial_port_in(port, UART_RX);
- serial_port_in(port, UART_IIR);
- serial_port_in(port, UART_MSR);
- up->lsr_saved_flags = 0;
- up->msr_saved_flags = 0;
-
- /*
- * Request DMA channels for both RX and TX.
- */
- if (up->dma) {
- retval = serial8250_request_dma(up);
- if (retval) {
- pr_warn_ratelimited("ttyS%d - failed to request DMA\n",
- serial_index(port));
- up->dma = NULL;
- }
- }
-
- /*
- * Finally, enable interrupts. Note: Modem status interrupts
- * are set via set_termios(), which will be occurring imminently
- * anyway, so we don't enable them here.
- */
- up->ier = UART_IER_RLSI | UART_IER_RDI;
- serial_port_out(port, UART_IER, up->ier);
-
- if (port->flags & UPF_FOURPORT) {
- unsigned int icp;
- /*
- * Enable interrupts on the AST Fourport board
- */
- icp = (port->iobase & 0xfe0) | 0x01f;
- outb_p(0x80, icp);
- inb_p(icp);
- }
-
- return 0;
-}
-
-static void serial8250_shutdown(struct uart_port *port)
-{
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
- unsigned long flags;
-
- /*
- * Disable interrupts from this port
- */
- up->ier = 0;
- serial_port_out(port, UART_IER, 0);
-
- if (up->dma)
- serial8250_release_dma(up);
-
- spin_lock_irqsave(&port->lock, flags);
- if (port->flags & UPF_FOURPORT) {
- /* reset interrupts on the AST Fourport board */
- inb((port->iobase & 0xfe0) | 0x1f);
- port->mctrl |= TIOCM_OUT1;
- } else
- port->mctrl &= ~TIOCM_OUT2;
-
- serial8250_set_mctrl(port, port->mctrl);
- spin_unlock_irqrestore(&port->lock, flags);
-
- /*
- * Disable break condition and FIFOs
- */
- serial_port_out(port, UART_LCR,
- serial_port_in(port, UART_LCR) & ~UART_LCR_SBC);
- serial8250_clear_fifos(up);
-
-#ifdef CONFIG_SERIAL_8250_RSA
- /*
- * Reset the RSA board back to 115kbps compat mode.
- */
- disable_rsa(up);
-#endif
-
- /*
- * Read data port to reset things, and then unlink from
- * the IRQ chain.
- */
- serial_port_in(port, UART_RX);
-
- del_timer_sync(&up->timer);
- up->timer.function = serial8250_timeout;
- if (port->irq)
- serial_unlink_irq_chain(up);
-}
-
-static unsigned int serial8250_get_divisor(struct uart_port *port, unsigned int baud)
-{
- unsigned int quot;
-
- /*
- * Handle magic divisors for baud rates above baud_base on
- * SMSC SuperIO chips.
- */
- if ((port->flags & UPF_MAGIC_MULTIPLIER) &&
- baud == (port->uartclk/4))
- quot = 0x8001;
- else if ((port->flags & UPF_MAGIC_MULTIPLIER) &&
- baud == (port->uartclk/8))
- quot = 0x8002;
- else
- quot = uart_get_divisor(port, baud);
-
- return quot;
-}
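-
-/*
- * A worked example for the magic divisors above (illustrative figures,
- * not from any particular board): with uartclk = 1843200, baud_base is
- * 115200, so a requested rate of 460800 (uartclk/4) selects the magic
- * divisor 0x8001 and 230400 (uartclk/8) selects 0x8002.  Any other rate
- * falls through to uart_get_divisor(), which computes roughly
- * uartclk / (16 * baud).
- */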
-
-void
-serial8250_do_set_termios(struct uart_port *port, struct ktermios *termios,
- struct ktermios *old)
-{
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
- unsigned char cval, fcr = 0;
- unsigned long flags;
- unsigned int baud, quot;
- int fifo_bug = 0;
-
- switch (termios->c_cflag & CSIZE) {
- case CS5:
- cval = UART_LCR_WLEN5;
- break;
- case CS6:
- cval = UART_LCR_WLEN6;
- break;
- case CS7:
- cval = UART_LCR_WLEN7;
- break;
- default:
- case CS8:
- cval = UART_LCR_WLEN8;
- break;
- }
-
- if (termios->c_cflag & CSTOPB)
- cval |= UART_LCR_STOP;
- if (termios->c_cflag & PARENB) {
- cval |= UART_LCR_PARITY;
- if (up->bugs & UART_BUG_PARITY)
- fifo_bug = 1;
- }
- if (!(termios->c_cflag & PARODD))
- cval |= UART_LCR_EPAR;
-#ifdef CMSPAR
- if (termios->c_cflag & CMSPAR)
- cval |= UART_LCR_SPAR;
-#endif
-
- /*
- * Ask the core to calculate the divisor for us.
- */
- baud = uart_get_baud_rate(port, termios, old,
- port->uartclk / 16 / 0xffff,
- port->uartclk / 16);
- quot = serial8250_get_divisor(port, baud);
-
- /*
- * Oxford Semi 952 rev B workaround
- */
- if (up->bugs & UART_BUG_QUOT && (quot & 0xff) == 0)
- quot++;
-
- if (up->capabilities & UART_CAP_FIFO && port->fifosize > 1) {
- fcr = uart_config[port->type].fcr;
- if (baud < 2400 || fifo_bug) {
- fcr &= ~UART_FCR_TRIGGER_MASK;
- fcr |= UART_FCR_TRIGGER_1;
- }
- }
-
- /*
- * MCR-based auto flow control. When AFE is enabled, RTS will be
- * deasserted when the receive FIFO contains more characters than
- * the trigger, or the MCR RTS bit is cleared. In the case where
- * the remote UART is not using CTS auto flow control, we must
- * have sufficient FIFO entries for the latency of the remote
- * UART to respond. IOW, at least 32 bytes of FIFO.
- */
- if (up->capabilities & UART_CAP_AFE && port->fifosize >= 32) {
- up->mcr &= ~UART_MCR_AFE;
- if (termios->c_cflag & CRTSCTS)
- up->mcr |= UART_MCR_AFE;
- }
-
- /*
- * Ok, we're now changing the port state. Do it with
- * interrupts disabled.
- */
- spin_lock_irqsave(&port->lock, flags);
-
- /*
- * Update the per-port timeout.
- */
- uart_update_timeout(port, termios->c_cflag, baud);
-
- port->read_status_mask = UART_LSR_OE | UART_LSR_THRE | UART_LSR_DR;
- if (termios->c_iflag & INPCK)
- port->read_status_mask |= UART_LSR_FE | UART_LSR_PE;
- if (termios->c_iflag & (BRKINT | PARMRK))
- port->read_status_mask |= UART_LSR_BI;
-
- /*
-	 * Characters to ignore
- */
- port->ignore_status_mask = 0;
- if (termios->c_iflag & IGNPAR)
- port->ignore_status_mask |= UART_LSR_PE | UART_LSR_FE;
- if (termios->c_iflag & IGNBRK) {
- port->ignore_status_mask |= UART_LSR_BI;
- /*
- * If we're ignoring parity and break indicators,
- * ignore overruns too (for real raw support).
- */
- if (termios->c_iflag & IGNPAR)
- port->ignore_status_mask |= UART_LSR_OE;
- }
-
- /*
- * ignore all characters if CREAD is not set
- */
- if ((termios->c_cflag & CREAD) == 0)
- port->ignore_status_mask |= UART_LSR_DR;
-
- /*
- * CTS flow control flag and modem status interrupts
- */
- up->ier &= ~UART_IER_MSI;
- if (!(up->bugs & UART_BUG_NOMSR) &&
- UART_ENABLE_MS(&up->port, termios->c_cflag))
- up->ier |= UART_IER_MSI;
- if (up->capabilities & UART_CAP_UUE)
- up->ier |= UART_IER_UUE;
- if (up->capabilities & UART_CAP_RTOIE)
- up->ier |= UART_IER_RTOIE;
-
- serial_port_out(port, UART_IER, up->ier);
-
- if (up->capabilities & UART_CAP_EFR) {
- unsigned char efr = 0;
- /*
- * TI16C752/Startech hardware flow control. FIXME:
- * - TI16C752 requires control thresholds to be set.
- * - UART_MCR_RTS is ineffective if auto-RTS mode is enabled.
- */
- if (termios->c_cflag & CRTSCTS)
- efr |= UART_EFR_CTS;
-
- serial_port_out(port, UART_LCR, UART_LCR_CONF_MODE_B);
- if (port->flags & UPF_EXAR_EFR)
- serial_port_out(port, UART_XR_EFR, efr);
- else
- serial_port_out(port, UART_EFR, efr);
- }
-
- /* Workaround to enable 115200 baud on OMAP1510 internal ports */
- if (is_omap1510_8250(up)) {
- if (baud == 115200) {
- quot = 1;
- serial_port_out(port, UART_OMAP_OSC_12M_SEL, 1);
- } else
- serial_port_out(port, UART_OMAP_OSC_12M_SEL, 0);
- }
-
- /*
-	 * For NatSemi, switch to bank 2 not bank 1 to avoid resetting EXCR2;
-	 * otherwise just set DLAB.
- */
- if (up->capabilities & UART_NATSEMI)
- serial_port_out(port, UART_LCR, 0xe0);
- else
- serial_port_out(port, UART_LCR, cval | UART_LCR_DLAB);
-
- serial_dl_write(up, quot);
-
- /*
- * LCR DLAB must be set to enable 64-byte FIFO mode. If the FCR
- * is written without DLAB set, this mode will be disabled.
- */
- if (port->type == PORT_16750)
- serial_port_out(port, UART_FCR, fcr);
-
- serial_port_out(port, UART_LCR, cval); /* reset DLAB */
- up->lcr = cval; /* Save LCR */
- if (port->type != PORT_16750) {
- /* emulated UARTs (Lucent Venus 167x) need two steps */
- if (fcr & UART_FCR_ENABLE_FIFO)
- serial_port_out(port, UART_FCR, UART_FCR_ENABLE_FIFO);
- serial_port_out(port, UART_FCR, fcr); /* set fcr */
- }
- serial8250_set_mctrl(port, port->mctrl);
- spin_unlock_irqrestore(&port->lock, flags);
- /* Don't rewrite B0 */
- if (tty_termios_baud_rate(termios))
- tty_termios_encode_baud_rate(termios, baud, baud);
-}
-EXPORT_SYMBOL(serial8250_do_set_termios);
-
-static void
-serial8250_set_termios(struct uart_port *port, struct ktermios *termios,
- struct ktermios *old)
-{
- if (port->set_termios)
- port->set_termios(port, termios, old);
- else
- serial8250_do_set_termios(port, termios, old);
-}
-
-static void
-serial8250_set_ldisc(struct uart_port *port, int new)
-{
- if (new == N_PPS) {
- port->flags |= UPF_HARDPPS_CD;
- serial8250_enable_ms(port);
- } else
- port->flags &= ~UPF_HARDPPS_CD;
-}
-
-
-void serial8250_do_pm(struct uart_port *port, unsigned int state,
- unsigned int oldstate)
-{
- struct uart_8250_port *p =
- container_of(port, struct uart_8250_port, port);
-
- serial8250_set_sleep(p, state != 0);
-}
-EXPORT_SYMBOL(serial8250_do_pm);
-
-static void
-serial8250_pm(struct uart_port *port, unsigned int state,
- unsigned int oldstate)
-{
- if (port->pm)
- port->pm(port, state, oldstate);
- else
- serial8250_do_pm(port, state, oldstate);
-}
-
-static unsigned int serial8250_port_size(struct uart_8250_port *pt)
-{
- if (pt->port.iotype == UPIO_AU)
- return 0x1000;
- if (is_omap1_8250(pt))
- return 0x16 << pt->port.regshift;
-
- return 8 << pt->port.regshift;
-}
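-
-/*
- * For example (illustrative): a classic port with regshift = 0 decodes
- * the usual 8 registers, a 32-bit-strided MMIO port with regshift = 2
- * needs an 8 << 2 = 32 byte window, and Alchemy (UPIO_AU) ports always
- * map a fixed 4KiB region.
- */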
-
-/*
- * Resource handling.
- */
-static int serial8250_request_std_resource(struct uart_8250_port *up)
-{
- unsigned int size = serial8250_port_size(up);
- struct uart_port *port = &up->port;
- int ret = 0;
-
- switch (port->iotype) {
- case UPIO_AU:
- case UPIO_TSI:
- case UPIO_MEM32:
- case UPIO_MEM:
- if (!port->mapbase)
- break;
-
- if (!request_mem_region(port->mapbase, size, "serial")) {
- ret = -EBUSY;
- break;
- }
-
- if (port->flags & UPF_IOREMAP) {
- port->membase = ioremap_nocache(port->mapbase, size);
- if (!port->membase) {
- release_mem_region(port->mapbase, size);
- ret = -ENOMEM;
- }
- }
- break;
-
- case UPIO_HUB6:
- case UPIO_PORT:
- if (!request_region(port->iobase, size, "serial"))
- ret = -EBUSY;
- break;
- }
- return ret;
-}
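-
-/*
- * A sketch of the MMIO case above (addresses are made up): a port with
- * mapbase = 0x10000000, regshift = 2 and UPF_IOREMAP set requests a
- * 32-byte mem region and then ioremaps it, storing the cookie in
- * port->membase; if the ioremap fails, the region is released again.
- */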
-
-static void serial8250_release_std_resource(struct uart_8250_port *up)
-{
- unsigned int size = serial8250_port_size(up);
- struct uart_port *port = &up->port;
-
- switch (port->iotype) {
- case UPIO_AU:
- case UPIO_TSI:
- case UPIO_MEM32:
- case UPIO_MEM:
- if (!port->mapbase)
- break;
-
- if (port->flags & UPF_IOREMAP) {
- iounmap(port->membase);
- port->membase = NULL;
- }
-
- release_mem_region(port->mapbase, size);
- break;
-
- case UPIO_HUB6:
- case UPIO_PORT:
- release_region(port->iobase, size);
- break;
- }
-}
-
-static int serial8250_request_rsa_resource(struct uart_8250_port *up)
-{
- unsigned long start = UART_RSA_BASE << up->port.regshift;
- unsigned int size = 8 << up->port.regshift;
- struct uart_port *port = &up->port;
- int ret = -EINVAL;
-
- switch (port->iotype) {
- case UPIO_HUB6:
- case UPIO_PORT:
- start += port->iobase;
- if (request_region(start, size, "serial-rsa"))
- ret = 0;
- else
- ret = -EBUSY;
- break;
- }
-
- return ret;
-}
-
-static void serial8250_release_rsa_resource(struct uart_8250_port *up)
-{
- unsigned long offset = UART_RSA_BASE << up->port.regshift;
- unsigned int size = 8 << up->port.regshift;
- struct uart_port *port = &up->port;
-
- switch (port->iotype) {
- case UPIO_HUB6:
- case UPIO_PORT:
- release_region(port->iobase + offset, size);
- break;
- }
-}
-
-static void serial8250_release_port(struct uart_port *port)
-{
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
-
- serial8250_release_std_resource(up);
- if (port->type == PORT_RSA)
- serial8250_release_rsa_resource(up);
-}
-
-static int serial8250_request_port(struct uart_port *port)
-{
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
- int ret;
-
- if (port->type == PORT_8250_CIR)
- return -ENODEV;
-
- ret = serial8250_request_std_resource(up);
- if (ret == 0 && port->type == PORT_RSA) {
- ret = serial8250_request_rsa_resource(up);
- if (ret < 0)
- serial8250_release_std_resource(up);
- }
-
- return ret;
-}
-
-static void serial8250_config_port(struct uart_port *port, int flags)
-{
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
- int probeflags = PROBE_ANY;
- int ret;
-
- if (port->type == PORT_8250_CIR)
- return;
-
- /*
- * Find the region that we can probe for. This in turn
- * tells us whether we can probe for the type of port.
- */
- ret = serial8250_request_std_resource(up);
- if (ret < 0)
- return;
-
- ret = serial8250_request_rsa_resource(up);
- if (ret < 0)
- probeflags &= ~PROBE_RSA;
-
- if (port->iotype != up->cur_iotype)
- set_io_from_upio(port);
-
- if (flags & UART_CONFIG_TYPE)
- autoconfig(up, probeflags);
-
- /* if access method is AU, it is a 16550 with a quirk */
- if (port->type == PORT_16550A && port->iotype == UPIO_AU)
- up->bugs |= UART_BUG_NOMSR;
-
- if (port->type != PORT_UNKNOWN && flags & UART_CONFIG_IRQ)
- autoconfig_irq(up);
-
- if (port->type != PORT_RSA && probeflags & PROBE_RSA)
- serial8250_release_rsa_resource(up);
- if (port->type == PORT_UNKNOWN)
- serial8250_release_std_resource(up);
-
- /* Fixme: probably not the best place for this */
- if ((port->type == PORT_XR17V35X) ||
- (port->type == PORT_XR17D15X))
- port->handle_irq = exar_handle_irq;
-}
-
-static int
-serial8250_verify_port(struct uart_port *port, struct serial_struct *ser)
-{
- if (ser->irq >= nr_irqs || ser->irq < 0 ||
- ser->baud_base < 9600 || ser->type < PORT_UNKNOWN ||
- ser->type >= ARRAY_SIZE(uart_config) || ser->type == PORT_CIRRUS ||
- ser->type == PORT_STARTECH)
- return -EINVAL;
- return 0;
-}
-
-static const char *
-serial8250_type(struct uart_port *port)
-{
- int type = port->type;
-
- if (type >= ARRAY_SIZE(uart_config))
- type = 0;
- return uart_config[type].name;
-}
-
-static struct uart_ops serial8250_pops = {
- .tx_empty = serial8250_tx_empty,
- .set_mctrl = serial8250_set_mctrl,
- .get_mctrl = serial8250_get_mctrl,
- .stop_tx = serial8250_stop_tx,
- .start_tx = serial8250_start_tx,
- .stop_rx = serial8250_stop_rx,
- .enable_ms = serial8250_enable_ms,
- .break_ctl = serial8250_break_ctl,
- .startup = serial8250_startup,
- .shutdown = serial8250_shutdown,
- .set_termios = serial8250_set_termios,
- .set_ldisc = serial8250_set_ldisc,
- .pm = serial8250_pm,
- .type = serial8250_type,
- .release_port = serial8250_release_port,
- .request_port = serial8250_request_port,
- .config_port = serial8250_config_port,
- .verify_port = serial8250_verify_port,
-#ifdef CONFIG_CONSOLE_POLL
- .poll_get_char = serial8250_get_poll_char,
- .poll_put_char = serial8250_put_poll_char,
-#endif
-};
-
-static struct uart_8250_port serial8250_ports[UART_NR];
-
-static void (*serial8250_isa_config)(int port, struct uart_port *up,
- unsigned short *capabilities);
-
-void serial8250_set_isa_configurator(
- void (*v)(int port, struct uart_port *up, unsigned short *capabilities))
-{
- serial8250_isa_config = v;
-}
-EXPORT_SYMBOL(serial8250_set_isa_configurator);
-
-static void __init serial8250_isa_init_ports(void)
-{
- struct uart_8250_port *up;
- static int first = 1;
- int i, irqflag = 0;
-
- if (!first)
- return;
- first = 0;
-
- if (nr_uarts > UART_NR)
- nr_uarts = UART_NR;
-
- for (i = 0; i < nr_uarts; i++) {
- struct uart_8250_port *up = &serial8250_ports[i];
- struct uart_port *port = &up->port;
-
- port->line = i;
- spin_lock_init(&port->lock);
-
- init_timer(&up->timer);
- up->timer.function = serial8250_timeout;
- up->cur_iotype = 0xFF;
-
- /*
- * ALPHA_KLUDGE_MCR needs to be killed.
- */
- up->mcr_mask = ~ALPHA_KLUDGE_MCR;
- up->mcr_force = ALPHA_KLUDGE_MCR;
-
- port->ops = &serial8250_pops;
- }
-
- if (share_irqs)
- irqflag = IRQF_SHARED;
-
- for (i = 0, up = serial8250_ports;
- i < ARRAY_SIZE(old_serial_port) && i < nr_uarts;
- i++, up++) {
- struct uart_port *port = &up->port;
-
- port->iobase = old_serial_port[i].port;
- port->irq = irq_canonicalize(old_serial_port[i].irq);
- port->irqflags = old_serial_port[i].irqflags;
- port->uartclk = old_serial_port[i].baud_base * 16;
- port->flags = old_serial_port[i].flags;
- port->hub6 = old_serial_port[i].hub6;
- port->membase = old_serial_port[i].iomem_base;
- port->iotype = old_serial_port[i].io_type;
- port->regshift = old_serial_port[i].iomem_reg_shift;
- set_io_from_upio(port);
- port->irqflags |= irqflag;
- if (serial8250_isa_config != NULL)
- serial8250_isa_config(i, &up->port, &up->capabilities);
-
- }
-}
-
-static void
-serial8250_init_fixed_type_port(struct uart_8250_port *up, unsigned int type)
-{
- up->port.type = type;
- if (!up->port.fifosize)
- up->port.fifosize = uart_config[type].fifo_size;
- if (!up->tx_loadsz)
- up->tx_loadsz = uart_config[type].tx_loadsz;
- if (!up->capabilities)
- up->capabilities = uart_config[type].flags;
-}
-
-static void __init
-serial8250_register_ports(struct uart_driver *drv, struct device *dev)
-{
- int i;
-
- for (i = 0; i < nr_uarts; i++) {
- struct uart_8250_port *up = &serial8250_ports[i];
-
- if (up->port.dev)
- continue;
-
- up->port.dev = dev;
-
- if (up->port.flags & UPF_FIXED_TYPE)
- serial8250_init_fixed_type_port(up, up->port.type);
-
- uart_add_one_port(drv, &up->port);
- }
-}
-
-#ifdef CONFIG_SERIAL_8250_CONSOLE
-
-static void serial8250_console_putchar(struct uart_port *port, int ch)
-{
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
-
- wait_for_xmitr(up, UART_LSR_THRE);
- serial_port_out(port, UART_TX, ch);
-}
-
-/*
- * Print a string to the serial port trying not to disturb
- * any possible real use of the port...
- *
- * The console_lock must be held when we get here.
- */
-static void
-serial8250_console_write(struct console *co, const char *s, unsigned int count)
-{
- struct uart_8250_port *up = &serial8250_ports[co->index];
- struct uart_port *port = &up->port;
- unsigned long flags;
- unsigned int ier;
- int locked = 1;
-
- touch_nmi_watchdog();
-
- local_irq_save(flags);
- if (port->sysrq) {
- /* serial8250_handle_irq() already took the lock */
- locked = 0;
- } else if (oops_in_progress) {
- locked = spin_trylock(&port->lock);
- } else
- spin_lock(&port->lock);
-
- /*
-	 * First save the IER, then disable the interrupts.
- */
- ier = serial_port_in(port, UART_IER);
-
- if (up->capabilities & UART_CAP_UUE)
- serial_port_out(port, UART_IER, UART_IER_UUE);
- else
- serial_port_out(port, UART_IER, 0);
-
- uart_console_write(port, s, count, serial8250_console_putchar);
-
- /*
- * Finally, wait for transmitter to become empty
- * and restore the IER
- */
- wait_for_xmitr(up, BOTH_EMPTY);
- serial_port_out(port, UART_IER, ier);
-
- /*
- * The receive handling will happen properly because the
- * receive ready bit will still be set; it is not cleared
-	 * on read.  However, modem status handling will not: we must
-	 * call serial8250_modem_status() ourselves if anything was
-	 * saved in msr_saved_flags while processing with interrupts off.
- */
- if (up->msr_saved_flags)
- serial8250_modem_status(up);
-
- if (locked)
- spin_unlock(&port->lock);
- local_irq_restore(flags);
-}
-
-static int __init serial8250_console_setup(struct console *co, char *options)
-{
- struct uart_port *port;
- int baud = 9600;
- int bits = 8;
- int parity = 'n';
- int flow = 'n';
-
- /*
- * Check whether an invalid uart number has been specified, and
- * if so, search for the first available port that does have
- * console support.
- */
- if (co->index >= nr_uarts)
- co->index = 0;
- port = &serial8250_ports[co->index].port;
- if (!port->iobase && !port->membase)
- return -ENODEV;
-
- if (options)
- uart_parse_options(options, &baud, &parity, &bits, &flow);
-
- return uart_set_options(port, co, baud, parity, bits, flow);
-}
-
-static int serial8250_console_early_setup(void)
-{
- return serial8250_find_port_for_earlycon();
-}
-
-static struct console serial8250_console = {
- .name = "ttyS",
- .write = serial8250_console_write,
- .device = uart_console_device,
- .setup = serial8250_console_setup,
- .early_setup = serial8250_console_early_setup,
- .flags = CON_PRINTBUFFER | CON_ANYTIME,
- .index = -1,
- .data = &serial8250_reg,
-};
-
-static int __init serial8250_console_init(void)
-{
- serial8250_isa_init_ports();
- register_console(&serial8250_console);
- return 0;
-}
-console_initcall(serial8250_console_init);
-
-int serial8250_find_port(struct uart_port *p)
-{
- int line;
- struct uart_port *port;
-
- for (line = 0; line < nr_uarts; line++) {
- port = &serial8250_ports[line].port;
- if (uart_match_port(p, port))
- return line;
- }
- return -ENODEV;
-}
-
-#define SERIAL8250_CONSOLE &serial8250_console
-#else
-#define SERIAL8250_CONSOLE NULL
-#endif
-
-static struct uart_driver serial8250_reg = {
- .owner = THIS_MODULE,
- .driver_name = "serial",
- .dev_name = "ttyS",
- .major = TTY_MAJOR,
- .minor = 64,
- .cons = SERIAL8250_CONSOLE,
-};
-
-/*
- * early_serial_setup - early registration for 8250 ports
- *
- * Set up an 8250 port structure prior to console initialisation.  Using
- * it after console initialisation will cause undefined behaviour.
- */
-int __init early_serial_setup(struct uart_port *port)
-{
- struct uart_port *p;
-
- if (port->line >= ARRAY_SIZE(serial8250_ports))
- return -ENODEV;
-
- serial8250_isa_init_ports();
- p = &serial8250_ports[port->line].port;
- p->iobase = port->iobase;
- p->membase = port->membase;
- p->irq = port->irq;
- p->irqflags = port->irqflags;
- p->uartclk = port->uartclk;
- p->fifosize = port->fifosize;
- p->regshift = port->regshift;
- p->iotype = port->iotype;
- p->flags = port->flags;
- p->mapbase = port->mapbase;
- p->private_data = port->private_data;
- p->type = port->type;
- p->line = port->line;
-
- set_io_from_upio(p);
- if (port->serial_in)
- p->serial_in = port->serial_in;
- if (port->serial_out)
- p->serial_out = port->serial_out;
- if (port->handle_irq)
- p->handle_irq = port->handle_irq;
- else
- p->handle_irq = serial8250_default_handle_irq;
-
- return 0;
-}
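-
-/*
- * Typical use (hypothetical board code, shown for illustration only):
- *
- *	static struct uart_port board_port = {
- *		.line		= 0,
- *		.iobase		= 0x3f8,
- *		.irq		= 4,
- *		.uartclk	= 1843200,
- *		.iotype		= UPIO_PORT,
- *		.flags		= UPF_BOOT_AUTOCONF,
- *	};
- *
- *	early_serial_setup(&board_port);
- *
- * called from machine setup code before console initialisation.
- */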
-
-/**
- * serial8250_suspend_port - suspend one serial port
- * @line: serial line number
- *
- * Suspend one serial port.
- */
-void serial8250_suspend_port(int line)
-{
- uart_suspend_port(&serial8250_reg, &serial8250_ports[line].port);
-}
-
-/**
- * serial8250_resume_port - resume one serial port
- * @line: serial line number
- *
- * Resume one serial port.
- */
-void serial8250_resume_port(int line)
-{
- struct uart_8250_port *up = &serial8250_ports[line];
- struct uart_port *port = &up->port;
-
- if (up->capabilities & UART_NATSEMI) {
- /* Ensure it's still in high speed mode */
- serial_port_out(port, UART_LCR, 0xE0);
-
- ns16550a_goto_highspeed(up);
-
- serial_port_out(port, UART_LCR, 0);
- port->uartclk = 921600*16;
- }
- uart_resume_port(&serial8250_reg, port);
-}
-
-/*
- * Register a set of serial devices attached to a platform device. The
- * list is terminated with a zero flags entry, which means we expect
- * all entries to have at least UPF_BOOT_AUTOCONF set.
- */
-static int serial8250_probe(struct platform_device *dev)
-{
- struct plat_serial8250_port *p = dev->dev.platform_data;
- struct uart_8250_port uart;
- int ret, i, irqflag = 0;
-
- memset(&uart, 0, sizeof(uart));
-
- if (share_irqs)
- irqflag = IRQF_SHARED;
-
- for (i = 0; p && p->flags != 0; p++, i++) {
- uart.port.iobase = p->iobase;
- uart.port.membase = p->membase;
- uart.port.irq = p->irq;
- uart.port.irqflags = p->irqflags;
- uart.port.uartclk = p->uartclk;
- uart.port.regshift = p->regshift;
- uart.port.iotype = p->iotype;
- uart.port.flags = p->flags;
- uart.port.mapbase = p->mapbase;
- uart.port.hub6 = p->hub6;
- uart.port.private_data = p->private_data;
- uart.port.type = p->type;
- uart.port.serial_in = p->serial_in;
- uart.port.serial_out = p->serial_out;
- uart.port.handle_irq = p->handle_irq;
- uart.port.handle_break = p->handle_break;
- uart.port.set_termios = p->set_termios;
- uart.port.pm = p->pm;
- uart.port.dev = &dev->dev;
- uart.port.irqflags |= irqflag;
- ret = serial8250_register_8250_port(&uart);
- if (ret < 0) {
- dev_err(&dev->dev, "unable to register port at index %d "
- "(IO%lx MEM%llx IRQ%d): %d\n", i,
- p->iobase, (unsigned long long)p->mapbase,
- p->irq, ret);
- }
- }
- return 0;
-}
-
-/*
- * Remove serial ports registered against a platform device.
- */
-static int serial8250_remove(struct platform_device *dev)
-{
- int i;
-
- for (i = 0; i < nr_uarts; i++) {
- struct uart_8250_port *up = &serial8250_ports[i];
-
- if (up->port.dev == &dev->dev)
- serial8250_unregister_port(i);
- }
- return 0;
-}
-
-static int serial8250_suspend(struct platform_device *dev, pm_message_t state)
-{
- int i;
-
- for (i = 0; i < UART_NR; i++) {
- struct uart_8250_port *up = &serial8250_ports[i];
-
- if (up->port.type != PORT_UNKNOWN && up->port.dev == &dev->dev)
- uart_suspend_port(&serial8250_reg, &up->port);
- }
-
- return 0;
-}
-
-static int serial8250_resume(struct platform_device *dev)
-{
- int i;
-
- for (i = 0; i < UART_NR; i++) {
- struct uart_8250_port *up = &serial8250_ports[i];
-
- if (up->port.type != PORT_UNKNOWN && up->port.dev == &dev->dev)
- serial8250_resume_port(i);
- }
-
- return 0;
-}
-
-static struct platform_driver serial8250_isa_driver = {
- .probe = serial8250_probe,
- .remove = serial8250_remove,
- .suspend = serial8250_suspend,
- .resume = serial8250_resume,
- .driver = {
- .name = "serial8250",
- .owner = THIS_MODULE,
- },
-};
-
-/*
- * This "device" covers _all_ ISA 8250-compatible serial devices listed
- * in the table in include/asm/serial.h
- */
-static struct platform_device *serial8250_isa_devs;
-
-/*
- * serial8250_register_8250_port and serial8250_unregister_port allow for
- * 16x50 serial ports to be configured at run-time, to support PCMCIA
- * modems and PCI multiport cards.
- */
-static DEFINE_MUTEX(serial_mutex);
-
-static struct uart_8250_port *serial8250_find_match_or_unused(struct uart_port *port)
-{
- int i;
-
- /*
- * First, find a port entry which matches.
- */
- for (i = 0; i < nr_uarts; i++)
- if (uart_match_port(&serial8250_ports[i].port, port))
- return &serial8250_ports[i];
-
- /*
- * We didn't find a matching entry, so look for the first
- * free entry. We look for one which hasn't been previously
- * used (indicated by zero iobase).
- */
- for (i = 0; i < nr_uarts; i++)
- if (serial8250_ports[i].port.type == PORT_UNKNOWN &&
- serial8250_ports[i].port.iobase == 0)
- return &serial8250_ports[i];
-
- /*
- * That also failed. Last resort is to find any entry which
- * doesn't have a real port associated with it.
- */
- for (i = 0; i < nr_uarts; i++)
- if (serial8250_ports[i].port.type == PORT_UNKNOWN)
- return &serial8250_ports[i];
-
- return NULL;
-}
-
-/**
- * serial8250_register_8250_port - register a serial port
- * @up: serial port template
- *
- * Configure the serial port specified by the request. If the
- * port exists and is in use, it is hung up and unregistered
- * first.
- *
- * The port is then probed and, if necessary, the IRQ is autodetected.
- * If this fails, an error is returned.
- *
- * On success the port is ready to use and the line number is returned.
- */
-int serial8250_register_8250_port(struct uart_8250_port *up)
-{
- struct uart_8250_port *uart;
- int ret = -ENOSPC;
-
- if (up->port.uartclk == 0)
- return -EINVAL;
-
- mutex_lock(&serial_mutex);
-
- uart = serial8250_find_match_or_unused(&up->port);
- if (uart && uart->port.type != PORT_8250_CIR) {
- if (uart->port.dev)
- uart_remove_one_port(&serial8250_reg, &uart->port);
-
- uart->port.iobase = up->port.iobase;
- uart->port.membase = up->port.membase;
- uart->port.irq = up->port.irq;
- uart->port.irqflags = up->port.irqflags;
- uart->port.uartclk = up->port.uartclk;
- uart->port.fifosize = up->port.fifosize;
- uart->port.regshift = up->port.regshift;
- uart->port.iotype = up->port.iotype;
- uart->port.flags = up->port.flags | UPF_BOOT_AUTOCONF;
- uart->bugs = up->bugs;
- uart->port.mapbase = up->port.mapbase;
- uart->port.private_data = up->port.private_data;
- uart->port.fifosize = up->port.fifosize;
- uart->tx_loadsz = up->tx_loadsz;
- uart->capabilities = up->capabilities;
-
- if (up->port.dev)
- uart->port.dev = up->port.dev;
-
- if (up->port.flags & UPF_FIXED_TYPE)
- serial8250_init_fixed_type_port(uart, up->port.type);
-
- set_io_from_upio(&uart->port);
- /* Possibly override default I/O functions. */
- if (up->port.serial_in)
- uart->port.serial_in = up->port.serial_in;
- if (up->port.serial_out)
- uart->port.serial_out = up->port.serial_out;
- if (up->port.handle_irq)
- uart->port.handle_irq = up->port.handle_irq;
- /* Possibly override set_termios call */
- if (up->port.set_termios)
- uart->port.set_termios = up->port.set_termios;
- if (up->port.pm)
- uart->port.pm = up->port.pm;
- if (up->port.handle_break)
- uart->port.handle_break = up->port.handle_break;
- if (up->dl_read)
- uart->dl_read = up->dl_read;
- if (up->dl_write)
- uart->dl_write = up->dl_write;
- if (up->dma)
- uart->dma = up->dma;
-
- if (serial8250_isa_config != NULL)
- serial8250_isa_config(0, &uart->port,
- &uart->capabilities);
-
- ret = uart_add_one_port(&serial8250_reg, &uart->port);
- if (ret == 0)
- ret = uart->port.line;
- }
- mutex_unlock(&serial_mutex);
-
- return ret;
-}
-EXPORT_SYMBOL(serial8250_register_8250_port);
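-
-/*
- * Sketch of a caller (hypothetical, for illustration): a PCI multiport
- * driver fills in only the fields it knows and lets autoconfig find the
- * rest:
- *
- *	struct uart_8250_port uart;
- *	int line;
- *
- *	memset(&uart, 0, sizeof(uart));
- *	uart.port.iobase  = pci_resource_start(pdev, 0);
- *	uart.port.irq     = pdev->irq;
- *	uart.port.uartclk = 1843200;
- *	uart.port.iotype  = UPIO_PORT;
- *	uart.port.flags   = UPF_SHARE_IRQ;
- *	line = serial8250_register_8250_port(&uart);
- *
- * A negative return value is an error; otherwise it is the ttyS line
- * number the port was assigned.
- */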
-
-/**
- * serial8250_unregister_port - remove a 16x50 serial port at runtime
- * @line: serial line number
- *
- * Remove one serial port. This may not be called from interrupt
- * context. We hand the port back to our control.
- */
-void serial8250_unregister_port(int line)
-{
- struct uart_8250_port *uart = &serial8250_ports[line];
-
- mutex_lock(&serial_mutex);
- uart_remove_one_port(&serial8250_reg, &uart->port);
- if (serial8250_isa_devs) {
- uart->port.flags &= ~UPF_BOOT_AUTOCONF;
- uart->port.type = PORT_UNKNOWN;
- uart->port.dev = &serial8250_isa_devs->dev;
- uart->capabilities = uart_config[uart->port.type].flags;
- uart_add_one_port(&serial8250_reg, &uart->port);
- } else {
- uart->port.dev = NULL;
- }
- mutex_unlock(&serial_mutex);
-}
-EXPORT_SYMBOL(serial8250_unregister_port);
-
-static int __init serial8250_init(void)
-{
- int ret;
-
- serial8250_isa_init_ports();
-
- printk(KERN_INFO "Serial: 8250/16550 driver, "
- "%d ports, IRQ sharing %sabled\n", nr_uarts,
- share_irqs ? "en" : "dis");
-
-#ifdef CONFIG_SPARC
- ret = sunserial_register_minors(&serial8250_reg, UART_NR);
-#else
- serial8250_reg.nr = UART_NR;
- ret = uart_register_driver(&serial8250_reg);
-#endif
- if (ret)
- goto out;
-
- ret = serial8250_pnp_init();
- if (ret)
- goto unreg_uart_drv;
-
- serial8250_isa_devs = platform_device_alloc("serial8250",
- PLAT8250_DEV_LEGACY);
- if (!serial8250_isa_devs) {
- ret = -ENOMEM;
- goto unreg_pnp;
- }
-
- ret = platform_device_add(serial8250_isa_devs);
- if (ret)
- goto put_dev;
-
- serial8250_register_ports(&serial8250_reg, &serial8250_isa_devs->dev);
-
- ret = platform_driver_register(&serial8250_isa_driver);
- if (ret == 0)
- goto out;
-
- platform_device_del(serial8250_isa_devs);
-put_dev:
- platform_device_put(serial8250_isa_devs);
-unreg_pnp:
- serial8250_pnp_exit();
-unreg_uart_drv:
-#ifdef CONFIG_SPARC
- sunserial_unregister_minors(&serial8250_reg, UART_NR);
-#else
- uart_unregister_driver(&serial8250_reg);
-#endif
-out:
- return ret;
-}
-
-static void __exit serial8250_exit(void)
-{
- struct platform_device *isa_dev = serial8250_isa_devs;
-
- /*
- * This tells serial8250_unregister_port() not to re-register
- * the ports (thereby making serial8250_isa_driver permanently
- * in use).
- */
- serial8250_isa_devs = NULL;
-
- platform_driver_unregister(&serial8250_isa_driver);
- platform_device_unregister(isa_dev);
-
- serial8250_pnp_exit();
-
-#ifdef CONFIG_SPARC
- sunserial_unregister_minors(&serial8250_reg, UART_NR);
-#else
- uart_unregister_driver(&serial8250_reg);
-#endif
-}
-
-module_init(serial8250_init);
-module_exit(serial8250_exit);
-
-EXPORT_SYMBOL(serial8250_suspend_port);
-EXPORT_SYMBOL(serial8250_resume_port);
-
-MODULE_LICENSE("GPL");
-MODULE_DESCRIPTION("Generic 8250/16x50 serial driver");
-
-module_param(share_irqs, uint, 0644);
-MODULE_PARM_DESC(share_irqs, "Share IRQs with other non-8250/16x50 devices"
- " (unsafe)");
-
-module_param(nr_uarts, uint, 0644);
-MODULE_PARM_DESC(nr_uarts, "Maximum number of UARTs supported. (1-" __MODULE_STRING(CONFIG_SERIAL_8250_NR_UARTS) ")");
-
-module_param(skip_txen_test, uint, 0644);
-MODULE_PARM_DESC(skip_txen_test, "Skip checking for the TXEN bug at init time");
-
-#ifdef CONFIG_SERIAL_8250_RSA
-module_param_array(probe_rsa, ulong, &probe_rsa_count, 0444);
-MODULE_PARM_DESC(probe_rsa, "Probe I/O ports for RSA");
-#endif
-MODULE_ALIAS_CHARDEV_MAJOR(TTY_MAJOR);
-
-#ifndef MODULE
-/* This module was renamed to 8250_core in 3.7.  Keep the old "8250" name
- * working as well for the module options so we don't break people.  We
- * need to keep the names identical, but the convenience macros refuse to
- * let us do that, failing the build with redefinition errors for the
- * global variables.  So we stick them inside a dummy function to avoid
- * those conflicts.  The options still get parsed, and the redefined
- * MODULE_PARAM_PREFIX lets us keep the "8250." syntax alive.
- *
- * This is hacky. I'm sorry.
- */
-static void __used s8250_options(void)
-{
-#undef MODULE_PARAM_PREFIX
-#define MODULE_PARAM_PREFIX "8250."
-
-	module_param_cb(share_irqs, &param_ops_uint, &share_irqs, 0644);
-	module_param_cb(nr_uarts, &param_ops_uint, &nr_uarts, 0644);
-	module_param_cb(skip_txen_test, &param_ops_uint, &skip_txen_test, 0644);
-#ifdef CONFIG_SERIAL_8250_RSA
-	__module_param_call(MODULE_PARAM_PREFIX, probe_rsa,
-			    &param_array_ops, .arr = &__param_arr_probe_rsa,
-			    0444, -1);
-#endif
-}
-#else
-MODULE_ALIAS("8250");
-#endif
--- /dev/null
+/*
+ * Driver for 8250/16550-type serial ports
+ *
+ * Based on drivers/char/serial.c, by Linus Torvalds, Theodore Ts'o.
+ *
+ * Copyright (C) 2001 Russell King.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * A note about mapbase / membase
+ *
+ * mapbase is the physical address of the IO port.
+ * membase is an 'ioremapped' cookie.
+ */
+
+#if defined(CONFIG_SERIAL_8250_CONSOLE) && defined(CONFIG_MAGIC_SYSRQ)
+#define SUPPORT_SYSRQ
+#endif
+
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/ioport.h>
+#include <linux/init.h>
+#include <linux/console.h>
+#include <linux/sysrq.h>
+#include <linux/delay.h>
+#include <linux/platform_device.h>
+#include <linux/tty.h>
+#include <linux/ratelimit.h>
+#include <linux/tty_flip.h>
+#include <linux/serial_reg.h>
+#include <linux/serial_core.h>
+#include <linux/serial.h>
+#include <linux/serial_8250.h>
+#include <linux/nmi.h>
+#include <linux/mutex.h>
+#include <linux/slab.h>
+#ifdef CONFIG_SPARC
+#include <linux/sunserialcore.h>
+#endif
+
+#include <asm/io.h>
+#include <asm/irq.h>
+
+#include "8250.h"
+
+/*
+ * Configuration:
+ * share_irqs - whether we pass IRQF_SHARED to request_irq(). This option
+ * is unsafe when used on edge-triggered interrupts.
+ */
+static unsigned int share_irqs = SERIAL8250_SHARE_IRQS;
+
+static unsigned int nr_uarts = CONFIG_SERIAL_8250_RUNTIME_UARTS;
+
+static struct uart_driver serial8250_reg;
+
+static int serial_index(struct uart_port *port)
+{
+ return (serial8250_reg.minor - 64) + port->line;
+}
+
+static unsigned int skip_txen_test; /* force skip of txen test at init time */
+
+/*
+ * Debugging.
+ */
+#if 0
+#define DEBUG_AUTOCONF(fmt...) printk(fmt)
+#else
+#define DEBUG_AUTOCONF(fmt...) do { } while (0)
+#endif
+
+#if 0
+#define DEBUG_INTR(fmt...) printk(fmt)
+#else
+#define DEBUG_INTR(fmt...) do { } while (0)
+#endif
+
+#define PASS_LIMIT 512
+
+#define BOTH_EMPTY (UART_LSR_TEMT | UART_LSR_THRE)
+
+
+#ifdef CONFIG_SERIAL_8250_DETECT_IRQ
+#define CONFIG_SERIAL_DETECT_IRQ 1
+#endif
+#ifdef CONFIG_SERIAL_8250_MANY_PORTS
+#define CONFIG_SERIAL_MANY_PORTS 1
+#endif
+
+/*
+ * HUB6 is always on. This will be removed once the header
+ * files have been cleaned.
+ */
+#define CONFIG_HUB6 1
+
+#include <asm/serial.h>
+/*
+ * SERIAL_PORT_DFNS tells us about built-in ports that have no
+ * standard enumeration mechanism. Platforms that can find all
+ * serial ports via mechanisms like ACPI or PCI need not supply it.
+ */
+#ifndef SERIAL_PORT_DFNS
+#define SERIAL_PORT_DFNS
+#endif
+
+static const struct old_serial_port old_serial_port[] = {
+ SERIAL_PORT_DFNS /* defined in asm/serial.h */
+};
+
+#define UART_NR CONFIG_SERIAL_8250_NR_UARTS
+
+#ifdef CONFIG_SERIAL_8250_RSA
+
+#define PORT_RSA_MAX 4
+static unsigned long probe_rsa[PORT_RSA_MAX];
+static unsigned int probe_rsa_count;
+#endif /* CONFIG_SERIAL_8250_RSA */
+
+struct irq_info {
+ struct hlist_node node;
+ int irq;
+	spinlock_t		lock;	/* Protects the list, not the hash */
+ struct list_head *head;
+};
+
+#define NR_IRQ_HASH 32 /* Can be adjusted later */
+static struct hlist_head irq_lists[NR_IRQ_HASH];
+static DEFINE_MUTEX(hash_mutex); /* Used to walk the hash */
+
+/*
+ * Here we define the default xmit fifo size used for each type of UART.
+ */
+static const struct serial8250_config uart_config[] = {
+ [PORT_UNKNOWN] = {
+ .name = "unknown",
+ .fifo_size = 1,
+ .tx_loadsz = 1,
+ },
+ [PORT_8250] = {
+ .name = "8250",
+ .fifo_size = 1,
+ .tx_loadsz = 1,
+ },
+ [PORT_16450] = {
+ .name = "16450",
+ .fifo_size = 1,
+ .tx_loadsz = 1,
+ },
+ [PORT_16550] = {
+ .name = "16550",
+ .fifo_size = 1,
+ .tx_loadsz = 1,
+ },
+ [PORT_16550A] = {
+ .name = "16550A",
+ .fifo_size = 16,
+ .tx_loadsz = 16,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
+ .flags = UART_CAP_FIFO,
+ },
+ [PORT_CIRRUS] = {
+ .name = "Cirrus",
+ .fifo_size = 1,
+ .tx_loadsz = 1,
+ },
+ [PORT_16650] = {
+ .name = "ST16650",
+ .fifo_size = 1,
+ .tx_loadsz = 1,
+ .flags = UART_CAP_FIFO | UART_CAP_EFR | UART_CAP_SLEEP,
+ },
+ [PORT_16650V2] = {
+ .name = "ST16650V2",
+ .fifo_size = 32,
+ .tx_loadsz = 16,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_01 |
+ UART_FCR_T_TRIG_00,
+ .flags = UART_CAP_FIFO | UART_CAP_EFR | UART_CAP_SLEEP,
+ },
+ [PORT_16750] = {
+ .name = "TI16750",
+ .fifo_size = 64,
+ .tx_loadsz = 64,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10 |
+ UART_FCR7_64BYTE,
+ .flags = UART_CAP_FIFO | UART_CAP_SLEEP | UART_CAP_AFE,
+ },
+ [PORT_STARTECH] = {
+ .name = "Startech",
+ .fifo_size = 1,
+ .tx_loadsz = 1,
+ },
+ [PORT_16C950] = {
+ .name = "16C950/954",
+ .fifo_size = 128,
+ .tx_loadsz = 128,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
+		/* UART_CAP_EFR breaks the Billionton CF Bluetooth card. */
+ .flags = UART_CAP_FIFO | UART_CAP_SLEEP,
+ },
+ [PORT_16654] = {
+ .name = "ST16654",
+ .fifo_size = 64,
+ .tx_loadsz = 32,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_01 |
+ UART_FCR_T_TRIG_10,
+ .flags = UART_CAP_FIFO | UART_CAP_EFR | UART_CAP_SLEEP,
+ },
+ [PORT_16850] = {
+ .name = "XR16850",
+ .fifo_size = 128,
+ .tx_loadsz = 128,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
+ .flags = UART_CAP_FIFO | UART_CAP_EFR | UART_CAP_SLEEP,
+ },
+ [PORT_RSA] = {
+ .name = "RSA",
+ .fifo_size = 2048,
+ .tx_loadsz = 2048,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_11,
+ .flags = UART_CAP_FIFO,
+ },
+ [PORT_NS16550A] = {
+ .name = "NS16550A",
+ .fifo_size = 16,
+ .tx_loadsz = 16,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
+ .flags = UART_CAP_FIFO | UART_NATSEMI,
+ },
+ [PORT_XSCALE] = {
+ .name = "XScale",
+ .fifo_size = 32,
+ .tx_loadsz = 32,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
+ .flags = UART_CAP_FIFO | UART_CAP_UUE | UART_CAP_RTOIE,
+ },
+ [PORT_OCTEON] = {
+ .name = "OCTEON",
+ .fifo_size = 64,
+ .tx_loadsz = 64,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
+ .flags = UART_CAP_FIFO,
+ },
+ [PORT_AR7] = {
+ .name = "AR7",
+ .fifo_size = 16,
+ .tx_loadsz = 16,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_00,
+ .flags = UART_CAP_FIFO | UART_CAP_AFE,
+ },
+ [PORT_U6_16550A] = {
+ .name = "U6_16550A",
+ .fifo_size = 64,
+ .tx_loadsz = 64,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
+ .flags = UART_CAP_FIFO | UART_CAP_AFE,
+ },
+ [PORT_TEGRA] = {
+ .name = "Tegra",
+ .fifo_size = 32,
+ .tx_loadsz = 8,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_01 |
+ UART_FCR_T_TRIG_01,
+ .flags = UART_CAP_FIFO | UART_CAP_RTOIE,
+ },
+ [PORT_XR17D15X] = {
+ .name = "XR17D15X",
+ .fifo_size = 64,
+ .tx_loadsz = 64,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
+ .flags = UART_CAP_FIFO | UART_CAP_AFE | UART_CAP_EFR |
+ UART_CAP_SLEEP,
+ },
+ [PORT_XR17V35X] = {
+ .name = "XR17V35X",
+ .fifo_size = 256,
+ .tx_loadsz = 256,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_11 |
+ UART_FCR_T_TRIG_11,
+ .flags = UART_CAP_FIFO | UART_CAP_AFE | UART_CAP_EFR |
+ UART_CAP_SLEEP,
+ },
+ [PORT_LPC3220] = {
+ .name = "LPC3220",
+ .fifo_size = 64,
+ .tx_loadsz = 32,
+ .fcr = UART_FCR_DMA_SELECT | UART_FCR_ENABLE_FIFO |
+ UART_FCR_R_TRIG_00 | UART_FCR_T_TRIG_00,
+ .flags = UART_CAP_FIFO,
+ },
+ [PORT_BRCM_TRUMANAGE] = {
+ .name = "TruManage",
+ .fifo_size = 1,
+ .tx_loadsz = 1024,
+ .flags = UART_CAP_HFIFO,
+ },
+ [PORT_8250_CIR] = {
+ .name = "CIR port"
+ },
+ [PORT_ALTR_16550_F32] = {
+ .name = "Altera 16550 FIFO32",
+ .fifo_size = 32,
+ .tx_loadsz = 32,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
+ .flags = UART_CAP_FIFO | UART_CAP_AFE,
+ },
+ [PORT_ALTR_16550_F64] = {
+ .name = "Altera 16550 FIFO64",
+ .fifo_size = 64,
+ .tx_loadsz = 64,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
+ .flags = UART_CAP_FIFO | UART_CAP_AFE,
+ },
+ [PORT_ALTR_16550_F128] = {
+ .name = "Altera 16550 FIFO128",
+ .fifo_size = 128,
+ .tx_loadsz = 128,
+ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
+ .flags = UART_CAP_FIFO | UART_CAP_AFE,
+ },
+};
+
+/* UART divisor latch read */
+static int default_serial_dl_read(struct uart_8250_port *up)
+{
+ return serial_in(up, UART_DLL) | serial_in(up, UART_DLM) << 8;
+}
+
+/* UART divisor latch write */
+static void default_serial_dl_write(struct uart_8250_port *up, int value)
+{
+ serial_out(up, UART_DLL, value & 0xff);
+ serial_out(up, UART_DLM, value >> 8 & 0xff);
+}
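+
+/*
+ * The divisor relates clock and line speed as baud = uartclk / (16 * quot).
+ * For example (illustrative): with the common 1.8432 MHz clock, quot = 12
+ * gives 1843200 / (16 * 12) = 9600 baud, and quot = 1 gives 115200.
+ */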
+
+#if defined(CONFIG_MIPS_ALCHEMY) || defined(CONFIG_SERIAL_8250_RT288X)
+
+/* Au1x00/RT288x UART hardware has a weird register layout */
+static const u8 au_io_in_map[] = {
+ [UART_RX] = 0,
+ [UART_IER] = 2,
+ [UART_IIR] = 3,
+ [UART_LCR] = 5,
+ [UART_MCR] = 6,
+ [UART_LSR] = 7,
+ [UART_MSR] = 8,
+};
+
+static const u8 au_io_out_map[] = {
+ [UART_TX] = 1,
+ [UART_IER] = 2,
+ [UART_FCR] = 4,
+ [UART_LCR] = 5,
+ [UART_MCR] = 6,
+};
+
+static unsigned int au_serial_in(struct uart_port *p, int offset)
+{
+ offset = au_io_in_map[offset] << p->regshift;
+ return __raw_readl(p->membase + offset);
+}
+
+static void au_serial_out(struct uart_port *p, int offset, int value)
+{
+ offset = au_io_out_map[offset] << p->regshift;
+ __raw_writel(value, p->membase + offset);
+}
+
+/* The Au1x00 doesn't have a standard divisor latch */
+static int au_serial_dl_read(struct uart_8250_port *up)
+{
+ return __raw_readl(up->port.membase + 0x28);
+}
+
+static void au_serial_dl_write(struct uart_8250_port *up, int value)
+{
+ __raw_writel(value, up->port.membase + 0x28);
+}
+
+#endif
+
+static unsigned int hub6_serial_in(struct uart_port *p, int offset)
+{
+ offset = offset << p->regshift;
+ outb(p->hub6 - 1 + offset, p->iobase);
+ return inb(p->iobase + 1);
+}
+
+static void hub6_serial_out(struct uart_port *p, int offset, int value)
+{
+ offset = offset << p->regshift;
+ outb(p->hub6 - 1 + offset, p->iobase);
+ outb(value, p->iobase + 1);
+}
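+
+/*
+ * The HUB6 card multiplexes several UARTs behind a two-port window: the
+ * select value (p->hub6 - 1 + offset) written to iobase chooses the
+ * channel and register, and the data byte is then transferred through
+ * iobase + 1.
+ */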
+
+static unsigned int mem_serial_in(struct uart_port *p, int offset)
+{
+ offset = offset << p->regshift;
+ return readb(p->membase + offset);
+}
+
+static void mem_serial_out(struct uart_port *p, int offset, int value)
+{
+ offset = offset << p->regshift;
+ writeb(value, p->membase + offset);
+}
+
+static void mem32_serial_out(struct uart_port *p, int offset, int value)
+{
+ offset = offset << p->regshift;
+ writel(value, p->membase + offset);
+}
+
+static unsigned int mem32_serial_in(struct uart_port *p, int offset)
+{
+ offset = offset << p->regshift;
+ return readl(p->membase + offset);
+}
+
+static unsigned int io_serial_in(struct uart_port *p, int offset)
+{
+ offset = offset << p->regshift;
+ return inb(p->iobase + offset);
+}
+
+static void io_serial_out(struct uart_port *p, int offset, int value)
+{
+ offset = offset << p->regshift;
+ outb(value, p->iobase + offset);
+}
+
+static int serial8250_default_handle_irq(struct uart_port *port);
+static int exar_handle_irq(struct uart_port *port);
+
+static void set_io_from_upio(struct uart_port *p)
+{
+ struct uart_8250_port *up =
+ container_of(p, struct uart_8250_port, port);
+
+ up->dl_read = default_serial_dl_read;
+ up->dl_write = default_serial_dl_write;
+
+ switch (p->iotype) {
+ case UPIO_HUB6:
+ p->serial_in = hub6_serial_in;
+ p->serial_out = hub6_serial_out;
+ break;
+
+ case UPIO_MEM:
+ p->serial_in = mem_serial_in;
+ p->serial_out = mem_serial_out;
+ break;
+
+ case UPIO_MEM32:
+ p->serial_in = mem32_serial_in;
+ p->serial_out = mem32_serial_out;
+ break;
+
+#if defined(CONFIG_MIPS_ALCHEMY) || defined(CONFIG_SERIAL_8250_RT288X)
+ case UPIO_AU:
+ p->serial_in = au_serial_in;
+ p->serial_out = au_serial_out;
+ up->dl_read = au_serial_dl_read;
+ up->dl_write = au_serial_dl_write;
+ break;
+#endif
+
+ default:
+ p->serial_in = io_serial_in;
+ p->serial_out = io_serial_out;
+ break;
+ }
+ /* Remember loaded iotype */
+ up->cur_iotype = p->iotype;
+ p->handle_irq = serial8250_default_handle_irq;
+}
+
+static void
+serial_port_out_sync(struct uart_port *p, int offset, int value)
+{
+ switch (p->iotype) {
+ case UPIO_MEM:
+ case UPIO_MEM32:
+ case UPIO_AU:
+ p->serial_out(p, offset, value);
+ p->serial_in(p, UART_LCR); /* safe, no side-effects */
+ break;
+ default:
+ p->serial_out(p, offset, value);
+ }
+}
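+
+/*
+ * The LCR read-back above flushes posted writes on memory-mapped buses:
+ * reading a register with no read side-effects guarantees the preceding
+ * serial_out() has reached the device before, for instance, a following
+ * udelay() starts counting.
+ */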
+
+/*
+ * For the 16C950
+ */
+static void serial_icr_write(struct uart_8250_port *up, int offset, int value)
+{
+ serial_out(up, UART_SCR, offset);
+ serial_out(up, UART_ICR, value);
+}
+
+static unsigned int serial_icr_read(struct uart_8250_port *up, int offset)
+{
+ unsigned int value;
+
+ serial_icr_write(up, UART_ACR, up->acr | UART_ACR_ICRRD);
+ serial_out(up, UART_SCR, offset);
+ value = serial_in(up, UART_ICR);
+ serial_icr_write(up, UART_ACR, up->acr);
+
+ return value;
+}
+
+/*
+ * FIFO support.
+ */
+static void serial8250_clear_fifos(struct uart_8250_port *p)
+{
+ if (p->capabilities & UART_CAP_FIFO) {
+ serial_out(p, UART_FCR, UART_FCR_ENABLE_FIFO);
+ serial_out(p, UART_FCR, UART_FCR_ENABLE_FIFO |
+ UART_FCR_CLEAR_RCVR | UART_FCR_CLEAR_XMIT);
+ serial_out(p, UART_FCR, 0);
+ }
+}
+
+void serial8250_clear_and_reinit_fifos(struct uart_8250_port *p)
+{
+ unsigned char fcr;
+
+ serial8250_clear_fifos(p);
+ fcr = uart_config[p->port.type].fcr;
+ serial_out(p, UART_FCR, fcr);
+}
+EXPORT_SYMBOL_GPL(serial8250_clear_and_reinit_fifos);
+
+/*
+ * IER sleep support. UARTs which have EFRs need the "extended
+ * capability" bit enabled. Note that on XR16C850s, we need to
+ * reset LCR to write to IER.
+ */
+static void serial8250_set_sleep(struct uart_8250_port *p, int sleep)
+{
+ /*
+	 * Exar UARTs have a SLEEP register that enables or disables
+	 * sleep mode for each UART channel separately.  On the XR17V35x
+	 * the register is accessible to every channel at the
+	 * UART_EXAR_SLEEP offset, but each channel may only write to
+	 * its own corresponding bit.
+ */
+ if ((p->port.type == PORT_XR17V35X) ||
+ (p->port.type == PORT_XR17D15X)) {
+ serial_out(p, UART_EXAR_SLEEP, 0xff);
+ return;
+ }
+
+ if (p->capabilities & UART_CAP_SLEEP) {
+ if (p->capabilities & UART_CAP_EFR) {
+ serial_out(p, UART_LCR, UART_LCR_CONF_MODE_B);
+ serial_out(p, UART_EFR, UART_EFR_ECB);
+ serial_out(p, UART_LCR, 0);
+ }
+ serial_out(p, UART_IER, sleep ? UART_IERX_SLEEP : 0);
+ if (p->capabilities & UART_CAP_EFR) {
+ serial_out(p, UART_LCR, UART_LCR_CONF_MODE_B);
+ serial_out(p, UART_EFR, 0);
+ serial_out(p, UART_LCR, 0);
+ }
+ }
+}
+
+#ifdef CONFIG_SERIAL_8250_RSA
+/*
+ * Attempts to turn on the RSA FIFO. Returns zero on failure.
+ * We set the port uart clock rate if we succeed.
+ */
+static int __enable_rsa(struct uart_8250_port *up)
+{
+ unsigned char mode;
+ int result;
+
+ mode = serial_in(up, UART_RSA_MSR);
+ result = mode & UART_RSA_MSR_FIFO;
+
+ if (!result) {
+ serial_out(up, UART_RSA_MSR, mode | UART_RSA_MSR_FIFO);
+ mode = serial_in(up, UART_RSA_MSR);
+ result = mode & UART_RSA_MSR_FIFO;
+ }
+
+ if (result)
+ up->port.uartclk = SERIAL_RSA_BAUD_BASE * 16;
+
+ return result;
+}
+
+static void enable_rsa(struct uart_8250_port *up)
+{
+ if (up->port.type == PORT_RSA) {
+ if (up->port.uartclk != SERIAL_RSA_BAUD_BASE * 16) {
+ spin_lock_irq(&up->port.lock);
+ __enable_rsa(up);
+ spin_unlock_irq(&up->port.lock);
+ }
+ if (up->port.uartclk == SERIAL_RSA_BAUD_BASE * 16)
+ serial_out(up, UART_RSA_FRR, 0);
+ }
+}
+
+/*
+ * Attempts to turn off the RSA FIFO and, on success, restores the
+ * port clock to 115kbps operation.  It is unknown why interrupts were
+ * originally disabled in here; that behaviour is preserved by taking
+ * the port lock with interrupts disabled inside the function.
+ */
+static void disable_rsa(struct uart_8250_port *up)
+{
+ unsigned char mode;
+ int result;
+
+ if (up->port.type == PORT_RSA &&
+ up->port.uartclk == SERIAL_RSA_BAUD_BASE * 16) {
+ spin_lock_irq(&up->port.lock);
+
+ mode = serial_in(up, UART_RSA_MSR);
+ result = !(mode & UART_RSA_MSR_FIFO);
+
+ if (!result) {
+ serial_out(up, UART_RSA_MSR, mode & ~UART_RSA_MSR_FIFO);
+ mode = serial_in(up, UART_RSA_MSR);
+ result = !(mode & UART_RSA_MSR_FIFO);
+ }
+
+ if (result)
+ up->port.uartclk = SERIAL_RSA_BAUD_BASE_LO * 16;
+ spin_unlock_irq(&up->port.lock);
+ }
+}
+#endif /* CONFIG_SERIAL_8250_RSA */
+
+/*
+ * This is a quickie test to see how big the FIFO is.
+ * It doesn't work all the time, more's the pity.
+ */
+static int size_fifo(struct uart_8250_port *up)
+{
+ unsigned char old_fcr, old_mcr, old_lcr;
+ unsigned short old_dl;
+ int count;
+
+ old_lcr = serial_in(up, UART_LCR);
+ serial_out(up, UART_LCR, 0);
+ old_fcr = serial_in(up, UART_FCR);
+ old_mcr = serial_in(up, UART_MCR);
+ serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO |
+ UART_FCR_CLEAR_RCVR | UART_FCR_CLEAR_XMIT);
+ serial_out(up, UART_MCR, UART_MCR_LOOP);
+ serial_out(up, UART_LCR, UART_LCR_CONF_MODE_A);
+ old_dl = serial_dl_read(up);
+ serial_dl_write(up, 0x0001);
+ serial_out(up, UART_LCR, 0x03);
+ for (count = 0; count < 256; count++)
+ serial_out(up, UART_TX, count);
+ mdelay(20);/* FIXME - schedule_timeout */
+ for (count = 0; (serial_in(up, UART_LSR) & UART_LSR_DR) &&
+ (count < 256); count++)
+ serial_in(up, UART_RX);
+ serial_out(up, UART_FCR, old_fcr);
+ serial_out(up, UART_MCR, old_mcr);
+ serial_out(up, UART_LCR, UART_LCR_CONF_MODE_A);
+ serial_dl_write(up, old_dl);
+ serial_out(up, UART_LCR, old_lcr);
+
+ return count;
+}
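+
+/*
+ * For example (illustrative): with the UART in MCR loopback and divisor 1,
+ * the 256 bytes written above wrap straight back into the receive FIFO, so
+ * the number of bytes read back approximates the FIFO depth - typically 16
+ * on a 16550A and 64 on a 16654.
+ */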
+
+/*
+ * Read UART ID using the divisor method - set DLL and DLM to zero
+ * and the revision will be in DLL and device type in DLM. We
+ * preserve the device state across this.
+ */
+static unsigned int autoconfig_read_divisor_id(struct uart_8250_port *p)
+{
+ unsigned char old_dll, old_dlm, old_lcr;
+ unsigned int id;
+
+ old_lcr = serial_in(p, UART_LCR);
+ serial_out(p, UART_LCR, UART_LCR_CONF_MODE_A);
+
+ old_dll = serial_in(p, UART_DLL);
+ old_dlm = serial_in(p, UART_DLM);
+
+ serial_out(p, UART_DLL, 0);
+ serial_out(p, UART_DLM, 0);
+
+ id = serial_in(p, UART_DLL) | serial_in(p, UART_DLM) << 8;
+
+ serial_out(p, UART_DLL, old_dll);
+ serial_out(p, UART_DLM, old_dlm);
+ serial_out(p, UART_LCR, old_lcr);
+
+ return id;
+}
+
+/*
+ * This is a helper routine to autodetect StarTech/Exar/Oxsemi UARTs.
+ * When this function is called we know it is at least a StarTech
+ * 16650 V2, but it might be one of several StarTech UARTs, or one of
+ * its clones. (We treat the broken original StarTech 16650 V1 as a
+ * 16550, and why not? Startech doesn't seem to even acknowledge its
+ * existence.)
+ *
+ * What evil have men's minds wrought...
+ */
+static void autoconfig_has_efr(struct uart_8250_port *up)
+{
+ unsigned int id1, id2, id3, rev;
+
+ /*
+ * Everything with an EFR has SLEEP
+ */
+ up->capabilities |= UART_CAP_EFR | UART_CAP_SLEEP;
+
+ /*
+ * First we check to see if it's an Oxford Semiconductor UART.
+ *
+	 * We have to do this here because some non-National
+	 * Semiconductor clone chips lock up if you try writing to the
+	 * LSR register (which serial_icr_read() does).
+ */
+
+ /*
+ * Check for Oxford Semiconductor 16C950.
+ *
+ * EFR [4] must be set else this test fails.
+ *
+ * This shouldn't be necessary, but Mike Hudson (Exoray@isys.ca)
+	 * claims that it's needed for 952 dual UARTs (which are not
+ * recommended for new designs).
+ */
+ up->acr = 0;
+ serial_out(up, UART_LCR, UART_LCR_CONF_MODE_B);
+ serial_out(up, UART_EFR, UART_EFR_ECB);
+ serial_out(up, UART_LCR, 0x00);
+ id1 = serial_icr_read(up, UART_ID1);
+ id2 = serial_icr_read(up, UART_ID2);
+ id3 = serial_icr_read(up, UART_ID3);
+ rev = serial_icr_read(up, UART_REV);
+
+ DEBUG_AUTOCONF("950id=%02x:%02x:%02x:%02x ", id1, id2, id3, rev);
+
+ if (id1 == 0x16 && id2 == 0xC9 &&
+ (id3 == 0x50 || id3 == 0x52 || id3 == 0x54)) {
+ up->port.type = PORT_16C950;
+
+ /*
+		 * Enable the workaround for the Oxford Semiconductor 952
+		 * rev B chip, which seriously miscalculates baud rates
+		 * when DLL is 0.
+ */
+ if (id3 == 0x52 && rev == 0x01)
+ up->bugs |= UART_BUG_QUOT;
+ return;
+ }
+
+ /*
+ * We check for a XR16C850 by setting DLL and DLM to 0, and then
+ * reading back DLL and DLM. The chip type depends on the DLM
+ * value read back:
+ * 0x10 - XR16C850 and the DLL contains the chip revision.
+ * 0x12 - XR16C2850.
+ * 0x14 - XR16C854.
+ */
+ id1 = autoconfig_read_divisor_id(up);
+ DEBUG_AUTOCONF("850id=%04x ", id1);
+
+ id2 = id1 >> 8;
+ if (id2 == 0x10 || id2 == 0x12 || id2 == 0x14) {
+ up->port.type = PORT_16850;
+ return;
+ }
+
+ /*
+ * It wasn't an XR16C850.
+ *
+ * We distinguish between the '654 and the '650 by counting
+ * how many bytes are in the FIFO. I'm using this for now,
+ * since that's the technique that was sent to me in the
+ * serial driver update, but I'm not convinced this works.
+ * I've had problems doing this in the past. -TYT
+ */
+ if (size_fifo(up) == 64)
+ up->port.type = PORT_16654;
+ else
+ up->port.type = PORT_16650V2;
+}
+
+/*
+ * We detected a chip without a FIFO. Only two fall into
+ * this category - the original 8250 and the 16450. The
+ * 16450 has a scratch register (accessible with LCR=0).
+ */
+static void autoconfig_8250(struct uart_8250_port *up)
+{
+ unsigned char scratch, status1, status2;
+
+ up->port.type = PORT_8250;
+
+ scratch = serial_in(up, UART_SCR);
+ serial_out(up, UART_SCR, 0xa5);
+ status1 = serial_in(up, UART_SCR);
+ serial_out(up, UART_SCR, 0x5a);
+ status2 = serial_in(up, UART_SCR);
+ serial_out(up, UART_SCR, scratch);
+
+ if (status1 == 0xa5 && status2 == 0x5a)
+ up->port.type = PORT_16450;
+}
+
+static int broken_efr(struct uart_8250_port *up)
+{
+ /*
+ * Exar ST16C2550 "A2" devices incorrectly detect as
+ * having an EFR, and report an ID of 0x0201. See
+ * http://linux.derkeiler.com/Mailing-Lists/Kernel/2004-11/4812.html
+ */
+ if (autoconfig_read_divisor_id(up) == 0x0201 && size_fifo(up) == 16)
+ return 1;
+
+ return 0;
+}
+
+static inline int ns16550a_goto_highspeed(struct uart_8250_port *up)
+{
+ unsigned char status;
+
+ status = serial_in(up, 0x04); /* EXCR2 */
+#define PRESL(x) ((x) & 0x30)
+ if (PRESL(status) == 0x10) {
+ /* already in high speed mode */
+ return 0;
+ } else {
+ status &= ~0xB0; /* Disable LOCK, mask out PRESL[01] */
+ status |= 0x10; /* 1.625 divisor for baud_base --> 921600 */
+ serial_out(up, 0x04, status);
+ }
+ return 1;
+}
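+
+/*
+ * Switching to the 1.625 divisor raises baud_base from 115200 to 921600,
+ * an 8x increase, so a caller that was already running must scale its
+ * divisor by 8 to keep the same line speed (see the quot <<= 3 in
+ * autoconfig_16550a() below).
+ */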
+
+/*
+ * We know that the chip has FIFOs. Does it have an EFR? The
+ * EFR is located in the same register position as the IIR and
+ * we know the top two bits of the IIR are currently set. The
+ * EFR should contain zero. Try to read the EFR.
+ */
+static void autoconfig_16550a(struct uart_8250_port *up)
+{
+ unsigned char status1, status2;
+ unsigned int iersave;
+
+ up->port.type = PORT_16550A;
+ up->capabilities |= UART_CAP_FIFO;
+
+ /*
+	 * XR17V35x UARTs have an extra divisor register, DLD, that
+	 * gets enabled when DLAB is set.  This would cause the device
+	 * to incorrectly match and be assigned port type PORT_16650.
+	 * The EFR for this UART is found at offset 0x09 instead, so
+	 * check the Device ID (DVID) register for a 2-, 4- or 8-port
+	 * UART.
+ */
+ if (up->port.flags & UPF_EXAR_EFR) {
+ status1 = serial_in(up, UART_EXAR_DVID);
+ if (status1 == 0x82 || status1 == 0x84 || status1 == 0x88) {
+ DEBUG_AUTOCONF("Exar XR17V35x ");
+ up->port.type = PORT_XR17V35X;
+ up->capabilities |= UART_CAP_AFE | UART_CAP_EFR |
+ UART_CAP_SLEEP;
+
+ return;
+ }
+ }
+
+ /*
+ * Check for presence of the EFR when DLAB is set.
+ * Only ST16C650V1 UARTs pass this test.
+ */
+ serial_out(up, UART_LCR, UART_LCR_CONF_MODE_A);
+ if (serial_in(up, UART_EFR) == 0) {
+ serial_out(up, UART_EFR, 0xA8);
+ if (serial_in(up, UART_EFR) != 0) {
+ DEBUG_AUTOCONF("EFRv1 ");
+ up->port.type = PORT_16650;
+ up->capabilities |= UART_CAP_EFR | UART_CAP_SLEEP;
+ } else {
+ DEBUG_AUTOCONF("Motorola 8xxx DUART ");
+ }
+ serial_out(up, UART_EFR, 0);
+ return;
+ }
+
+ /*
+ * Maybe it requires 0xbf to be written to the LCR
+ * (true of other ST16C650V2 UARTs, the TI16C752A, etc.).
+ */
+ serial_out(up, UART_LCR, UART_LCR_CONF_MODE_B);
+ if (serial_in(up, UART_EFR) == 0 && !broken_efr(up)) {
+ DEBUG_AUTOCONF("EFRv2 ");
+ autoconfig_has_efr(up);
+ return;
+ }
+
+ /*
+ * Check for a National Semiconductor SuperIO chip.
+ * Attempt to switch to bank 2, read the value of the LOOP bit
+ * from EXCR1. Switch back to bank 0, change it in MCR. Then
+ * switch back to bank 2, read it from EXCR1 again and check
+ * it's changed. If so, set baud_base in EXCR2 to 921600. -- dwmw2
+ */
+ serial_out(up, UART_LCR, 0);
+ status1 = serial_in(up, UART_MCR);
+ serial_out(up, UART_LCR, 0xE0);
+ status2 = serial_in(up, 0x02); /* EXCR1 */
+
+ if (!((status2 ^ status1) & UART_MCR_LOOP)) {
+ serial_out(up, UART_LCR, 0);
+ serial_out(up, UART_MCR, status1 ^ UART_MCR_LOOP);
+ serial_out(up, UART_LCR, 0xE0);
+ status2 = serial_in(up, 0x02); /* EXCR1 */
+ serial_out(up, UART_LCR, 0);
+ serial_out(up, UART_MCR, status1);
+
+ if ((status2 ^ status1) & UART_MCR_LOOP) {
+ unsigned short quot;
+
+ serial_out(up, UART_LCR, 0xE0);
+
+ quot = serial_dl_read(up);
+ quot <<= 3;
+
+ if (ns16550a_goto_highspeed(up))
+ serial_dl_write(up, quot);
+
+ serial_out(up, UART_LCR, 0);
+
+ up->port.uartclk = 921600*16;
+ up->port.type = PORT_NS16550A;
+ up->capabilities |= UART_NATSEMI;
+ return;
+ }
+ }
+
+ /*
+ * No EFR. Try to detect a TI16750, which only sets bit 5 of
+ * the IIR when 64 byte FIFO mode is enabled when DLAB is set.
+ * Try setting it with and without DLAB set. Cheap clones
+ * set bit 5 without DLAB set.
+ */
+ serial_out(up, UART_LCR, 0);
+ serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO | UART_FCR7_64BYTE);
+ status1 = serial_in(up, UART_IIR) >> 5;
+ serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO);
+ serial_out(up, UART_LCR, UART_LCR_CONF_MODE_A);
+ serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO | UART_FCR7_64BYTE);
+ status2 = serial_in(up, UART_IIR) >> 5;
+ serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO);
+ serial_out(up, UART_LCR, 0);
+
+ DEBUG_AUTOCONF("iir1=%d iir2=%d ", status1, status2);
+
+ if (status1 == 6 && status2 == 7) {
+ up->port.type = PORT_16750;
+ up->capabilities |= UART_CAP_AFE | UART_CAP_SLEEP;
+ return;
+ }
+
+ /*
+ * Try writing and reading the UART_IER_UUE bit (b6).
+ * If it works, this is probably one of the Xscale platform's
+ * internal UARTs.
+ * We're going to explicitly set the UUE bit to 0 before
+ * trying to write and read a 1 just to make sure it's not
+ * already a 1 and maybe locked there before we even start.
+ */
+ iersave = serial_in(up, UART_IER);
+ serial_out(up, UART_IER, iersave & ~UART_IER_UUE);
+ if (!(serial_in(up, UART_IER) & UART_IER_UUE)) {
+ /*
+ * OK it's in a known zero state, try writing and reading
+ * without disturbing the current state of the other bits.
+ */
+ serial_out(up, UART_IER, iersave | UART_IER_UUE);
+ if (serial_in(up, UART_IER) & UART_IER_UUE) {
+ /*
+ * It's an Xscale.
+ * We'll leave the UART_IER_UUE bit set to 1 (enabled).
+ */
+ DEBUG_AUTOCONF("Xscale ");
+ up->port.type = PORT_XSCALE;
+ up->capabilities |= UART_CAP_UUE | UART_CAP_RTOIE;
+ return;
+ }
+ } else {
+ /*
+ * If we got here we couldn't force the IER_UUE bit to 0.
+ * Log it and continue.
+ */
+ DEBUG_AUTOCONF("Couldn't force IER_UUE to 0 ");
+ }
+ serial_out(up, UART_IER, iersave);
+
+ /*
+ * Exar uarts have EFR in a weird location
+ */
+ if (up->port.flags & UPF_EXAR_EFR) {
+ DEBUG_AUTOCONF("Exar XR17D15x ");
+ up->port.type = PORT_XR17D15X;
+ up->capabilities |= UART_CAP_AFE | UART_CAP_EFR |
+ UART_CAP_SLEEP;
+
+ return;
+ }
+
+ /*
+ * We distinguish between 16550A and U6 16550A by counting
+ * how many bytes are in the FIFO.
+ */
+ if (up->port.type == PORT_16550A && size_fifo(up) == 64) {
+ up->port.type = PORT_U6_16550A;
+ up->capabilities |= UART_CAP_AFE;
+ }
+}
+
+/*
+ * This routine is called by rs_init() to initialize a specific serial
+ * port. It determines what type of UART chip this serial port is
+ * using: 8250, 16450, 16550, 16550A. The important question is
+ * whether this UART is a 16550A, since that determines whether
+ * we can use its FIFO features.
+ */
+static void autoconfig(struct uart_8250_port *up, unsigned int probeflags)
+{
+ unsigned char status1, scratch, scratch2, scratch3;
+ unsigned char save_lcr, save_mcr;
+ struct uart_port *port = &up->port;
+ unsigned long flags;
+ unsigned int old_capabilities;
+
+ if (!port->iobase && !port->mapbase && !port->membase)
+ return;
+
+ DEBUG_AUTOCONF("ttyS%d: autoconf (0x%04lx, 0x%p): ",
+ serial_index(port), port->iobase, port->membase);
+
+ /*
+ * We really do need global IRQs disabled here - we're going to
+ * be frobbing the chip's IRQ enable register to see if it exists.
+ */
+ spin_lock_irqsave(&port->lock, flags);
+
+ up->capabilities = 0;
+ up->bugs = 0;
+
+ if (!(port->flags & UPF_BUGGY_UART)) {
+ /*
+ * Do a simple existence test first; if we fail this,
+ * there's no point trying anything else.
+ *
+ * 0x80 is used as a nonsense port to guard against
+ * false positives due to ISA bus float. The
+ * assumption is that 0x80 is a non-existent port,
+ * which should be safe since include/asm/io.h also
+ * makes this assumption.
+ *
+ * Note: this is safe as long as MCR bit 4 is clear
+ * and the device is in "PC" mode.
+ */
+ scratch = serial_in(up, UART_IER);
+ serial_out(up, UART_IER, 0);
+#ifdef __i386__
+ outb(0xff, 0x080);
+#endif
+ /*
+ * Mask out IER[7:4] bits for test as some UARTs (e.g. TL
+ * 16C754B) allow them to be modified only if an EFR bit is set.
+ */
+ scratch2 = serial_in(up, UART_IER) & 0x0f;
+ serial_out(up, UART_IER, 0x0F);
+#ifdef __i386__
+ outb(0, 0x080);
+#endif
+ scratch3 = serial_in(up, UART_IER) & 0x0f;
+ serial_out(up, UART_IER, scratch);
+ if (scratch2 != 0 || scratch3 != 0x0F) {
+ /*
+ * We failed; there's nothing here
+ */
+ spin_unlock_irqrestore(&port->lock, flags);
+ DEBUG_AUTOCONF("IER test failed (%02x, %02x) ",
+ scratch2, scratch3);
+ goto out;
+ }
+ }
+
+ save_mcr = serial_in(up, UART_MCR);
+ save_lcr = serial_in(up, UART_LCR);
+
+ /*
+ * Check to see if a UART is really there. Certain broken
+ * internal modems based on the Rockwell chipset fail this
+ * test, because they apparently don't implement the loopback
+ * test mode. So this test is skipped on the COM 1 through
+ * COM 4 ports. This *should* be safe, since no board
+ * manufacturer would be stupid enough to design a board
+ * that conflicts with COM 1-4 --- we hope!
+ */
+ if (!(port->flags & UPF_SKIP_TEST)) {
+ serial_out(up, UART_MCR, UART_MCR_LOOP | 0x0A);
+ status1 = serial_in(up, UART_MSR) & 0xF0;
+ serial_out(up, UART_MCR, save_mcr);
+ if (status1 != 0x90) {
+ spin_unlock_irqrestore(&port->lock, flags);
+ DEBUG_AUTOCONF("LOOP test failed (%02x) ",
+ status1);
+ goto out;
+ }
+ }
+
+ /*
+ * We're pretty sure there's a port here. Let's find out what
+ * type of port it is. The top two bits of the IIR allow us to find
+ * out if it's 8250 or 16450, 16550, 16550A or later. This
+ * determines what we test for next.
+ *
+ * We also initialise the EFR (if any) to zero for later. The
+ * EFR occupies the same register location as the FCR and IIR.
+ */
+ serial_out(up, UART_LCR, UART_LCR_CONF_MODE_B);
+ serial_out(up, UART_EFR, 0);
+ serial_out(up, UART_LCR, 0);
+
+ serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO);
+ scratch = serial_in(up, UART_IIR) >> 6;
+
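+ /*
+ * IIR[7:6] after enabling the FIFO: 0 - no FIFO present (8250 or
+ * 16450), 1 - invalid, 2 - FIFO present but unusable (16550),
+ * 3 - FIFO working (16550A or later).
+ */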
+ switch (scratch) {
+ case 0:
+ autoconfig_8250(up);
+ break;
+ case 1:
+ port->type = PORT_UNKNOWN;
+ break;
+ case 2:
+ port->type = PORT_16550;
+ break;
+ case 3:
+ autoconfig_16550a(up);
+ break;
+ }
+
+#ifdef CONFIG_SERIAL_8250_RSA
+ /*
+ * Only probe for RSA ports if we got the region.
+ */
+ if (port->type == PORT_16550A && probeflags & PROBE_RSA) {
+ int i;
+
+ for (i = 0 ; i < probe_rsa_count; ++i) {
+ if (probe_rsa[i] == port->iobase && __enable_rsa(up)) {
+ port->type = PORT_RSA;
+ break;
+ }
+ }
+ }
+#endif
+
+ serial_out(up, UART_LCR, save_lcr);
+
+ port->fifosize = uart_config[up->port.type].fifo_size;
+ old_capabilities = up->capabilities;
+ up->capabilities = uart_config[port->type].flags;
+ up->tx_loadsz = uart_config[port->type].tx_loadsz;
+
+ if (port->type == PORT_UNKNOWN)
+ goto out_lock;
+
+ /*
+ * Reset the UART.
+ */
+#ifdef CONFIG_SERIAL_8250_RSA
+ if (port->type == PORT_RSA)
+ serial_out(up, UART_RSA_FRR, 0);
+#endif
+ serial_out(up, UART_MCR, save_mcr);
+ serial8250_clear_fifos(up);
+ serial_in(up, UART_RX);
+ if (up->capabilities & UART_CAP_UUE)
+ serial_out(up, UART_IER, UART_IER_UUE);
+ else
+ serial_out(up, UART_IER, 0);
+
+out_lock:
+ spin_unlock_irqrestore(&port->lock, flags);
+ if (up->capabilities != old_capabilities) {
+ printk(KERN_WARNING
+ "ttyS%d: detected caps %08x should be %08x\n",
+ serial_index(port), old_capabilities,
+ up->capabilities);
+ }
+out:
+ DEBUG_AUTOCONF("iir=%d ", scratch);
+ DEBUG_AUTOCONF("type=%s\n", uart_config[port->type].name);
+}
+
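+/*
+ * Work out which IRQ line the port is wired to: enable all UART
+ * interrupt sources, provoke an interrupt by toggling the modem
+ * control outputs and stuffing the TX register, and let the kernel's
+ * probe_irq_on()/probe_irq_off() pair report which line fired.
+ */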
+static void autoconfig_irq(struct uart_8250_port *up)
+{
+ struct uart_port *port = &up->port;
+ unsigned char save_mcr, save_ier;
+ unsigned char save_ICP = 0;
+ unsigned int ICP = 0;
+ unsigned long irqs;
+ int irq;
+
+ if (port->flags & UPF_FOURPORT) {
+ ICP = (port->iobase & 0xfe0) | 0x1f;
+ save_ICP = inb_p(ICP);
+ outb_p(0x80, ICP);
+ inb_p(ICP);
+ }
+
+ /* forget any initially masked and pending IRQ */
+ probe_irq_off(probe_irq_on());
+ save_mcr = serial_in(up, UART_MCR);
+ save_ier = serial_in(up, UART_IER);
+ serial_out(up, UART_MCR, UART_MCR_OUT1 | UART_MCR_OUT2);
+
+ irqs = probe_irq_on();
+ serial_out(up, UART_MCR, 0);
+ udelay(10);
+ if (port->flags & UPF_FOURPORT) {
+ serial_out(up, UART_MCR,
+ UART_MCR_DTR | UART_MCR_RTS);
+ } else {
+ serial_out(up, UART_MCR,
+ UART_MCR_DTR | UART_MCR_RTS | UART_MCR_OUT2);
+ }
+ serial_out(up, UART_IER, 0x0f); /* enable all intrs */
+ serial_in(up, UART_LSR);
+ serial_in(up, UART_RX);
+ serial_in(up, UART_IIR);
+ serial_in(up, UART_MSR);
+ serial_out(up, UART_TX, 0xFF);
+ udelay(20);
+ irq = probe_irq_off(irqs);
+
+ serial_out(up, UART_MCR, save_mcr);
+ serial_out(up, UART_IER, save_ier);
+
+ if (port->flags & UPF_FOURPORT)
+ outb_p(save_ICP, ICP);
+
+ port->irq = (irq > 0) ? irq : 0;
+}
+
+static inline void __stop_tx(struct uart_8250_port *p)
+{
+ if (p->ier & UART_IER_THRI) {
+ p->ier &= ~UART_IER_THRI;
+ serial_out(p, UART_IER, p->ier);
+ }
+}
+
+static void serial8250_stop_tx(struct uart_port *port)
+{
+ struct uart_8250_port *up =
+ container_of(port, struct uart_8250_port, port);
+
+ __stop_tx(up);
+
+ /*
+ * We really want to stop the transmitter from sending.
+ */
+ if (port->type == PORT_16C950) {
+ up->acr |= UART_ACR_TXDIS;
+ serial_icr_write(up, UART_ACR, up->acr);
+ }
+}
+
+static void serial8250_start_tx(struct uart_port *port)
+{
+ struct uart_8250_port *up =
+ container_of(port, struct uart_8250_port, port);
+
+ if (up->dma && !serial8250_tx_dma(up)) {
+ return;
+ } else if (!(up->ier & UART_IER_THRI)) {
+ up->ier |= UART_IER_THRI;
+ serial_port_out(port, UART_IER, up->ier);
+
+ if (up->bugs & UART_BUG_TXEN) {
+ unsigned char lsr;
+ lsr = serial_in(up, UART_LSR);
+ up->lsr_saved_flags |= lsr & LSR_SAVE_FLAGS;
+ if (lsr & UART_LSR_TEMT)
+ serial8250_tx_chars(up);
+ }
+ }
+
+ /*
+ * Re-enable the transmitter if we disabled it.
+ */
+ if (port->type == PORT_16C950 && up->acr & UART_ACR_TXDIS) {
+ up->acr &= ~UART_ACR_TXDIS;
+ serial_icr_write(up, UART_ACR, up->acr);
+ }
+}
+
+static void serial8250_stop_rx(struct uart_port *port)
+{
+ struct uart_8250_port *up =
+ container_of(port, struct uart_8250_port, port);
+
+ up->ier &= ~UART_IER_RLSI;
+ up->port.read_status_mask &= ~UART_LSR_DR;
+ serial_port_out(port, UART_IER, up->ier);
+}
+
+static void serial8250_enable_ms(struct uart_port *port)
+{
+ struct uart_8250_port *up =
+ container_of(port, struct uart_8250_port, port);
+
+ /* no MSR capabilities */
+ if (up->bugs & UART_BUG_NOMSR)
+ return;
+
+ up->ier |= UART_IER_MSI;
+ serial_port_out(port, UART_IER, up->ier);
+}
+
+/*
+ * serial8250_rx_chars: processes characters according to the passed-in
+ * LSR value, and returns the remaining LSR bits not handled
+ * by this Rx routine.
+ */
+unsigned char
+serial8250_rx_chars(struct uart_8250_port *up, unsigned char lsr)
+{
+ struct uart_port *port = &up->port;
+ unsigned char ch;
+ int max_count = 256;
+ char flag;
+
+ do {
+ if (likely(lsr & UART_LSR_DR))
+ ch = serial_in(up, UART_RX);
+ else
+ /*
+ * The Intel 82571 has a Serial Over LAN device that
+ * will set UART_LSR_BI without setting UART_LSR_DR
+ * when it receives a break. To avoid reading from
+ * the receive buffer without the UART_LSR_DR bit
+ * set, we just force the read character to be 0.
+ */
+ ch = 0;
+
+ flag = TTY_NORMAL;
+ port->icount.rx++;
+
+ lsr |= up->lsr_saved_flags;
+ up->lsr_saved_flags = 0;
+
+ if (unlikely(lsr & UART_LSR_BRK_ERROR_BITS)) {
+ if (lsr & UART_LSR_BI) {
+ lsr &= ~(UART_LSR_FE | UART_LSR_PE);
+ port->icount.brk++;
+ /*
+ * We do the SysRQ and SAK checking
+ * here because otherwise the break
+ * may get masked by ignore_status_mask
+ * or read_status_mask.
+ */
+ if (uart_handle_break(port))
+ goto ignore_char;
+ } else if (lsr & UART_LSR_PE)
+ port->icount.parity++;
+ else if (lsr & UART_LSR_FE)
+ port->icount.frame++;
+ if (lsr & UART_LSR_OE)
+ port->icount.overrun++;
+
+ /*
+ * Mask off conditions which should be ignored.
+ */
+ lsr &= port->read_status_mask;
+
+ if (lsr & UART_LSR_BI) {
+ DEBUG_INTR("handling break....");
+ flag = TTY_BREAK;
+ } else if (lsr & UART_LSR_PE)
+ flag = TTY_PARITY;
+ else if (lsr & UART_LSR_FE)
+ flag = TTY_FRAME;
+ }
+ if (uart_handle_sysrq_char(port, ch))
+ goto ignore_char;
+
+ uart_insert_char(port, lsr, UART_LSR_OE, ch, flag);
+
+ignore_char:
+ lsr = serial_in(up, UART_LSR);
+ } while ((lsr & (UART_LSR_DR | UART_LSR_BI)) && (max_count-- > 0));
+ spin_unlock(&port->lock);
+ tty_flip_buffer_push(&port->state->port);
+ spin_lock(&port->lock);
+ return lsr;
+}
+EXPORT_SYMBOL_GPL(serial8250_rx_chars);
+
+void serial8250_tx_chars(struct uart_8250_port *up)
+{
+ struct uart_port *port = &up->port;
+ struct circ_buf *xmit = &port->state->xmit;
+ int count;
+
+ if (port->x_char) {
+ serial_out(up, UART_TX, port->x_char);
+ port->icount.tx++;
+ port->x_char = 0;
+ return;
+ }
+ if (uart_tx_stopped(port)) {
+ serial8250_stop_tx(port);
+ return;
+ }
+ if (uart_circ_empty(xmit)) {
+ __stop_tx(up);
+ return;
+ }
+
+ count = up->tx_loadsz;
+ do {
+ serial_out(up, UART_TX, xmit->buf[xmit->tail]);
+ xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1);
+ port->icount.tx++;
+ if (uart_circ_empty(xmit))
+ break;
+ if (up->capabilities & UART_CAP_HFIFO) {
+ if ((serial_port_in(port, UART_LSR) & BOTH_EMPTY) !=
+ BOTH_EMPTY)
+ break;
+ }
+ } while (--count > 0);
+
+ if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
+ uart_write_wakeup(port);
+
+ DEBUG_INTR("THRE...");
+
+ if (uart_circ_empty(xmit))
+ __stop_tx(up);
+}
+EXPORT_SYMBOL_GPL(serial8250_tx_chars);
+
+unsigned int serial8250_modem_status(struct uart_8250_port *up)
+{
+ struct uart_port *port = &up->port;
+ unsigned int status = serial_in(up, UART_MSR);
+
+ status |= up->msr_saved_flags;
+ up->msr_saved_flags = 0;
+ if (status & UART_MSR_ANY_DELTA && up->ier & UART_IER_MSI &&
+ port->state != NULL) {
+ if (status & UART_MSR_TERI)
+ port->icount.rng++;
+ if (status & UART_MSR_DDSR)
+ port->icount.dsr++;
+ if (status & UART_MSR_DDCD)
+ uart_handle_dcd_change(port, status & UART_MSR_DCD);
+ if (status & UART_MSR_DCTS)
+ uart_handle_cts_change(port, status & UART_MSR_CTS);
+
+ wake_up_interruptible(&port->state->port.delta_msr_wait);
+ }
+
+ return status;
+}
+EXPORT_SYMBOL_GPL(serial8250_modem_status);
+
+/*
+ * This handles the interrupt from one port.
+ */
+int serial8250_handle_irq(struct uart_port *port, unsigned int iir)
+{
+ unsigned char status;
+ unsigned long flags;
+ struct uart_8250_port *up =
+ container_of(port, struct uart_8250_port, port);
+ int dma_err = 0;
+
+ if (iir & UART_IIR_NO_INT)
+ return 0;
+
+ spin_lock_irqsave(&port->lock, flags);
+
+ status = serial_port_in(port, UART_LSR);
+
+ DEBUG_INTR("status = %x...", status);
+
+ if (status & (UART_LSR_DR | UART_LSR_BI)) {
+ if (up->dma)
+ dma_err = serial8250_rx_dma(up, iir);
+
+ if (!up->dma || dma_err)
+ status = serial8250_rx_chars(up, status);
+ }
+ serial8250_modem_status(up);
+ if (status & UART_LSR_THRE)
+ serial8250_tx_chars(up);
+
+ spin_unlock_irqrestore(&port->lock, flags);
+ return 1;
+}
+EXPORT_SYMBOL_GPL(serial8250_handle_irq);
+
+static int serial8250_default_handle_irq(struct uart_port *port)
+{
+ unsigned int iir = serial_port_in(port, UART_IIR);
+
+ return serial8250_handle_irq(port, iir);
+}
+
+/*
+ * These Exar UARTs have an extra interrupt indicator that could
+ * fire for a few unimplemented interrupts, one of which is a
+ * wakeup event when coming out of sleep. Read the extra
+ * registers here just to be safe, so these interrupts don't go
+ * unhandled.
+ */
+static int exar_handle_irq(struct uart_port *port)
+{
+ unsigned char int0, int1, int2, int3;
+ unsigned int iir = serial_port_in(port, UART_IIR);
+ int ret;
+
+ ret = serial8250_handle_irq(port, iir);
+
+ if ((port->type == PORT_XR17V35X) ||
+ (port->type == PORT_XR17D15X)) {
+ int0 = serial_port_in(port, 0x80);
+ int1 = serial_port_in(port, 0x81);
+ int2 = serial_port_in(port, 0x82);
+ int3 = serial_port_in(port, 0x83);
+ }
+
+ return ret;
+}
+
+/*
+ * This is the serial driver's interrupt routine.
+ *
+ * Arjan thinks the old way was overly complex, so it got simplified.
+ * Alan disagrees, saying that we need the complexity to handle the weird
+ * nature of ISA shared interrupts. (This is a special exception.)
+ *
+ * In order to handle ISA shared interrupts properly, we need to check
+ * that all ports have been serviced, and therefore the ISA interrupt
+ * line has been de-asserted.
+ *
+ * This means we need to loop through all ports, checking that they
+ * don't have an interrupt pending.
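+ *
+ * The loop below uses "end" as a sentinel: it points at the first port
+ * that reported no pending work, and is reset whenever any port does
+ * report work.  The walk therefore terminates only once a complete lap
+ * finds every port idle, or PASS_LIMIT trips as a safety net against a
+ * stuck interrupt line.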
+ */
+static irqreturn_t serial8250_interrupt(int irq, void *dev_id)
+{
+ struct irq_info *i = dev_id;
+ struct list_head *l, *end = NULL;
+ int pass_counter = 0, handled = 0;
+
+ DEBUG_INTR("serial8250_interrupt(%d)...", irq);
+
+ spin_lock(&i->lock);
+
+ l = i->head;
+ do {
+ struct uart_8250_port *up;
+ struct uart_port *port;
+
+ up = list_entry(l, struct uart_8250_port, list);
+ port = &up->port;
+
+ if (port->handle_irq(port)) {
+ handled = 1;
+ end = NULL;
+ } else if (end == NULL)
+ end = l;
+
+ l = l->next;
+
+ if (l == i->head && pass_counter++ > PASS_LIMIT) {
+ /* If we hit this, we're dead. */
+ printk_ratelimited(KERN_ERR
+ "serial8250: too much work for irq%d\n", irq);
+ break;
+ }
+ } while (l != end);
+
+ spin_unlock(&i->lock);
+
+ DEBUG_INTR("end.\n");
+
+ return IRQ_RETVAL(handled);
+}
+
+/*
+ * To support ISA shared interrupts, we need to have one interrupt
+ * handler that ensures that the IRQ line has been deasserted
+ * before returning. Failing to do this will result in the IRQ
+ * line being stuck active, and, since ISA irqs are edge triggered,
+ * no more IRQs will be seen.
+ */
+static void serial_do_unlink(struct irq_info *i, struct uart_8250_port *up)
+{
+ spin_lock_irq(&i->lock);
+
+ if (!list_empty(i->head)) {
+ if (i->head == &up->list)
+ i->head = i->head->next;
+ list_del(&up->list);
+ } else {
+ BUG_ON(i->head != &up->list);
+ i->head = NULL;
+ }
+ spin_unlock_irq(&i->lock);
+ /* List empty so throw away the hash node */
+ if (i->head == NULL) {
+ hlist_del(&i->node);
+ kfree(i);
+ }
+}
+
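+/*
+ * Link the port into the per-IRQ list walked by serial8250_interrupt().
+ * Only the first port on a given IRQ actually calls request_irq();
+ * subsequent ports simply join its list so that a single handler can
+ * service them all.
+ */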
+static int serial_link_irq_chain(struct uart_8250_port *up)
+{
+ struct hlist_head *h;
+ struct hlist_node *n;
+ struct irq_info *i;
+ int ret, irq_flags = up->port.flags & UPF_SHARE_IRQ ? IRQF_SHARED : 0;
+
+ mutex_lock(&hash_mutex);
+
+ h = &irq_lists[up->port.irq % NR_IRQ_HASH];
+
+ hlist_for_each(n, h) {
+ i = hlist_entry(n, struct irq_info, node);
+ if (i->irq == up->port.irq)
+ break;
+ }
+
+ if (n == NULL) {
+ i = kzalloc(sizeof(struct irq_info), GFP_KERNEL);
+ if (i == NULL) {
+ mutex_unlock(&hash_mutex);
+ return -ENOMEM;
+ }
+ spin_lock_init(&i->lock);
+ i->irq = up->port.irq;
+ hlist_add_head(&i->node, h);
+ }
+ mutex_unlock(&hash_mutex);
+
+ spin_lock_irq(&i->lock);
+
+ if (i->head) {
+ list_add(&up->list, i->head);
+ spin_unlock_irq(&i->lock);
+
+ ret = 0;
+ } else {
+ INIT_LIST_HEAD(&up->list);
+ i->head = &up->list;
+ spin_unlock_irq(&i->lock);
+ irq_flags |= up->port.irqflags;
+ ret = request_irq(up->port.irq, serial8250_interrupt,
+ irq_flags, "serial", i);
+ if (ret < 0)
+ serial_do_unlink(i, up);
+ }
+
+ return ret;
+}
+
+static void serial_unlink_irq_chain(struct uart_8250_port *up)
+{
+ struct irq_info *i;
+ struct hlist_node *n;
+ struct hlist_head *h;
+
+ mutex_lock(&hash_mutex);
+
+ h = &irq_lists[up->port.irq % NR_IRQ_HASH];
+
+ hlist_for_each(n, h) {
+ i = hlist_entry(n, struct irq_info, node);
+ if (i->irq == up->port.irq)
+ break;
+ }
+
+ BUG_ON(n == NULL);
+ BUG_ON(i->head == NULL);
+
+ if (list_empty(i->head))
+ free_irq(up->port.irq, i);
+
+ serial_do_unlink(i, up);
+ mutex_unlock(&hash_mutex);
+}
+
+/*
+ * This function is used to handle ports that do not have an
+ * interrupt. This doesn't work very well for 16450s, but gives
+ * barely passable results for a 16550A (although at the expense
+ * of much CPU overhead).
+ */
+static void serial8250_timeout(unsigned long data)
+{
+ struct uart_8250_port *up = (struct uart_8250_port *)data;
+
+ up->port.handle_irq(&up->port);
+ mod_timer(&up->timer, jiffies + uart_poll_timeout(&up->port));
+}
+
+static void serial8250_backup_timeout(unsigned long data)
+{
+ struct uart_8250_port *up = (struct uart_8250_port *)data;
+ unsigned int iir, ier = 0, lsr;
+ unsigned long flags;
+
+ spin_lock_irqsave(&up->port.lock, flags);
+
+ /*
+ * Must disable interrupts or else we risk racing with the interrupt
+ * based handler.
+ */
+ if (up->port.irq) {
+ ier = serial_in(up, UART_IER);
+ serial_out(up, UART_IER, 0);
+ }
+
+ iir = serial_in(up, UART_IIR);
+
+ /*
+ * This should be a safe test for anyone who doesn't trust the
+ * IIR bits on their UART, but it's specifically designed for
+ * the "Diva" UART used on the management processor on many HP
+ * ia64 and parisc boxes.
+ */
+ lsr = serial_in(up, UART_LSR);
+ up->lsr_saved_flags |= lsr & LSR_SAVE_FLAGS;
+ if ((iir & UART_IIR_NO_INT) && (up->ier & UART_IER_THRI) &&
+ (!uart_circ_empty(&up->port.state->xmit) || up->port.x_char) &&
+ (lsr & UART_LSR_THRE)) {
+ iir &= ~(UART_IIR_ID | UART_IIR_NO_INT);
+ iir |= UART_IIR_THRI;
+ }
+
+ if (!(iir & UART_IIR_NO_INT))
+ serial8250_tx_chars(up);
+
+ if (up->port.irq)
+ serial_out(up, UART_IER, ier);
+
+ spin_unlock_irqrestore(&up->port.lock, flags);
+
+ /* Standard timer interval plus 0.2s to keep the port running */
+ mod_timer(&up->timer,
+ jiffies + uart_poll_timeout(&up->port) + HZ / 5);
+}
+
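+/*
+ * BOTH_EMPTY is UART_LSR_TEMT | UART_LSR_THRE: the transmitter is only
+ * truly idle once both the holding register and the shift register
+ * have drained.
+ */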
+static unsigned int serial8250_tx_empty(struct uart_port *port)
+{
+ struct uart_8250_port *up =
+ container_of(port, struct uart_8250_port, port);
+ unsigned long flags;
+ unsigned int lsr;
+
+ spin_lock_irqsave(&port->lock, flags);
+ lsr = serial_port_in(port, UART_LSR);
+ up->lsr_saved_flags |= lsr & LSR_SAVE_FLAGS;
+ spin_unlock_irqrestore(&port->lock, flags);
+
+ return (lsr & BOTH_EMPTY) == BOTH_EMPTY ? TIOCSER_TEMT : 0;
+}
+
+static unsigned int serial8250_get_mctrl(struct uart_port *port)
+{
+ struct uart_8250_port *up =
+ container_of(port, struct uart_8250_port, port);
+ unsigned int status;
+ unsigned int ret;
+
+ status = serial8250_modem_status(up);
+
+ ret = 0;
+ if (status & UART_MSR_DCD)
+ ret |= TIOCM_CAR;
+ if (status & UART_MSR_RI)
+ ret |= TIOCM_RNG;
+ if (status & UART_MSR_DSR)
+ ret |= TIOCM_DSR;
+ if (status & UART_MSR_CTS)
+ ret |= TIOCM_CTS;
+ return ret;
+}
+
+static void serial8250_set_mctrl(struct uart_port *port, unsigned int mctrl)
+{
+ struct uart_8250_port *up =
+ container_of(port, struct uart_8250_port, port);
+ unsigned char mcr = 0;
+
+ if (mctrl & TIOCM_RTS)
+ mcr |= UART_MCR_RTS;
+ if (mctrl & TIOCM_DTR)
+ mcr |= UART_MCR_DTR;
+ if (mctrl & TIOCM_OUT1)
+ mcr |= UART_MCR_OUT1;
+ if (mctrl & TIOCM_OUT2)
+ mcr |= UART_MCR_OUT2;
+ if (mctrl & TIOCM_LOOP)
+ mcr |= UART_MCR_LOOP;
+
+ mcr = (mcr & up->mcr_mask) | up->mcr_force | up->mcr;
+
+ serial_port_out(port, UART_MCR, mcr);
+}
+
+static void serial8250_break_ctl(struct uart_port *port, int break_state)
+{
+ struct uart_8250_port *up =
+ container_of(port, struct uart_8250_port, port);
+ unsigned long flags;
+
+ spin_lock_irqsave(&port->lock, flags);
+ if (break_state == -1)
+ up->lcr |= UART_LCR_SBC;
+ else
+ up->lcr &= ~UART_LCR_SBC;
+ serial_port_out(port, UART_LCR, up->lcr);
+ spin_unlock_irqrestore(&port->lock, flags);
+}
+
+/*
+ * Wait for transmitter & holding register to empty
+ */
+static void wait_for_xmitr(struct uart_8250_port *up, int bits)
+{
+ unsigned int status, tmout = 10000;
+
+ /* Wait up to 10ms for the character(s) to be sent. */
+ for (;;) {
+ status = serial_in(up, UART_LSR);
+
+ up->lsr_saved_flags |= status & LSR_SAVE_FLAGS;
+
+ if ((status & bits) == bits)
+ break;
+ if (--tmout == 0)
+ break;
+ udelay(1);
+ }
+
+ /* Wait up to 1s for flow control if necessary */
+ if (up->port.flags & UPF_CONS_FLOW) {
+ unsigned int tmout;
+ for (tmout = 1000000; tmout; tmout--) {
+ unsigned int msr = serial_in(up, UART_MSR);
+ up->msr_saved_flags |= msr & MSR_SAVE_FLAGS;
+ if (msr & UART_MSR_CTS)
+ break;
+ udelay(1);
+ touch_nmi_watchdog();
+ }
+ }
+}
+
+#ifdef CONFIG_CONSOLE_POLL
+/*
+ * Console polling routines for writing and reading from the UART while
+ * in an interrupt or debug context.
+ */
+
+static int serial8250_get_poll_char(struct uart_port *port)
+{
+ unsigned char lsr = serial_port_in(port, UART_LSR);
+
+ if (!(lsr & UART_LSR_DR))
+ return NO_POLL_CHAR;
+
+ return serial_port_in(port, UART_RX);
+}
+
+
+static void serial8250_put_poll_char(struct uart_port *port,
+ unsigned char c)
+{
+ unsigned int ier;
+ struct uart_8250_port *up =
+ container_of(port, struct uart_8250_port, port);
+
+ /*
+ * First save the IER, then disable the interrupts
+ */
+ ier = serial_port_in(port, UART_IER);
+ if (up->capabilities & UART_CAP_UUE)
+ serial_port_out(port, UART_IER, UART_IER_UUE);
+ else
+ serial_port_out(port, UART_IER, 0);
+
+ wait_for_xmitr(up, BOTH_EMPTY);
+ /*
+ * Send the character out.
+ * If a LF, also do CR...
+ */
+ serial_port_out(port, UART_TX, c);
+ if (c == 10) {
+ wait_for_xmitr(up, BOTH_EMPTY);
+ serial_port_out(port, UART_TX, 13);
+ }
+
+ /*
+ * Finally, wait for transmitter to become empty
+ * and restore the IER
+ */
+ wait_for_xmitr(up, BOTH_EMPTY);
+ serial_port_out(port, UART_IER, ier);
+}
+
+#endif /* CONFIG_CONSOLE_POLL */
+
+static int serial8250_startup(struct uart_port *port)
+{
+ struct uart_8250_port *up =
+ container_of(port, struct uart_8250_port, port);
+ unsigned long flags;
+ unsigned char lsr, iir;
+ int retval;
+
+ if (port->type == PORT_8250_CIR)
+ return -ENODEV;
+
+ if (!port->fifosize)
+ port->fifosize = uart_config[port->type].fifo_size;
+ if (!up->tx_loadsz)
+ up->tx_loadsz = uart_config[port->type].tx_loadsz;
+ if (!up->capabilities)
+ up->capabilities = uart_config[port->type].flags;
+ up->mcr = 0;
+
+ if (port->iotype != up->cur_iotype)
+ set_io_from_upio(port);
+
+ if (port->type == PORT_16C950) {
+ /* Wake up and initialize UART */
+ up->acr = 0;
+ serial_port_out(port, UART_LCR, UART_LCR_CONF_MODE_B);
+ serial_port_out(port, UART_EFR, UART_EFR_ECB);
+ serial_port_out(port, UART_IER, 0);
+ serial_port_out(port, UART_LCR, 0);
+ serial_icr_write(up, UART_CSR, 0); /* Reset the UART */
+ serial_port_out(port, UART_LCR, UART_LCR_CONF_MODE_B);
+ serial_port_out(port, UART_EFR, UART_EFR_ECB);
+ serial_port_out(port, UART_LCR, 0);
+ }
+
+#ifdef CONFIG_SERIAL_8250_RSA
+ /*
+ * If this is an RSA port, see if we can kick it up to the
+ * higher speed clock.
+ */
+ enable_rsa(up);
+#endif
+
+ /*
+ * Clear the FIFO buffers and disable them.
+ * (they will be reenabled in set_termios())
+ */
+ serial8250_clear_fifos(up);
+
+ /*
+ * Clear the interrupt registers.
+ */
+ serial_port_in(port, UART_LSR);
+ serial_port_in(port, UART_RX);
+ serial_port_in(port, UART_IIR);
+ serial_port_in(port, UART_MSR);
+
+ /*
+ * At this point, there's no way the LSR could still be 0xff;
+ * if it is, then bail out, because there's likely no UART
+ * here.
+ */
+ if (!(port->flags & UPF_BUGGY_UART) &&
+ (serial_port_in(port, UART_LSR) == 0xff)) {
+ printk_ratelimited(KERN_INFO "ttyS%d: LSR safety check engaged!\n",
+ serial_index(port));
+ return -ENODEV;
+ }
+
+ /*
+ * For an XR16C850, we need to set the trigger levels
+ */
+ if (port->type == PORT_16850) {
+ unsigned char fctr;
+
+ serial_out(up, UART_LCR, UART_LCR_CONF_MODE_B);
+
+ fctr = serial_in(up, UART_FCTR) & ~(UART_FCTR_RX|UART_FCTR_TX);
+ serial_port_out(port, UART_FCTR,
+ fctr | UART_FCTR_TRGD | UART_FCTR_RX);
+ serial_port_out(port, UART_TRG, UART_TRG_96);
+ serial_port_out(port, UART_FCTR,
+ fctr | UART_FCTR_TRGD | UART_FCTR_TX);
+ serial_port_out(port, UART_TRG, UART_TRG_96);
+
+ serial_port_out(port, UART_LCR, 0);
+ }
+
+ if (port->irq) {
+ unsigned char iir1;
+ /*
+ * Test for UARTs that do not reassert THRE when the
+ * transmitter is idle and the interrupt has already
+ * been cleared. Real 16550s should always reassert
+ * this interrupt whenever the transmitter is idle and
+ * the interrupt is enabled. Delays are necessary to
+ * allow register changes to become visible.
+ */
+ spin_lock_irqsave(&port->lock, flags);
+ if (up->port.irqflags & IRQF_SHARED)
+ disable_irq_nosync(port->irq);
+
+ wait_for_xmitr(up, UART_LSR_THRE);
+ serial_port_out_sync(port, UART_IER, UART_IER_THRI);
+ udelay(1); /* allow THRE to set */
+ iir1 = serial_port_in(port, UART_IIR);
+ serial_port_out(port, UART_IER, 0);
+ serial_port_out_sync(port, UART_IER, UART_IER_THRI);
+ udelay(1); /* allow a working UART time to re-assert THRE */
+ iir = serial_port_in(port, UART_IIR);
+ serial_port_out(port, UART_IER, 0);
+
+ if (port->irqflags & IRQF_SHARED)
+ enable_irq(port->irq);
+ spin_unlock_irqrestore(&port->lock, flags);
+
+ /*
+ * If the interrupt is not reasserted, or we otherwise
+ * don't trust the IIR, set up a timer to kick the UART
+ * on a regular basis.
+ */
+ if ((!(iir1 & UART_IIR_NO_INT) && (iir & UART_IIR_NO_INT)) ||
+ up->port.flags & UPF_BUG_THRE) {
+ up->bugs |= UART_BUG_THRE;
+ pr_debug("ttyS%d - using backup timer\n",
+ serial_index(port));
+ }
+ }
+
+ /*
+ * The above check will only give an accurate result the first time
+ * the port is opened, so this value needs to be preserved.
+ */
+ if (up->bugs & UART_BUG_THRE) {
+ up->timer.function = serial8250_backup_timeout;
+ up->timer.data = (unsigned long)up;
+ mod_timer(&up->timer, jiffies +
+ uart_poll_timeout(port) + HZ / 5);
+ }
+
+ /*
+ * If the "interrupt" for this port doesn't correspond with any
+ * hardware interrupt, we use a timer-based system. The original
+ * driver used to do this with IRQ0.
+ */
+ if (!port->irq) {
+ up->timer.data = (unsigned long)up;
+ mod_timer(&up->timer, jiffies + uart_poll_timeout(port));
+ } else {
+ retval = serial_link_irq_chain(up);
+ if (retval)
+ return retval;
+ }
+
+ /*
+ * Now, initialize the UART
+ */
+ serial_port_out(port, UART_LCR, UART_LCR_WLEN8);
+
+ spin_lock_irqsave(&port->lock, flags);
+ if (up->port.flags & UPF_FOURPORT) {
+ if (!up->port.irq)
+ up->port.mctrl |= TIOCM_OUT1;
+ } else
+ /*
+ * Most PC UARTs need OUT2 raised to enable interrupts.
+ */
+ if (port->irq)
+ up->port.mctrl |= TIOCM_OUT2;
+
+ serial8250_set_mctrl(port, port->mctrl);
+
+ /*
+ * Serial over LAN (SoL) hack:
+ * Intel 8257x Gigabit Ethernet chips have a 16550 emulation, to
+ * be used for Serial over LAN.  Those chips take longer than a
+ * normal serial device to signal that transmit data has been
+ * queued, so the TX-irq test below generally fails.  One
+ * solution would be to delay reading the IIR, but that is not
+ * reliable since the required timeout varies.  So simply skip
+ * the test on these ports; that way we never enable
+ * UART_BUG_TXEN.
+ */
+ if (skip_txen_test || up->port.flags & UPF_NO_TXEN_TEST)
+ goto dont_test_tx_en;
+
+ /*
+ * Do a quick test to see if we receive an
+ * interrupt when we enable the TX irq.
+ */
+ serial_port_out(port, UART_IER, UART_IER_THRI);
+ lsr = serial_port_in(port, UART_LSR);
+ iir = serial_port_in(port, UART_IIR);
+ serial_port_out(port, UART_IER, 0);
+
+ if (lsr & UART_LSR_TEMT && iir & UART_IIR_NO_INT) {
+ if (!(up->bugs & UART_BUG_TXEN)) {
+ up->bugs |= UART_BUG_TXEN;
+ pr_debug("ttyS%d - enabling bad tx status workarounds\n",
+ serial_index(port));
+ }
+ } else {
+ up->bugs &= ~UART_BUG_TXEN;
+ }
+
+dont_test_tx_en:
+ spin_unlock_irqrestore(&port->lock, flags);
+
+ /*
+ * Clear the interrupt registers again for luck, and clear the
+ * saved flags to avoid getting false values from polling
+ * routines or the previous session.
+ */
+ serial_port_in(port, UART_LSR);
+ serial_port_in(port, UART_RX);
+ serial_port_in(port, UART_IIR);
+ serial_port_in(port, UART_MSR);
+ up->lsr_saved_flags = 0;
+ up->msr_saved_flags = 0;
+
+ /*
+ * Request DMA channels for both RX and TX.
+ */
+ if (up->dma) {
+ retval = serial8250_request_dma(up);
+ if (retval) {
+ pr_warn_ratelimited("ttyS%d - failed to request DMA\n",
+ serial_index(port));
+ up->dma = NULL;
+ }
+ }
+
+ /*
+ * Finally, enable interrupts. Note: Modem status interrupts
+ * are set via set_termios(), which will be occurring imminently
+ * anyway, so we don't enable them here.
+ */
+ up->ier = UART_IER_RLSI | UART_IER_RDI;
+ serial_port_out(port, UART_IER, up->ier);
+
+ if (port->flags & UPF_FOURPORT) {
+ unsigned int icp;
+ /*
+ * Enable interrupts on the AST Fourport board
+ */
+ icp = (port->iobase & 0xfe0) | 0x01f;
+ outb_p(0x80, icp);
+ inb_p(icp);
+ }
+
+ return 0;
+}
+
+static void serial8250_shutdown(struct uart_port *port)
+{
+ struct uart_8250_port *up =
+ container_of(port, struct uart_8250_port, port);
+ unsigned long flags;
+
+ /*
+ * Disable interrupts from this port
+ */
+ up->ier = 0;
+ serial_port_out(port, UART_IER, 0);
+
+ if (up->dma)
+ serial8250_release_dma(up);
+
+ spin_lock_irqsave(&port->lock, flags);
+ if (port->flags & UPF_FOURPORT) {
+ /* reset interrupts on the AST Fourport board */
+ inb((port->iobase & 0xfe0) | 0x1f);
+ port->mctrl |= TIOCM_OUT1;
+ } else
+ port->mctrl &= ~TIOCM_OUT2;
+
+ serial8250_set_mctrl(port, port->mctrl);
+ spin_unlock_irqrestore(&port->lock, flags);
+
+ /*
+ * Disable break condition and FIFOs
+ */
+ serial_port_out(port, UART_LCR,
+ serial_port_in(port, UART_LCR) & ~UART_LCR_SBC);
+ serial8250_clear_fifos(up);
+
+#ifdef CONFIG_SERIAL_8250_RSA
+ /*
+ * Reset the RSA board back to 115kbps compat mode.
+ */
+ disable_rsa(up);
+#endif
+
+ /*
+ * Read data port to reset things, and then unlink from
+ * the IRQ chain.
+ */
+ serial_port_in(port, UART_RX);
+
+ del_timer_sync(&up->timer);
+ up->timer.function = serial8250_timeout;
+ if (port->irq)
+ serial_unlink_irq_chain(up);
+}
+
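+/*
+ * The standard divisor is quot = uartclk / (16 * baud); for example,
+ * the classic 1.8432 MHz clock gives quot = 1 at 115200 baud and
+ * quot = 12 at 9600 baud.  SMSC SuperIO chips additionally accept the
+ * magic values 0x8001 and 0x8002, selecting uartclk/4 and uartclk/8
+ * respectively, i.e. rates above the normal baud_base.
+ */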
+static unsigned int serial8250_get_divisor(struct uart_port *port, unsigned int baud)
+{
+ unsigned int quot;
+
+ /*
+ * Handle magic divisors for baud rates above baud_base on
+ * SMSC SuperIO chips.
+ */
+ if ((port->flags & UPF_MAGIC_MULTIPLIER) &&
+ baud == (port->uartclk/4))
+ quot = 0x8001;
+ else if ((port->flags & UPF_MAGIC_MULTIPLIER) &&
+ baud == (port->uartclk/8))
+ quot = 0x8002;
+ else
+ quot = uart_get_divisor(port, baud);
+
+ return quot;
+}
+
+void
+serial8250_do_set_termios(struct uart_port *port, struct ktermios *termios,
+ struct ktermios *old)
+{
+ struct uart_8250_port *up =
+ container_of(port, struct uart_8250_port, port);
+ unsigned char cval, fcr = 0;
+ unsigned long flags;
+ unsigned int baud, quot;
+ int fifo_bug = 0;
+
+ switch (termios->c_cflag & CSIZE) {
+ case CS5:
+ cval = UART_LCR_WLEN5;
+ break;
+ case CS6:
+ cval = UART_LCR_WLEN6;
+ break;
+ case CS7:
+ cval = UART_LCR_WLEN7;
+ break;
+ default:
+ case CS8:
+ cval = UART_LCR_WLEN8;
+ break;
+ }
+
+ if (termios->c_cflag & CSTOPB)
+ cval |= UART_LCR_STOP;
+ if (termios->c_cflag & PARENB) {
+ cval |= UART_LCR_PARITY;
+ if (up->bugs & UART_BUG_PARITY)
+ fifo_bug = 1;
+ }
+ if (!(termios->c_cflag & PARODD))
+ cval |= UART_LCR_EPAR;
+#ifdef CMSPAR
+ if (termios->c_cflag & CMSPAR)
+ cval |= UART_LCR_SPAR;
+#endif
+
+ /*
+ * Ask the core to calculate the divisor for us.
+ */
+ baud = uart_get_baud_rate(port, termios, old,
+ port->uartclk / 16 / 0xffff,
+ port->uartclk / 16);
+ quot = serial8250_get_divisor(port, baud);
+
+ /*
+ * Oxford Semi 952 rev B workaround
+ */
+ if (up->bugs & UART_BUG_QUOT && (quot & 0xff) == 0)
+ quot++;
+
+ if (up->capabilities & UART_CAP_FIFO && port->fifosize > 1) {
+ fcr = uart_config[port->type].fcr;
+ if (baud < 2400 || fifo_bug) {
+ fcr &= ~UART_FCR_TRIGGER_MASK;
+ fcr |= UART_FCR_TRIGGER_1;
+ }
+ }
+
+ /*
+ * MCR-based auto flow control. When AFE is enabled, RTS will be
+ * deasserted when the receive FIFO contains more characters than
+ * the trigger, or the MCR RTS bit is cleared. In the case where
+ * the remote UART is not using CTS auto flow control, we must
+ * have sufficient FIFO entries for the latency of the remote
+ * UART to respond. IOW, at least 32 bytes of FIFO.
+ */
+ if (up->capabilities & UART_CAP_AFE && port->fifosize >= 32) {
+ up->mcr &= ~UART_MCR_AFE;
+ if (termios->c_cflag & CRTSCTS)
+ up->mcr |= UART_MCR_AFE;
+ }
+
+ /*
+ * Ok, we're now changing the port state. Do it with
+ * interrupts disabled.
+ */
+ spin_lock_irqsave(&port->lock, flags);
+
+ /*
+ * Update the per-port timeout.
+ */
+ uart_update_timeout(port, termios->c_cflag, baud);
+
+ port->read_status_mask = UART_LSR_OE | UART_LSR_THRE | UART_LSR_DR;
+ if (termios->c_iflag & INPCK)
+ port->read_status_mask |= UART_LSR_FE | UART_LSR_PE;
+ if (termios->c_iflag & (BRKINT | PARMRK))
+ port->read_status_mask |= UART_LSR_BI;
+
+ /*
+ * Characters to ignore
+ */
+ port->ignore_status_mask = 0;
+ if (termios->c_iflag & IGNPAR)
+ port->ignore_status_mask |= UART_LSR_PE | UART_LSR_FE;
+ if (termios->c_iflag & IGNBRK) {
+ port->ignore_status_mask |= UART_LSR_BI;
+ /*
+ * If we're ignoring parity and break indicators,
+ * ignore overruns too (for real raw support).
+ */
+ if (termios->c_iflag & IGNPAR)
+ port->ignore_status_mask |= UART_LSR_OE;
+ }
+
+ /*
+ * ignore all characters if CREAD is not set
+ */
+ if ((termios->c_cflag & CREAD) == 0)
+ port->ignore_status_mask |= UART_LSR_DR;
+
+ /*
+ * CTS flow control flag and modem status interrupts
+ */
+ up->ier &= ~UART_IER_MSI;
+ if (!(up->bugs & UART_BUG_NOMSR) &&
+ UART_ENABLE_MS(&up->port, termios->c_cflag))
+ up->ier |= UART_IER_MSI;
+ if (up->capabilities & UART_CAP_UUE)
+ up->ier |= UART_IER_UUE;
+ if (up->capabilities & UART_CAP_RTOIE)
+ up->ier |= UART_IER_RTOIE;
+
+ serial_port_out(port, UART_IER, up->ier);
+
+ if (up->capabilities & UART_CAP_EFR) {
+ unsigned char efr = 0;
+ /*
+ * TI16C752/Startech hardware flow control. FIXME:
+ * - TI16C752 requires control thresholds to be set.
+ * - UART_MCR_RTS is ineffective if auto-RTS mode is enabled.
+ */
+ if (termios->c_cflag & CRTSCTS)
+ efr |= UART_EFR_CTS;
+
+ serial_port_out(port, UART_LCR, UART_LCR_CONF_MODE_B);
+ if (port->flags & UPF_EXAR_EFR)
+ serial_port_out(port, UART_XR_EFR, efr);
+ else
+ serial_port_out(port, UART_EFR, efr);
+ }
+
+ /* Workaround to enable 115200 baud on OMAP1510 internal ports */
+ if (is_omap1510_8250(up)) {
+ if (baud == 115200) {
+ quot = 1;
+ serial_port_out(port, UART_OMAP_OSC_12M_SEL, 1);
+ } else
+ serial_port_out(port, UART_OMAP_OSC_12M_SEL, 0);
+ }
+
+ /*
+ * For NatSemi, switch to bank 2 not bank 1, to avoid resetting EXCR2,
+ * otherwise just set DLAB
+ */
+ if (up->capabilities & UART_NATSEMI)
+ serial_port_out(port, UART_LCR, 0xe0);
+ else
+ serial_port_out(port, UART_LCR, cval | UART_LCR_DLAB);
+
+ serial_dl_write(up, quot);
+
+ /*
+ * LCR DLAB must be set to enable 64-byte FIFO mode. If the FCR
+ * is written without DLAB set, this mode will be disabled.
+ */
+ if (port->type == PORT_16750)
+ serial_port_out(port, UART_FCR, fcr);
+
+ serial_port_out(port, UART_LCR, cval); /* reset DLAB */
+ up->lcr = cval; /* Save LCR */
+ if (port->type != PORT_16750) {
+ /* emulated UARTs (Lucent Venus 167x) need two steps */
+ if (fcr & UART_FCR_ENABLE_FIFO)
+ serial_port_out(port, UART_FCR, UART_FCR_ENABLE_FIFO);
+ serial_port_out(port, UART_FCR, fcr); /* set fcr */
+ }
+ serial8250_set_mctrl(port, port->mctrl);
+ spin_unlock_irqrestore(&port->lock, flags);
+ /* Don't rewrite B0 */
+ if (tty_termios_baud_rate(termios))
+ tty_termios_encode_baud_rate(termios, baud, baud);
+}
+EXPORT_SYMBOL(serial8250_do_set_termios);
+
+static void
+serial8250_set_termios(struct uart_port *port, struct ktermios *termios,
+ struct ktermios *old)
+{
+ if (port->set_termios)
+ port->set_termios(port, termios, old);
+ else
+ serial8250_do_set_termios(port, termios, old);
+}
+
+static void
+serial8250_set_ldisc(struct uart_port *port, int new)
+{
+ if (new == N_PPS) {
+ port->flags |= UPF_HARDPPS_CD;
+ serial8250_enable_ms(port);
+ } else
+ port->flags &= ~UPF_HARDPPS_CD;
+}
+
+
+void serial8250_do_pm(struct uart_port *port, unsigned int state,
+ unsigned int oldstate)
+{
+ struct uart_8250_port *p =
+ container_of(port, struct uart_8250_port, port);
+
+ serial8250_set_sleep(p, state != 0);
+}
+EXPORT_SYMBOL(serial8250_do_pm);
+
+static void
+serial8250_pm(struct uart_port *port, unsigned int state,
+ unsigned int oldstate)
+{
+ if (port->pm)
+ port->pm(port, state, oldstate);
+ else
+ serial8250_do_pm(port, state, oldstate);
+}
+
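+/*
+ * Size of the port's register window: a standard 8250 exposes eight
+ * registers spaced 1 << regshift bytes apart, while Alchemy (UPIO_AU)
+ * and OMAP1 parts map a larger region.
+ */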
+static unsigned int serial8250_port_size(struct uart_8250_port *pt)
+{
+ if (pt->port.iotype == UPIO_AU)
+ return 0x1000;
+ if (is_omap1_8250(pt))
+ return 0x16 << pt->port.regshift;
+
+ return 8 << pt->port.regshift;
+}
+
+/*
+ * Resource handling.
+ */
+static int serial8250_request_std_resource(struct uart_8250_port *up)
+{
+ unsigned int size = serial8250_port_size(up);
+ struct uart_port *port = &up->port;
+ int ret = 0;
+
+ switch (port->iotype) {
+ case UPIO_AU:
+ case UPIO_TSI:
+ case UPIO_MEM32:
+ case UPIO_MEM:
+ if (!port->mapbase)
+ break;
+
+ if (!request_mem_region(port->mapbase, size, "serial")) {
+ ret = -EBUSY;
+ break;
+ }
+
+ if (port->flags & UPF_IOREMAP) {
+ port->membase = ioremap_nocache(port->mapbase, size);
+ if (!port->membase) {
+ release_mem_region(port->mapbase, size);
+ ret = -ENOMEM;
+ }
+ }
+ break;
+
+ case UPIO_HUB6:
+ case UPIO_PORT:
+ if (!request_region(port->iobase, size, "serial"))
+ ret = -EBUSY;
+ break;
+ }
+ return ret;
+}
+
+static void serial8250_release_std_resource(struct uart_8250_port *up)
+{
+ unsigned int size = serial8250_port_size(up);
+ struct uart_port *port = &up->port;
+
+ switch (port->iotype) {
+ case UPIO_AU:
+ case UPIO_TSI:
+ case UPIO_MEM32:
+ case UPIO_MEM:
+ if (!port->mapbase)
+ break;
+
+ if (port->flags & UPF_IOREMAP) {
+ iounmap(port->membase);
+ port->membase = NULL;
+ }
+
+ release_mem_region(port->mapbase, size);
+ break;
+
+ case UPIO_HUB6:
+ case UPIO_PORT:
+ release_region(port->iobase, size);
+ break;
+ }
+}
+
+static int serial8250_request_rsa_resource(struct uart_8250_port *up)
+{
+ unsigned long start = UART_RSA_BASE << up->port.regshift;
+ unsigned int size = 8 << up->port.regshift;
+ struct uart_port *port = &up->port;
+ int ret = -EINVAL;
+
+ switch (port->iotype) {
+ case UPIO_HUB6:
+ case UPIO_PORT:
+ start += port->iobase;
+ if (request_region(start, size, "serial-rsa"))
+ ret = 0;
+ else
+ ret = -EBUSY;
+ break;
+ }
+
+ return ret;
+}
+
+static void serial8250_release_rsa_resource(struct uart_8250_port *up)
+{
+ unsigned long offset = UART_RSA_BASE << up->port.regshift;
+ unsigned int size = 8 << up->port.regshift;
+ struct uart_port *port = &up->port;
+
+ switch (port->iotype) {
+ case UPIO_HUB6:
+ case UPIO_PORT:
+ release_region(port->iobase + offset, size);
+ break;
+ }
+}
+
+static void serial8250_release_port(struct uart_port *port)
+{
+ struct uart_8250_port *up =
+ container_of(port, struct uart_8250_port, port);
+
+ serial8250_release_std_resource(up);
+ if (port->type == PORT_RSA)
+ serial8250_release_rsa_resource(up);
+}
+
+static int serial8250_request_port(struct uart_port *port)
+{
+ struct uart_8250_port *up =
+ container_of(port, struct uart_8250_port, port);
+ int ret;
+
+ if (port->type == PORT_8250_CIR)
+ return -ENODEV;
+
+ ret = serial8250_request_std_resource(up);
+ if (ret == 0 && port->type == PORT_RSA) {
+ ret = serial8250_request_rsa_resource(up);
+ if (ret < 0)
+ serial8250_release_std_resource(up);
+ }
+
+ return ret;
+}
+
+static void serial8250_config_port(struct uart_port *port, int flags)
+{
+ struct uart_8250_port *up =
+ container_of(port, struct uart_8250_port, port);
+ int probeflags = PROBE_ANY;
+ int ret;
+
+ if (port->type == PORT_8250_CIR)
+ return;
+
+ /*
+ * Find the region that we can probe for. This in turn
+ * tells us whether we can probe for the type of port.
+ */
+ ret = serial8250_request_std_resource(up);
+ if (ret < 0)
+ return;
+
+ ret = serial8250_request_rsa_resource(up);
+ if (ret < 0)
+ probeflags &= ~PROBE_RSA;
+
+ if (port->iotype != up->cur_iotype)
+ set_io_from_upio(port);
+
+ if (flags & UART_CONFIG_TYPE)
+ autoconfig(up, probeflags);
+
+ /* if access method is AU, it is a 16550 with a quirk */
+ if (port->type == PORT_16550A && port->iotype == UPIO_AU)
+ up->bugs |= UART_BUG_NOMSR;
+
+ if (port->type != PORT_UNKNOWN && flags & UART_CONFIG_IRQ)
+ autoconfig_irq(up);
+
+ if (port->type != PORT_RSA && probeflags & PROBE_RSA)
+ serial8250_release_rsa_resource(up);
+ if (port->type == PORT_UNKNOWN)
+ serial8250_release_std_resource(up);
+
+ /* FIXME: probably not the best place for this */
+ if ((port->type == PORT_XR17V35X) ||
+ (port->type == PORT_XR17D15X))
+ port->handle_irq = exar_handle_irq;
+}
+
+static int
+serial8250_verify_port(struct uart_port *port, struct serial_struct *ser)
+{
+ if (ser->irq >= nr_irqs || ser->irq < 0 ||
+ ser->baud_base < 9600 || ser->type < PORT_UNKNOWN ||
+ ser->type >= ARRAY_SIZE(uart_config) || ser->type == PORT_CIRRUS ||
+ ser->type == PORT_STARTECH)
+ return -EINVAL;
+ return 0;
+}
+
+static const char *
+serial8250_type(struct uart_port *port)
+{
+ int type = port->type;
+
+ if (type >= ARRAY_SIZE(uart_config))
+ type = 0;
+ return uart_config[type].name;
+}
+
+static struct uart_ops serial8250_pops = {
+ .tx_empty = serial8250_tx_empty,
+ .set_mctrl = serial8250_set_mctrl,
+ .get_mctrl = serial8250_get_mctrl,
+ .stop_tx = serial8250_stop_tx,
+ .start_tx = serial8250_start_tx,
+ .stop_rx = serial8250_stop_rx,
+ .enable_ms = serial8250_enable_ms,
+ .break_ctl = serial8250_break_ctl,
+ .startup = serial8250_startup,
+ .shutdown = serial8250_shutdown,
+ .set_termios = serial8250_set_termios,
+ .set_ldisc = serial8250_set_ldisc,
+ .pm = serial8250_pm,
+ .type = serial8250_type,
+ .release_port = serial8250_release_port,
+ .request_port = serial8250_request_port,
+ .config_port = serial8250_config_port,
+ .verify_port = serial8250_verify_port,
+#ifdef CONFIG_CONSOLE_POLL
+ .poll_get_char = serial8250_get_poll_char,
+ .poll_put_char = serial8250_put_poll_char,
+#endif
+};
+
+static struct uart_8250_port serial8250_ports[UART_NR];
+
+static void (*serial8250_isa_config)(int port, struct uart_port *up,
+ unsigned short *capabilities);
+
+void serial8250_set_isa_configurator(
+ void (*v)(int port, struct uart_port *up, unsigned short *capabilities))
+{
+ serial8250_isa_config = v;
+}
+EXPORT_SYMBOL(serial8250_set_isa_configurator);
+
+static void __init serial8250_isa_init_ports(void)
+{
+ struct uart_8250_port *up;
+ static int first = 1;
+ int i, irqflag = 0;
+
+ if (!first)
+ return;
+ first = 0;
+
+ if (nr_uarts > UART_NR)
+ nr_uarts = UART_NR;
+
+ for (i = 0; i < nr_uarts; i++) {
+ struct uart_8250_port *up = &serial8250_ports[i];
+ struct uart_port *port = &up->port;
+
+ port->line = i;
+ spin_lock_init(&port->lock);
+
+ init_timer(&up->timer);
+ up->timer.function = serial8250_timeout;
+ up->cur_iotype = 0xFF;
+
+ /*
+ * ALPHA_KLUDGE_MCR needs to be killed.
+ */
+ up->mcr_mask = ~ALPHA_KLUDGE_MCR;
+ up->mcr_force = ALPHA_KLUDGE_MCR;
+
+ port->ops = &serial8250_pops;
+ }
+
+ if (share_irqs)
+ irqflag = IRQF_SHARED;
+
+ for (i = 0, up = serial8250_ports;
+ i < ARRAY_SIZE(old_serial_port) && i < nr_uarts;
+ i++, up++) {
+ struct uart_port *port = &up->port;
+
+ port->iobase = old_serial_port[i].port;
+ port->irq = irq_canonicalize(old_serial_port[i].irq);
+ port->irqflags = old_serial_port[i].irqflags;
+ port->uartclk = old_serial_port[i].baud_base * 16;
+ port->flags = old_serial_port[i].flags;
+ port->hub6 = old_serial_port[i].hub6;
+ port->membase = old_serial_port[i].iomem_base;
+ port->iotype = old_serial_port[i].io_type;
+ port->regshift = old_serial_port[i].iomem_reg_shift;
+ set_io_from_upio(port);
+ port->irqflags |= irqflag;
+ if (serial8250_isa_config != NULL)
+ serial8250_isa_config(i, &up->port, &up->capabilities);
+
+ }
+}
+
+static void
+serial8250_init_fixed_type_port(struct uart_8250_port *up, unsigned int type)
+{
+ up->port.type = type;
+ if (!up->port.fifosize)
+ up->port.fifosize = uart_config[type].fifo_size;
+ if (!up->tx_loadsz)
+ up->tx_loadsz = uart_config[type].tx_loadsz;
+ if (!up->capabilities)
+ up->capabilities = uart_config[type].flags;
+}
+
+static void __init
+serial8250_register_ports(struct uart_driver *drv, struct device *dev)
+{
+ int i;
+
+ for (i = 0; i < nr_uarts; i++) {
+ struct uart_8250_port *up = &serial8250_ports[i];
+
+ if (up->port.dev)
+ continue;
+
+ up->port.dev = dev;
+
+ if (up->port.flags & UPF_FIXED_TYPE)
+ serial8250_init_fixed_type_port(up, up->port.type);
+
+ uart_add_one_port(drv, &up->port);
+ }
+}
+
+#ifdef CONFIG_SERIAL_8250_CONSOLE
+
+static void serial8250_console_putchar(struct uart_port *port, int ch)
+{
+ struct uart_8250_port *up =
+ container_of(port, struct uart_8250_port, port);
+
+ wait_for_xmitr(up, UART_LSR_THRE);
+ serial_port_out(port, UART_TX, ch);
+}
+
+/*
+ * Print a string to the serial port trying not to disturb
+ * any possible real use of the port...
+ *
+ * The console_lock must be held when we get here.
+ */
+static void
+serial8250_console_write(struct console *co, const char *s, unsigned int count)
+{
+ struct uart_8250_port *up = &serial8250_ports[co->index];
+ struct uart_port *port = &up->port;
+ unsigned long flags;
+ unsigned int ier;
+ int locked = 1;
+
+ touch_nmi_watchdog();
+
+ local_irq_save(flags);
+ if (port->sysrq) {
+ /* serial8250_handle_irq() already took the lock */
+ locked = 0;
+ } else if (oops_in_progress) {
+ locked = spin_trylock(&port->lock);
+ } else
+ spin_lock(&port->lock);
+
+ /*
+ * First save the IER, then disable the interrupts
+ */
+ ier = serial_port_in(port, UART_IER);
+
+ if (up->capabilities & UART_CAP_UUE)
+ serial_port_out(port, UART_IER, UART_IER_UUE);
+ else
+ serial_port_out(port, UART_IER, 0);
+
+ uart_console_write(port, s, count, serial8250_console_putchar);
+
+ /*
+ * Finally, wait for transmitter to become empty
+ * and restore the IER
+ */
+ wait_for_xmitr(up, BOTH_EMPTY);
+ serial_port_out(port, UART_IER, ier);
+
+ /*
+ * The receive handling will happen properly because the
+ * receive ready bit will still be set; it is not cleared
+ * on read.  However, modem status handling will not happen by
+ * itself; we must run it if anything was saved in
+ * msr_saved_flags while processing with interrupts off.
+ */
+ if (up->msr_saved_flags)
+ serial8250_modem_status(up);
+
+ if (locked)
+ spin_unlock(&port->lock);
+ local_irq_restore(flags);
+}
+
+static int __init serial8250_console_setup(struct console *co, char *options)
+{
+ struct uart_port *port;
+ int baud = 9600;
+ int bits = 8;
+ int parity = 'n';
+ int flow = 'n';
+
+ /*
+ * Check whether an invalid UART number has been specified, and
+ * if so, search for the first available port that does have
+ * console support.
+ */
+ if (co->index >= nr_uarts)
+ co->index = 0;
+ port = &serial8250_ports[co->index].port;
+ if (!port->iobase && !port->membase)
+ return -ENODEV;
+
+ if (options)
+ uart_parse_options(options, &baud, &parity, &bits, &flow);
+
+ return uart_set_options(port, co, baud, parity, bits, flow);
+}
+
+static int serial8250_console_early_setup(void)
+{
+ return serial8250_find_port_for_earlycon();
+}
+
+static struct console serial8250_console = {
+ .name = "ttyS",
+ .write = serial8250_console_write,
+ .device = uart_console_device,
+ .setup = serial8250_console_setup,
+ .early_setup = serial8250_console_early_setup,
+ .flags = CON_PRINTBUFFER | CON_ANYTIME,
+ .index = -1,
+ .data = &serial8250_reg,
+};
+
+static int __init serial8250_console_init(void)
+{
+ serial8250_isa_init_ports();
+ register_console(&serial8250_console);
+ return 0;
+}
+console_initcall(serial8250_console_init);
+
+int serial8250_find_port(struct uart_port *p)
+{
+ int line;
+ struct uart_port *port;
+
+ for (line = 0; line < nr_uarts; line++) {
+ port = &serial8250_ports[line].port;
+ if (uart_match_port(p, port))
+ return line;
+ }
+ return -ENODEV;
+}
+
+#define SERIAL8250_CONSOLE &serial8250_console
+#else
+#define SERIAL8250_CONSOLE NULL
+#endif
+
+static struct uart_driver serial8250_reg = {
+ .owner = THIS_MODULE,
+ .driver_name = "serial",
+ .dev_name = "ttyS",
+ .major = TTY_MAJOR,
+ .minor = 64,
+ .cons = SERIAL8250_CONSOLE,
+};
+
+/*
+ * early_serial_setup - early registration for 8250 ports
+ *
+ * Set up an 8250 port structure prior to console initialisation.  Using
+ * it after console initialisation will cause undefined behaviour.
+ */
+int __init early_serial_setup(struct uart_port *port)
+{
+ struct uart_port *p;
+
+ if (port->line >= ARRAY_SIZE(serial8250_ports))
+ return -ENODEV;
+
+ serial8250_isa_init_ports();
+ p = &serial8250_ports[port->line].port;
+ p->iobase = port->iobase;
+ p->membase = port->membase;
+ p->irq = port->irq;
+ p->irqflags = port->irqflags;
+ p->uartclk = port->uartclk;
+ p->fifosize = port->fifosize;
+ p->regshift = port->regshift;
+ p->iotype = port->iotype;
+ p->flags = port->flags;
+ p->mapbase = port->mapbase;
+ p->private_data = port->private_data;
+ p->type = port->type;
+ p->line = port->line;
+
+ set_io_from_upio(p);
+ if (port->serial_in)
+ p->serial_in = port->serial_in;
+ if (port->serial_out)
+ p->serial_out = port->serial_out;
+ if (port->handle_irq)
+ p->handle_irq = port->handle_irq;
+ else
+ p->handle_irq = serial8250_default_handle_irq;
+
+ return 0;
+}
+
+/**
+ * serial8250_suspend_port - suspend one serial port
+ * @line: serial line number
+ *
+ * Suspend one serial port.
+ */
+void serial8250_suspend_port(int line)
+{
+ uart_suspend_port(&serial8250_reg, &serial8250_ports[line].port);
+}
+
+/**
+ * serial8250_resume_port - resume one serial port
+ * @line: serial line number
+ *
+ * Resume one serial port.
+ */
+void serial8250_resume_port(int line)
+{
+ struct uart_8250_port *up = &serial8250_ports[line];
+ struct uart_port *port = &up->port;
+
+ if (up->capabilities & UART_NATSEMI) {
+ /* Ensure it's still in high speed mode */
+ serial_port_out(port, UART_LCR, 0xE0);
+
+ ns16550a_goto_highspeed(up);
+
+ serial_port_out(port, UART_LCR, 0);
+ port->uartclk = 921600*16;
+ }
+ uart_resume_port(&serial8250_reg, port);
+}
+
+/*
+ * Register a set of serial devices attached to a platform device. The
+ * list is terminated with a zero flags entry, which means we expect
+ * all entries to have at least UPF_BOOT_AUTOCONF set.
+ */
+static int serial8250_probe(struct platform_device *dev)
+{
+ struct plat_serial8250_port *p = dev->dev.platform_data;
+ struct uart_8250_port uart;
+ int ret, i, irqflag = 0;
+
+ memset(&uart, 0, sizeof(uart));
+
+ if (share_irqs)
+ irqflag = IRQF_SHARED;
+
+ for (i = 0; p && p->flags != 0; p++, i++) {
+ uart.port.iobase = p->iobase;
+ uart.port.membase = p->membase;
+ uart.port.irq = p->irq;
+ uart.port.irqflags = p->irqflags;
+ uart.port.uartclk = p->uartclk;
+ uart.port.regshift = p->regshift;
+ uart.port.iotype = p->iotype;
+ uart.port.flags = p->flags;
+ uart.port.mapbase = p->mapbase;
+ uart.port.hub6 = p->hub6;
+ uart.port.private_data = p->private_data;
+ uart.port.type = p->type;
+ uart.port.serial_in = p->serial_in;
+ uart.port.serial_out = p->serial_out;
+ uart.port.handle_irq = p->handle_irq;
+ uart.port.handle_break = p->handle_break;
+ uart.port.set_termios = p->set_termios;
+ uart.port.pm = p->pm;
+ uart.port.dev = &dev->dev;
+ uart.port.irqflags |= irqflag;
+ ret = serial8250_register_8250_port(&uart);
+ if (ret < 0) {
+ dev_err(&dev->dev, "unable to register port at index %d "
+ "(IO%lx MEM%llx IRQ%d): %d\n", i,
+ p->iobase, (unsigned long long)p->mapbase,
+ p->irq, ret);
+ }
+ }
+ return 0;
+}
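[Editorial aside: a board file feeds serial8250_probe() through platform_data, and the zero-flags entry is what terminates the walk above. A minimal sketch, with the device, MMIO base and IRQ invented for illustration.]

	/* Hypothetical board file; addresses and IRQs are illustrative. */
	static struct plat_serial8250_port board_serial_data[] = {
		{
			.mapbase  = 0x10000000,	/* invented MMIO base */
			.irq      = 42,		/* invented IRQ */
			.uartclk  = 1843200,
			.regshift = 2,
			.iotype   = UPIO_MEM,
			.flags    = UPF_BOOT_AUTOCONF | UPF_IOREMAP,
		},
		{ },	/* zero .flags terminates the list, as noted above */
	};

	static struct platform_device board_serial_device = {
		.name	= "serial8250",
		.id	= PLAT8250_DEV_PLATFORM,
		.dev	= {
			.platform_data = board_serial_data,
		},
	};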
+
+/*
+ * Remove serial ports registered against a platform device.
+ */
+static int serial8250_remove(struct platform_device *dev)
+{
+ int i;
+
+ for (i = 0; i < nr_uarts; i++) {
+ struct uart_8250_port *up = &serial8250_ports[i];
+
+ if (up->port.dev == &dev->dev)
+ serial8250_unregister_port(i);
+ }
+ return 0;
+}
+
+static int serial8250_suspend(struct platform_device *dev, pm_message_t state)
+{
+ int i;
+
+ for (i = 0; i < UART_NR; i++) {
+ struct uart_8250_port *up = &serial8250_ports[i];
+
+ if (up->port.type != PORT_UNKNOWN && up->port.dev == &dev->dev)
+ uart_suspend_port(&serial8250_reg, &up->port);
+ }
+
+ return 0;
+}
+
+static int serial8250_resume(struct platform_device *dev)
+{
+ int i;
+
+ for (i = 0; i < UART_NR; i++) {
+ struct uart_8250_port *up = &serial8250_ports[i];
+
+ if (up->port.type != PORT_UNKNOWN && up->port.dev == &dev->dev)
+ serial8250_resume_port(i);
+ }
+
+ return 0;
+}
+
+static struct platform_driver serial8250_isa_driver = {
+ .probe = serial8250_probe,
+ .remove = serial8250_remove,
+ .suspend = serial8250_suspend,
+ .resume = serial8250_resume,
+ .driver = {
+ .name = "serial8250",
+ .owner = THIS_MODULE,
+ },
+};
+
+/*
+ * This "device" covers _all_ ISA 8250-compatible serial devices listed
+ * in the table in include/asm/serial.h
+ */
+static struct platform_device *serial8250_isa_devs;
+
+/*
+ * serial8250_register_8250_port and serial8250_unregister_port allow for
+ * 16x50 serial ports to be configured at run-time, to support PCMCIA
+ * modems and PCI multiport cards.
+ */
+static DEFINE_MUTEX(serial_mutex);
+
+static struct uart_8250_port *serial8250_find_match_or_unused(struct uart_port *port)
+{
+ int i;
+
+ /*
+ * First, find a port entry which matches.
+ */
+ for (i = 0; i < nr_uarts; i++)
+ if (uart_match_port(&serial8250_ports[i].port, port))
+ return &serial8250_ports[i];
+
+ /*
+ * We didn't find a matching entry, so look for the first
+ * free entry. We look for one which hasn't been previously
+ * used (indicated by zero iobase).
+ */
+ for (i = 0; i < nr_uarts; i++)
+ if (serial8250_ports[i].port.type == PORT_UNKNOWN &&
+ serial8250_ports[i].port.iobase == 0)
+ return &serial8250_ports[i];
+
+ /*
+ * That also failed. Last resort is to find any entry which
+ * doesn't have a real port associated with it.
+ */
+ for (i = 0; i < nr_uarts; i++)
+ if (serial8250_ports[i].port.type == PORT_UNKNOWN)
+ return &serial8250_ports[i];
+
+ return NULL;
+}
+
+/**
+ * serial8250_register_8250_port - register a serial port
+ * @up: serial port template
+ *
+ * Configure the serial port specified by the request. If the
+ * port exists and is in use, it is hung up and unregistered
+ * first.
+ *
+ * The port is then probed and, if necessary, the IRQ is autodetected.
+ * If this fails, an error is returned.
+ *
+ * On success the port is ready to use and the line number is returned.
+ */
+int serial8250_register_8250_port(struct uart_8250_port *up)
+{
+ struct uart_8250_port *uart;
+ int ret = -ENOSPC;
+
+ if (up->port.uartclk == 0)
+ return -EINVAL;
+
+ mutex_lock(&serial_mutex);
+
+ uart = serial8250_find_match_or_unused(&up->port);
+ if (uart && uart->port.type != PORT_8250_CIR) {
+ if (uart->port.dev)
+ uart_remove_one_port(&serial8250_reg, &uart->port);
+
+ uart->port.iobase = up->port.iobase;
+ uart->port.membase = up->port.membase;
+ uart->port.irq = up->port.irq;
+ uart->port.irqflags = up->port.irqflags;
+ uart->port.uartclk = up->port.uartclk;
+ uart->port.fifosize = up->port.fifosize;
+ uart->port.regshift = up->port.regshift;
+ uart->port.iotype = up->port.iotype;
+ uart->port.flags = up->port.flags | UPF_BOOT_AUTOCONF;
+ uart->bugs = up->bugs;
+ uart->port.mapbase = up->port.mapbase;
+ uart->port.private_data = up->port.private_data;
+ uart->port.fifosize = up->port.fifosize;
+ uart->tx_loadsz = up->tx_loadsz;
+ uart->capabilities = up->capabilities;
+
+ if (up->port.dev)
+ uart->port.dev = up->port.dev;
+
+ if (up->port.flags & UPF_FIXED_TYPE)
+ serial8250_init_fixed_type_port(uart, up->port.type);
+
+ set_io_from_upio(&uart->port);
+ /* Possibly override default I/O functions. */
+ if (up->port.serial_in)
+ uart->port.serial_in = up->port.serial_in;
+ if (up->port.serial_out)
+ uart->port.serial_out = up->port.serial_out;
+ if (up->port.handle_irq)
+ uart->port.handle_irq = up->port.handle_irq;
+ /* Possibly override set_termios call */
+ if (up->port.set_termios)
+ uart->port.set_termios = up->port.set_termios;
+ if (up->port.pm)
+ uart->port.pm = up->port.pm;
+ if (up->port.handle_break)
+ uart->port.handle_break = up->port.handle_break;
+ if (up->dl_read)
+ uart->dl_read = up->dl_read;
+ if (up->dl_write)
+ uart->dl_write = up->dl_write;
+ if (up->dma)
+ uart->dma = up->dma;
+
+ if (serial8250_isa_config != NULL)
+ serial8250_isa_config(0, &uart->port,
+ &uart->capabilities);
+
+ ret = uart_add_one_port(&serial8250_reg, &uart->port);
+ if (ret == 0)
+ ret = uart->port.line;
+ }
+ mutex_unlock(&serial_mutex);
+
+ return ret;
+}
+EXPORT_SYMBOL(serial8250_register_8250_port);
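[Editorial aside: a hot-plug driver (PCMCIA modem, PCI multiport card) would typically call the function above as follows; a hedged sketch only, with example_attach() and its resource values invented.]

	/* Hypothetical attach path; iobase/irq would come from bus probe. */
	static int example_attach(unsigned long iobase, int irq)
	{
		struct uart_8250_port uart;
		int line;

		memset(&uart, 0, sizeof(uart));
		uart.port.iobase  = iobase;
		uart.port.irq     = irq;
		uart.port.uartclk = 1843200;	/* zero gets -EINVAL above */
		uart.port.iotype  = UPIO_PORT;
		uart.port.flags   = UPF_SHARE_IRQ; /* core ORs in BOOT_AUTOCONF */

		line = serial8250_register_8250_port(&uart);
		if (line < 0)
			return line;

		/* Keep "line" so detach can serial8250_unregister_port(line). */
		return 0;
	}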
+
+/**
+ * serial8250_unregister_port - remove a 16x50 serial port at runtime
+ * @line: serial line number
+ *
+ * Remove one serial port. This may not be called from interrupt
+ * context. We hand the port back to our control.
+ */
+void serial8250_unregister_port(int line)
+{
+ struct uart_8250_port *uart = &serial8250_ports[line];
+
+ mutex_lock(&serial_mutex);
+ uart_remove_one_port(&serial8250_reg, &uart->port);
+ if (serial8250_isa_devs) {
+ uart->port.flags &= ~UPF_BOOT_AUTOCONF;
+ uart->port.type = PORT_UNKNOWN;
+ uart->port.dev = &serial8250_isa_devs->dev;
+ uart->capabilities = uart_config[uart->port.type].flags;
+ uart_add_one_port(&serial8250_reg, &uart->port);
+ } else {
+ uart->port.dev = NULL;
+ }
+ mutex_unlock(&serial_mutex);
+}
+EXPORT_SYMBOL(serial8250_unregister_port);
+
+static int __init serial8250_init(void)
+{
+ int ret;
+
+ serial8250_isa_init_ports();
+
+ printk(KERN_INFO "Serial: 8250/16550 driver, "
+ "%d ports, IRQ sharing %sabled\n", nr_uarts,
+ share_irqs ? "en" : "dis");
+
+#ifdef CONFIG_SPARC
+ ret = sunserial_register_minors(&serial8250_reg, UART_NR);
+#else
+ serial8250_reg.nr = UART_NR;
+ ret = uart_register_driver(&serial8250_reg);
+#endif
+ if (ret)
+ goto out;
+
+ ret = serial8250_pnp_init();
+ if (ret)
+ goto unreg_uart_drv;
+
+ serial8250_isa_devs = platform_device_alloc("serial8250",
+ PLAT8250_DEV_LEGACY);
+ if (!serial8250_isa_devs) {
+ ret = -ENOMEM;
+ goto unreg_pnp;
+ }
+
+ ret = platform_device_add(serial8250_isa_devs);
+ if (ret)
+ goto put_dev;
+
+ serial8250_register_ports(&serial8250_reg, &serial8250_isa_devs->dev);
+
+ ret = platform_driver_register(&serial8250_isa_driver);
+ if (ret == 0)
+ goto out;
+
+ platform_device_del(serial8250_isa_devs);
+put_dev:
+ platform_device_put(serial8250_isa_devs);
+unreg_pnp:
+ serial8250_pnp_exit();
+unreg_uart_drv:
+#ifdef CONFIG_SPARC
+ sunserial_unregister_minors(&serial8250_reg, UART_NR);
+#else
+ uart_unregister_driver(&serial8250_reg);
+#endif
+out:
+ return ret;
+}
+
+static void __exit serial8250_exit(void)
+{
+ struct platform_device *isa_dev = serial8250_isa_devs;
+
+ /*
+ * This tells serial8250_unregister_port() not to re-register
+ * the ports (re-registering them would leave serial8250_isa_driver
+ * permanently in use).
+ */
+ serial8250_isa_devs = NULL;
+
+ platform_driver_unregister(&serial8250_isa_driver);
+ platform_device_unregister(isa_dev);
+
+ serial8250_pnp_exit();
+
+#ifdef CONFIG_SPARC
+ sunserial_unregister_minors(&serial8250_reg, UART_NR);
+#else
+ uart_unregister_driver(&serial8250_reg);
+#endif
+}
+
+module_init(serial8250_init);
+module_exit(serial8250_exit);
+
+EXPORT_SYMBOL(serial8250_suspend_port);
+EXPORT_SYMBOL(serial8250_resume_port);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Generic 8250/16x50 serial driver");
+
+module_param(share_irqs, uint, 0644);
+MODULE_PARM_DESC(share_irqs, "Share IRQs with other non-8250/16x50 devices"
+ " (unsafe)");
+
+module_param(nr_uarts, uint, 0644);
+MODULE_PARM_DESC(nr_uarts, "Maximum number of UARTs supported. (1-" __MODULE_STRING(CONFIG_SERIAL_8250_NR_UARTS) ")");
+
+module_param(skip_txen_test, uint, 0644);
+MODULE_PARM_DESC(skip_txen_test, "Skip checking for the TXEN bug at init time");
+
+#ifdef CONFIG_SERIAL_8250_RSA
+module_param_array(probe_rsa, ulong, &probe_rsa_count, 0444);
+MODULE_PARM_DESC(probe_rsa, "Probe I/O ports for RSA");
+#endif
+MODULE_ALIAS_CHARDEV_MAJOR(TTY_MAJOR);
+
+#ifdef CONFIG_SERIAL_8250_DEPRECATED_OPTIONS
+#ifndef MODULE
+/* This module was renamed to 8250_core in 3.7 and has now been renamed
+ * back to 8250. Keep the "8250_core" name working as well for the module
+ * options so we don't break people. We need to keep the names identical,
+ * but the convenience macros refuse to let us do that: they fail the
+ * build with redefinition errors for the global variables. So we stick
+ * them inside a dummy function to avoid those conflicts. The options
+ * still get parsed, and the redefined MODULE_PARAM_PREFIX keeps the
+ * "8250_core." syntax alive.
+ *
+ * This is hacky. I'm sorry.
+ */
+static void __used s8250_options(void)
+{
+#undef MODULE_PARAM_PREFIX
+#define MODULE_PARAM_PREFIX "8250_core."
+
+ module_param_cb(share_irqs, &param_ops_uint, &share_irqs, 0644);
+ module_param_cb(nr_uarts, &param_ops_uint, &nr_uarts, 0644);
+ module_param_cb(skip_txen_test, &param_ops_uint, &skip_txen_test, 0644);
+#ifdef CONFIG_SERIAL_8250_RSA
+ __module_param_call(MODULE_PARAM_PREFIX, probe_rsa,
+ &param_array_ops, .arr = &__param_arr_probe_rsa,
+ 0444, -1);
+#endif
+}
+#else
+MODULE_ALIAS("8250_core");
+#endif
+#endif
#define PCI_DEVICE_ID_PLX_CRONYX_OMEGA 0xc001
#define PCI_DEVICE_ID_INTEL_PATSBURG_KT 0x1d3d
#define PCI_VENDOR_ID_WCH 0x4348
+#define PCI_DEVICE_ID_WCH_CH352_2S 0x3253
#define PCI_DEVICE_ID_WCH_CH353_4S 0x3453
#define PCI_DEVICE_ID_WCH_CH353_2S1PF 0x5046
#define PCI_DEVICE_ID_WCH_CH353_2S1P 0x7053
.subdevice = PCI_ANY_ID,
.setup = pci_wch_ch353_setup,
},
+ /* WCH CH352 2S card (16550 clone) */
+ {
+ .vendor = PCI_VENDOR_ID_WCH,
+ .device = PCI_DEVICE_ID_WCH_CH352_2S,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ .setup = pci_wch_ch353_setup,
+ },
/*
* ASIX devices with FIFO bug
*/
PCI_ANY_ID, PCI_ANY_ID,
0, 0, pbn_b0_bt_2_115200 },
+ { PCI_VENDOR_ID_WCH, PCI_DEVICE_ID_WCH_CH352_2S,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0, pbn_b0_bt_2_115200 },
+
/*
* Commtech, Inc. Fastcom adapters
*/
{
struct uart_8250_port uart;
int ret, line, flags = dev_id->driver_data;
- struct resource *res = NULL;
if (flags & UNKNOWN_DEV) {
ret = serial_pnp_guess_board(dev);
memset(&uart, 0, sizeof(uart));
if (pnp_irq_valid(dev, 0))
uart.port.irq = pnp_irq(dev, 0);
- if ((flags & CIR_PORT) && pnp_port_valid(dev, 2))
- res = pnp_get_resource(dev, IORESOURCE_IO, 2);
- else if (pnp_port_valid(dev, 0))
- res = pnp_get_resource(dev, IORESOURCE_IO, 0);
- if (pnp_resource_enabled(res)) {
- uart.port.iobase = res->start;
+ if ((flags & CIR_PORT) && pnp_port_valid(dev, 2)) {
+ uart.port.iobase = pnp_port_start(dev, 2);
+ uart.port.iotype = UPIO_PORT;
+ } else if (pnp_port_valid(dev, 0)) {
+ uart.port.iobase = pnp_port_start(dev, 0);
uart.port.iotype = UPIO_PORT;
} else if (pnp_mem_valid(dev, 0)) {
uart.port.mapbase = pnp_mem_start(dev, 0);
Most people will say Y or M here, so that they can use serial mice,
modems and similar devices connecting to the standard serial ports.
+config SERIAL_8250_DEPRECATED_OPTIONS
+ bool "Support 8250_core.* kernel options (DEPRECATED)"
+ depends on SERIAL_8250
+ default y
+ ---help---
+ In 3.7 we renamed 8250 to 8250_core by mistake, so now we have to
+ accept kernel parameters in both forms like 8250_core.nr_uarts=4 and
+ 8250.nr_uarts=4. We have now renamed the module back to 8250, but if
+ anybody noticed the rename in 3.7 and changed their userspace we still
+ have to keep the 8250_core.* options around until they revert the
+ changes they already made.
+
+ If 8250 is built as a module, this adds an 8250_core alias instead.
+
+ If you did not notice the rename and/or you have userspace from
+ pre-3.7, it is safe (and recommended) to say N here.
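+
+ For example, with this option enabled, both of these boot parameters
+ set up four UARTs:
+
+   8250.nr_uarts=4
+   8250_core.nr_uarts=4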
+
config SERIAL_8250_PNP
bool "8250/16550 PNP device support" if EXPERT
depends on SERIAL_8250 && PNP
# Makefile for the 8250 serial device drivers.
#
-obj-$(CONFIG_SERIAL_8250) += 8250_core.o
-8250_core-y := 8250.o
-8250_core-$(CONFIG_SERIAL_8250_PNP) += 8250_pnp.o
-8250_core-$(CONFIG_SERIAL_8250_DMA) += 8250_dma.o
+obj-$(CONFIG_SERIAL_8250) += 8250.o
+8250-y := 8250_core.o
+8250-$(CONFIG_SERIAL_8250_PNP) += 8250_pnp.o
+8250-$(CONFIG_SERIAL_8250_DMA) += 8250_dma.o
obj-$(CONFIG_SERIAL_8250_GSC) += 8250_gsc.o
obj-$(CONFIG_SERIAL_8250_PCI) += 8250_pci.o
obj-$(CONFIG_SERIAL_8250_HP300) += 8250_hp300.o
};
static struct atmel_uart_port atmel_ports[ATMEL_MAX_UART];
-static unsigned long atmel_ports_in_use;
+static DECLARE_BITMAP(atmel_ports_in_use, ATMEL_MAX_UART);
#ifdef SUPPORT_SYSRQ
static struct console atmel_console;
if (ret < 0)
/* port id not found in platform data nor device-tree aliases:
* auto-enumerate it */
- ret = find_first_zero_bit(&atmel_ports_in_use,
- sizeof(atmel_ports_in_use));
+ ret = find_first_zero_bit(atmel_ports_in_use, ATMEL_MAX_UART);
- if (ret > ATMEL_MAX_UART) {
+ if (ret >= ATMEL_MAX_UART) {
ret = -ENODEV;
goto err;
}
- if (test_and_set_bit(ret, &atmel_ports_in_use)) {
+ if (test_and_set_bit(ret, atmel_ports_in_use)) {
/* port already in use */
ret = -EBUSY;
goto err;
/* "port" is allocated statically, so we shouldn't free it */
- clear_bit(port->line, &atmel_ports_in_use);
+ clear_bit(port->line, atmel_ports_in_use);
clk_put(atmel_port->clk);
serial_out(up, UART_MCR, up->mcr | UART_MCR_TCRTLR);
/* FIFO ENABLE, DMA MODE */
+ up->scr |= OMAP_UART_SCR_RX_TRIG_GRANU1_MASK;
+ /*
+ * NOTE: Setting OMAP_UART_SCR_RX_TRIG_GRANU1_MASK
+ * enables a granularity of 1 for the RX TRIGGER
+ * level. Together with the RX FIFO trigger level
+ * set to 1 below (documented as "16 characters")
+ * and TLR[3:0] set to zero, this results in an RX
+ * FIFO threshold of 1 character instead of the 16
+ * noted in the comment below.
+ */
+
/* Set receive FIFO threshold to 16 characters and
* transmit FIFO threshold to 16 spaces
*/
/* Receive Timeout register is enabled with value of 10 */
xuartps_writel(10, XUARTPS_RXTOUT_OFFSET);
+ /* Clear out any pending interrupts before enabling them */
+ xuartps_writel(xuartps_readl(XUARTPS_ISR_OFFSET), XUARTPS_ISR_OFFSET);
/* Set the Interrupt Registers with desired interrupts */
xuartps_writel(XUARTPS_IXR_TXEMPTY | XUARTPS_IXR_PARITY |
static struct vcs_poll_data *
vcs_poll_data_get(struct file *file)
{
- struct vcs_poll_data *poll = file->private_data;
+ struct vcs_poll_data *poll = file->private_data, *kill = NULL;
if (poll)
return poll;
file->private_data = poll;
} else {
/* someone else raced ahead of us */
- vcs_poll_data_free(poll);
+ kill = poll;
poll = file->private_data;
}
spin_unlock(&file->f_lock);
+ if (kill)
+ vcs_poll_data_free(kill);
return poll;
}
dev_dbg(&acm->control->dev, "%s\n", __func__);
- tty_unregister_device(acm_tty_driver, acm->minor);
acm_release_minor(acm);
usb_put_intf(acm->control);
kfree(acm->country_codes);
int num_rx_buf;
int i;
int combined_interfaces = 0;
+ struct device *tty_dev;
+ int rv = -ENOMEM;
/* normal quirks */
quirks = (unsigned long)id->driver_info;
usb_set_intfdata(data_interface, acm);
usb_get_intf(control_interface);
- tty_port_register_device(&acm->port, acm_tty_driver, minor,
+ tty_dev = tty_port_register_device(&acm->port, acm_tty_driver, minor,
&control_interface->dev);
+ if (IS_ERR(tty_dev)) {
+ rv = PTR_ERR(tty_dev);
+ goto alloc_fail8;
+ }
return 0;
+alloc_fail8:
+ if (acm->country_codes) {
+ device_remove_file(&acm->control->dev,
+ &dev_attr_wCountryCodes);
+ device_remove_file(&acm->control->dev,
+ &dev_attr_iCountryCodeRelDate);
+ }
+ device_remove_file(&acm->control->dev, &dev_attr_bmCapabilities);
alloc_fail7:
+ usb_set_intfdata(intf, NULL);
for (i = 0; i < ACM_NW; i++)
usb_free_urb(acm->wb[i].urb);
alloc_fail6:
acm_release_minor(acm);
kfree(acm);
alloc_fail:
- return -ENOMEM;
+ return rv;
}
static void stop_data_traffic(struct acm *acm)
stop_data_traffic(acm);
+ tty_unregister_device(acm_tty_driver, acm->minor);
+
usb_free_urb(acm->ctrlurb);
for (i = 0; i < ACM_NW; i++)
usb_free_urb(acm->wb[i].urb);
struct hc_driver *driver;
struct usb_hcd *hcd;
int retval;
+ int hcd_irq = 0;
if (usb_disabled())
return -ENODEV;
return -ENODEV;
dev->current_state = PCI_D0;
- /* The xHCI driver supports MSI and MSI-X,
- * so don't fail if the BIOS doesn't provide a legacy IRQ.
+ /*
+ * The xHCI driver has its own irq management
+ * make sure irq setup is not touched for xhci in generic hcd code
*/
- if (!dev->irq && (driver->flags & HCD_MASK) != HCD_USB3) {
- dev_err(&dev->dev,
- "Found HC with no IRQ. Check BIOS/PCI %s setup!\n",
- pci_name(dev));
- retval = -ENODEV;
- goto disable_pci;
+ if ((driver->flags & HCD_MASK) != HCD_USB3) {
+ if (!dev->irq) {
+ dev_err(&dev->dev,
+ "Found HC with no IRQ. Check BIOS/PCI %s setup!\n",
+ pci_name(dev));
+ retval = -ENODEV;
+ goto disable_pci;
+ }
+ hcd_irq = dev->irq;
}
hcd = usb_create_hcd(driver, &dev->dev, pci_name(dev));
pci_set_master(dev);
- retval = usb_add_hcd(hcd, dev->irq, IRQF_SHARED);
+ retval = usb_add_hcd(hcd, hcd_irq, IRQF_SHARED);
if (retval != 0)
goto unmap_registers;
set_hs_companion(dev, hcd);
}
EXPORT_SYMBOL_GPL(usb_hcd_is_primary_hcd);
+int usb_hcd_find_raw_port_number(struct usb_hcd *hcd, int port1)
+{
+ if (!hcd->driver->find_raw_port_number)
+ return port1;
+
+ return hcd->driver->find_raw_port_number(hcd, port1);
+}
+
static int usb_hcd_request_irqs(struct usb_hcd *hcd,
unsigned int irqnum, unsigned long irqflags)
{
{
struct usb_port *port_dev = to_usb_port(dev);
- dev_pm_qos_hide_flags(dev);
kfree(port_dev);
}
#include <linux/kernel.h>
#include <linux/acpi.h>
#include <linux/pci.h>
+#include <linux/usb/hcd.h>
#include <acpi/acpi_bus.h>
#include "usb.h"
* connected to.
*/
if (!udev->parent) {
- *handle = acpi_get_child(DEVICE_ACPI_HANDLE(&udev->dev),
+ struct usb_hcd *hcd = bus_to_hcd(udev->bus);
+ int raw_port_num;
+
+ raw_port_num = usb_hcd_find_raw_port_number(hcd,
port_num);
+ *handle = acpi_get_child(DEVICE_ACPI_HANDLE(&udev->dev),
+ raw_port_num);
if (!*handle)
return -ENODEV;
} else {
tristate "LPC32XX USB Peripheral Controller"
depends on ARCH_LPC32XX
select USB_ISP1301
+ select USB_OTG_UTILS
help
This option selects the USB device controller in the LPC32xx SoC.
static void rndis_command_complete(struct usb_ep *ep, struct usb_request *req)
{
struct f_rndis *rndis = req->context;
- struct usb_composite_dev *cdev = rndis->port.func.config->cdev;
int status;
/* received RNDIS command from USB_CDC_SEND_ENCAPSULATED_COMMAND */
// spin_lock(&dev->lock);
status = rndis_msg_parser(rndis->config, (u8 *) req->buf);
if (status < 0)
- ERROR(cdev, "RNDIS command error %d, %d/%d\n",
+ pr_err("RNDIS command error %d, %d/%d\n",
status, req->actual, req->length);
// spin_unlock(&dev->lock);
}
goto error;
gfs_dev_desc.iProduct = gfs_strings[USB_GADGET_PRODUCT_IDX].id;
- for (i = func_num; --i; ) {
+ for (i = func_num; i--; ) {
ret = functionfs_bind(ffs_tab[i].ffs_data, cdev);
if (unlikely(ret < 0)) {
while (++i < func_num)
gether_cleanup();
gfs_ether_setup = false;
- for (i = func_num; --i; )
+ for (i = func_num; i--; )
if (ffs_tab[i].ffs_data)
functionfs_unbind(ffs_tab[i].ffs_data);
};
#define DMA_ADDR_INVALID (~(dma_addr_t)0)
-#ifdef CONFIG_USB_GADGET_NET2272_DMA
+#ifdef CONFIG_USB_NET2272_DMA
/*
* use_dma: the NET2272 can use an external DMA controller.
* Note that since there is no generic DMA api, some functions,
for (i = 0; i < 4; ++i)
net2272_dequeue_all(&dev->ep[i]);
+ /* report disconnect; the driver is already quiesced */
+ if (driver) {
+ spin_unlock(&dev->lock);
+ driver->disconnect(&dev->gadget);
+ spin_lock(&dev->lock);
+ }
+
net2272_usb_reinit(dev);
}
err_func:
device_remove_file (&dev->pdev->dev, &dev_attr_function);
err_unbind:
- driver->unbind (&dev->gadget);
dev->gadget.dev.driver = NULL;
dev->driver = NULL;
return retval;
for (i = 0; i < 7; i++)
nuke (&dev->ep [i]);
+ /* report disconnect; the driver is already quiesced */
+ if (driver) {
+ spin_unlock(&dev->lock);
+ driver->disconnect(&dev->gadget);
+ spin_lock(&dev->lock);
+ }
+
usb_reinit (dev);
}
pr_debug(fmt, ##arg)
#endif /* pr_vdebug */
#else
-#ifndef pr_vdebig
+#ifndef pr_vdebug
#define pr_vdebug(fmt, arg...) \
({ if (0) pr_debug(fmt, ##arg); })
#endif /* pr_vdebug */
usb_gadget_disconnect(udc->gadget);
udc->driver->disconnect(udc->gadget);
udc->driver->unbind(udc->gadget);
- usb_gadget_udc_stop(udc->gadget, udc->driver);
+ usb_gadget_udc_stop(udc->gadget, NULL);
udc->driver = NULL;
udc->dev.driver = NULL;
static void end_unlink_async(struct ehci_hcd *ehci);
static void unlink_empty_async(struct ehci_hcd *ehci);
+static void unlink_empty_async_suspended(struct ehci_hcd *ehci);
static void ehci_work(struct ehci_hcd *ehci);
static void start_unlink_intr(struct ehci_hcd *ehci, struct ehci_qh *qh);
static void end_unlink_intr(struct ehci_hcd *ehci, struct ehci_qh *qh);
ehci->rh_state = EHCI_RH_SUSPENDED;
end_unlink_async(ehci);
- unlink_empty_async(ehci);
+ unlink_empty_async_suspended(ehci);
ehci_handle_intr_unlinks(ehci);
end_free_itds(ehci);
}
}
+/* The root hub is suspended; unlink all the async QHs */
+static void unlink_empty_async_suspended(struct ehci_hcd *ehci)
+{
+ struct ehci_qh *qh;
+
+ while (ehci->async->qh_next.qh) {
+ qh = ehci->async->qh_next.qh;
+ WARN_ON(!list_empty(&qh->qtd_list));
+ single_unlink_async(ehci, qh);
+ }
+ start_iaa_cycle(ehci, false);
+}
+
/* makes sure the async qh will become idle */
/* caller must own ehci->lock */
memset (itd, 0, sizeof *itd);
itd->itd_dma = itd_dma;
+ itd->frame = 9999; /* an invalid value */
list_add (&itd->itd_list, &sched->td_list);
}
spin_unlock_irqrestore (&ehci->lock, flags);
memset (sitd, 0, sizeof *sitd);
sitd->sitd_dma = sitd_dma;
+ sitd->frame = 9999; /* an invalid value */
list_add (&sitd->sitd_list, &iso_sched->td_list);
}
* (a) SMP races against real IAA firing and retriggering, and
* (b) clean HC shutdown, when IAA watchdog was pending.
*/
- if (ehci->async_iaa) {
+ if (1) {
u32 cmd, status;
/* If we get here, IAA is *REALLY* late. It's barely
* is attached to (or the roothub port its ancestor hub is attached to). All we
* know is the index of that port under either the USB 2.0 or the USB 3.0
* roothub, but that doesn't give us the real index into the HW port status
- * registers. Scan through the xHCI roothub port array, looking for the Nth
- * entry of the correct port speed. Return the port number of that entry.
+ * registers. Call xhci_find_raw_port_number() to get the real index.
*/
static u32 xhci_find_real_port_number(struct xhci_hcd *xhci,
struct usb_device *udev)
{
struct usb_device *top_dev;
- unsigned int num_similar_speed_ports;
- unsigned int faked_port_num;
- int i;
+ struct usb_hcd *hcd;
+
+ if (udev->speed == USB_SPEED_SUPER)
+ hcd = xhci->shared_hcd;
+ else
+ hcd = xhci->main_hcd;
for (top_dev = udev; top_dev->parent && top_dev->parent->parent;
top_dev = top_dev->parent)
/* Found device below root hub */;
- faked_port_num = top_dev->portnum;
- for (i = 0, num_similar_speed_ports = 0;
- i < HCS_MAX_PORTS(xhci->hcs_params1); i++) {
- u8 port_speed = xhci->port_array[i];
-
- /*
- * Skip ports that don't have known speeds, or have duplicate
- * Extended Capabilities port speed entries.
- */
- if (port_speed == 0 || port_speed == DUPLICATE_ENTRY)
- continue;
- /*
- * USB 3.0 ports are always under a USB 3.0 hub. USB 2.0 and
- * 1.1 ports are under the USB 2.0 hub. If the port speed
- * matches the device speed, it's a similar speed port.
- */
- if ((port_speed == 0x03) == (udev->speed == USB_SPEED_SUPER))
- num_similar_speed_ports++;
- if (num_similar_speed_ports == faked_port_num)
- /* Roothub ports are numbered from 1 to N */
- return i+1;
- }
- return 0;
+ return xhci_find_raw_port_number(hcd, top_dev->portnum);
}
/* Setup an xHCI virtual device for a Set Address command */
.set_usb2_hw_lpm = xhci_set_usb2_hardware_lpm,
.enable_usb3_lpm_timeout = xhci_enable_usb3_lpm_timeout,
.disable_usb3_lpm_timeout = xhci_disable_usb3_lpm_timeout,
+ .find_raw_port_number = xhci_find_raw_port_number,
};
/*-------------------------------------------------------------------------*/
max_ports = HCS_MAX_PORTS(xhci->hcs_params1);
if ((port_id <= 0) || (port_id > max_ports)) {
xhci_warn(xhci, "Invalid port id %d\n", port_id);
- bogus_port_status = true;
- goto cleanup;
+ inc_deq(xhci, xhci->event_ring);
+ return;
}
/* Figure out which usb_hcd this port is attached to:
* is it a USB 3.0 port or a USB 2.0/1.1 port?
*/
major_revision = xhci->port_array[port_id - 1];
+
+ /* Find the right roothub. */
+ hcd = xhci_to_hcd(xhci);
+ if ((major_revision == 0x03) != (hcd->speed == HCD_USB3))
+ hcd = xhci->shared_hcd;
+
if (major_revision == 0) {
xhci_warn(xhci, "Event for port %u not in "
"Extended Capabilities, ignoring.\n",
* into the index into the ports on the correct split roothub, and the
* correct bus_state structure.
*/
- /* Find the right roothub. */
- hcd = xhci_to_hcd(xhci);
- if ((major_revision == 0x03) != (hcd->speed == HCD_USB3))
- hcd = xhci->shared_hcd;
bus_state = &xhci->bus_state[hcd_index(hcd)];
if (hcd->speed == HCD_USB3)
port_array = xhci->usb3_ports;
if (event_trb != ep_ring->dequeue &&
event_trb != td->last_trb)
td->urb->actual_length =
- td->urb->transfer_buffer_length
- - TRB_LEN(le32_to_cpu(event->transfer_len));
+ td->urb->transfer_buffer_length -
+ EVENT_TRB_LEN(le32_to_cpu(event->transfer_len));
else
td->urb->actual_length = 0;
/* Maybe the event was for the data stage? */
td->urb->actual_length =
td->urb->transfer_buffer_length -
- TRB_LEN(le32_to_cpu(event->transfer_len));
+ EVENT_TRB_LEN(le32_to_cpu(event->transfer_len));
xhci_dbg(xhci, "Waiting for status "
"stage event\n");
return 0;
/* handle completion code */
switch (trb_comp_code) {
case COMP_SUCCESS:
- if (TRB_LEN(le32_to_cpu(event->transfer_len)) == 0) {
+ if (EVENT_TRB_LEN(le32_to_cpu(event->transfer_len)) == 0) {
frame->status = 0;
break;
}
len += TRB_LEN(le32_to_cpu(cur_trb->generic.field[2]));
}
len += TRB_LEN(le32_to_cpu(cur_trb->generic.field[2])) -
- TRB_LEN(le32_to_cpu(event->transfer_len));
+ EVENT_TRB_LEN(le32_to_cpu(event->transfer_len));
if (trb_comp_code != COMP_STOP_INVAL) {
frame->actual_length = len;
case COMP_SUCCESS:
/* Double check that the HW transferred everything. */
if (event_trb != td->last_trb ||
- TRB_LEN(le32_to_cpu(event->transfer_len)) != 0) {
+ EVENT_TRB_LEN(le32_to_cpu(event->transfer_len)) != 0) {
xhci_warn(xhci, "WARN Successful completion "
"on short TX\n");
if (td->urb->transfer_flags & URB_SHORT_NOT_OK)
"%d bytes untransferred\n",
td->urb->ep->desc.bEndpointAddress,
td->urb->transfer_buffer_length,
- TRB_LEN(le32_to_cpu(event->transfer_len)));
+ EVENT_TRB_LEN(le32_to_cpu(event->transfer_len)));
/* Fast path - was this the last TRB in the TD for this URB? */
if (event_trb == td->last_trb) {
- if (TRB_LEN(le32_to_cpu(event->transfer_len)) != 0) {
+ if (EVENT_TRB_LEN(le32_to_cpu(event->transfer_len)) != 0) {
td->urb->actual_length =
td->urb->transfer_buffer_length -
- TRB_LEN(le32_to_cpu(event->transfer_len));
+ EVENT_TRB_LEN(le32_to_cpu(event->transfer_len));
if (td->urb->transfer_buffer_length <
td->urb->actual_length) {
xhci_warn(xhci, "HC gave bad length "
"of %d bytes left\n",
- TRB_LEN(le32_to_cpu(event->transfer_len)));
+ EVENT_TRB_LEN(le32_to_cpu(event->transfer_len)));
td->urb->actual_length = 0;
if (td->urb->transfer_flags & URB_SHORT_NOT_OK)
*status = -EREMOTEIO;
if (trb_comp_code != COMP_STOP_INVAL)
td->urb->actual_length +=
TRB_LEN(le32_to_cpu(cur_trb->generic.field[2])) -
- TRB_LEN(le32_to_cpu(event->transfer_len));
+ EVENT_TRB_LEN(le32_to_cpu(event->transfer_len));
}
return finish_td(xhci, td, event_trb, event, ep, status, false);
* transfer type
*/
case COMP_SUCCESS:
- if (TRB_LEN(le32_to_cpu(event->transfer_len)) == 0)
+ if (EVENT_TRB_LEN(le32_to_cpu(event->transfer_len)) == 0)
break;
if (xhci->quirks & XHCI_TRUST_TX_LENGTH)
trb_comp_code = COMP_SHORT_TX;
* TD list.
*/
if (list_empty(&ep_ring->td_list)) {
- xhci_warn(xhci, "WARN Event TRB for slot %d ep %d "
- "with no TDs queued?\n",
- TRB_TO_SLOT_ID(le32_to_cpu(event->flags)),
- ep_index);
- xhci_dbg(xhci, "Event TRB with TRB type ID %u\n",
- (le32_to_cpu(event->flags) &
- TRB_TYPE_BITMASK)>>10);
- xhci_print_trb_offsets(xhci, (union xhci_trb *) event);
+ /*
+ * A stopped endpoint may generate an extra completion
+ * event if the device was suspended. Don't print
+ * warnings.
+ */
+ if (!(trb_comp_code == COMP_STOP ||
+ trb_comp_code == COMP_STOP_INVAL)) {
+ xhci_warn(xhci, "WARN Event TRB for slot %d ep %d with no TDs queued?\n",
+ TRB_TO_SLOT_ID(le32_to_cpu(event->flags)),
+ ep_index);
+ xhci_dbg(xhci, "Event TRB with TRB type ID %u\n",
+ (le32_to_cpu(event->flags) &
+ TRB_TYPE_BITMASK)>>10);
+ xhci_print_trb_offsets(xhci, (union xhci_trb *) event);
+ }
if (ep->skip) {
ep->skip = false;
xhci_dbg(xhci, "td_list is empty while skip "
* generate interrupts. Don't even try to enable MSI.
*/
if (xhci->quirks & XHCI_BROKEN_MSI)
- return 0;
+ goto legacy_irq;
/* unregister the legacy interrupt */
if (hcd->irq)
return -EINVAL;
}
+ legacy_irq:
/* fall back to legacy interrupt*/
ret = request_irq(pdev->irq, &usb_hcd_irq, IRQF_SHARED,
hcd->irq_descr, hcd);
return 0;
}
+/*
+ * Translate the port index into the real index in the HW port status
+ * registers. Calculate the offset between the port's PORTSC register
+ * and the port status base, then divide by the number of per-port
+ * registers to get the real index. Raw port numbers are 1-based.
+ */
+int xhci_find_raw_port_number(struct usb_hcd *hcd, int port1)
+{
+ struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+ __le32 __iomem *base_addr = &xhci->op_regs->port_status_base;
+ __le32 __iomem *addr;
+ int raw_port;
+
+ if (hcd->speed != HCD_USB3)
+ addr = xhci->usb2_ports[port1 - 1];
+ else
+ addr = xhci->usb3_ports[port1 - 1];
+
+ raw_port = (addr - base_addr)/NUM_PORT_REGS + 1;
+ return raw_port;
+}
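[Editorial aside: a worked instance of that arithmetic, under an invented port layout. Suppose the controller exposes one USB3 port followed by two USB2 ports, so xhci->usb2_ports[0] points one register block past port_status_base. Then for the USB2 roothub's port 1:]

	/*
	 *	addr     = base_addr + 1 * NUM_PORT_REGS;
	 *	raw_port = (addr - base_addr) / NUM_PORT_REGS + 1
	 *	         = 1 + 1
	 *	         = 2
	 *
	 * i.e. USB2 roothub port 1 maps to raw (hardware) port 2.
	 */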
+
#ifdef CONFIG_USB_SUSPEND
/* BESL to HIRD Encoding array for USB2 LPM */
/* bits 12:31 are reserved (and should be preserved on writes). */
/* IMAN - Interrupt Management Register */
-#define IMAN_IP (1 << 1)
-#define IMAN_IE (1 << 0)
+#define IMAN_IE (1 << 1)
+#define IMAN_IP (1 << 0)
/* USBSTS - USB status - status bitmasks */
/* HC not running - set to 1 when run/stop bit is cleared. */
__le32 flags;
};
+/* Transfer event TRB length bit mask */
+/* bits 0:23 */
+#define EVENT_TRB_LEN(p) ((p) & 0xffffff)
+
/** Transfer Event bit fields **/
#define TRB_TO_EP_ID(p) (((p) >> 16) & 0x1f)
int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue, u16 wIndex,
char *buf, u16 wLength);
int xhci_hub_status_data(struct usb_hcd *hcd, char *buf);
+int xhci_find_raw_port_number(struct usb_hcd *hcd, int port1);
#ifdef CONFIG_PM
int xhci_bus_suspend(struct usb_hcd *hcd);
u8 devctl = musb_readb(mregs, MUSB_DEVCTL);
int err;
- err = musb->int_usb & USB_INTR_VBUSERROR;
+ err = musb->int_usb & MUSB_INTR_VBUSERROR;
if (err) {
/*
* The Mentor core doesn't debounce VBUS as needed
static inline void unmap_dma_buffer(struct musb_request *request,
struct musb *musb)
{
- if (!is_buffer_mapped(request))
+ struct musb_ep *musb_ep = request->ep;
+
+ if (!is_buffer_mapped(request) || !musb_ep->dma)
return;
if (request->request.dma == DMA_ADDR_INVALID) {
ep->busy = 1;
spin_unlock(&musb->lock);
- unmap_dma_buffer(req, musb);
+
+ if (!dma_mapping_error(&musb->g.dev, request->dma))
+ unmap_dma_buffer(req, musb);
+
if (request->status == 0)
dev_dbg(musb->controller, "%s done request %p, %d/%d\n",
ep->end_point.name, request,
tristate "NXP ISP1301 USB transceiver support"
depends on USB || USB_GADGET
depends on I2C
+ select USB_OTG_UTILS
help
Say Y here to add support for the NXP ISP1301 USB transceiver driver.
This chip is typically used as USB transceiver for USB host, gadget
}
struct ark3116_private {
- wait_queue_head_t delta_msr_wait;
struct async_icount icount;
int irda; /* 1 for irda device */
if (!priv)
return -ENOMEM;
- init_waitqueue_head(&priv->delta_msr_wait);
mutex_init(&priv->hw_lock);
spin_lock_init(&priv->status_lock);
case TIOCMIWAIT:
for (;;) {
struct async_icount prev = priv->icount;
- interruptible_sleep_on(&priv->delta_msr_wait);
+ interruptible_sleep_on(&port->delta_msr_wait);
/* see if a signal did it */
if (signal_pending(current))
return -ERESTARTSYS;
+
+ if (port->serial->disconnected)
+ return -EIO;
+
if ((prev.rng == priv->icount.rng) &&
(prev.dsr == priv->icount.dsr) &&
(prev.dcd == priv->icount.dcd) &&
priv->icount.dcd++;
if (msr & UART_MSR_TERI)
priv->icount.rng++;
- wake_up_interruptible(&priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
}
}
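[Editorial aside: the ark3116 hunks above are the template for the rest of this series. Every driver's TIOCMIWAIT loop moves from a private wait queue onto the tty-layer port->delta_msr_wait and gains a disconnect check, so waiters are woken and fail with -EIO when the device is unplugged instead of sleeping forever. A condensed sketch of the resulting pattern, with struct example_private standing in for each driver's private state:]

	/* Hedged sketch of the post-series TIOCMIWAIT loop; example_private
	 * is hypothetical, the logic mirrors the ark3116 hunk above. */
	static int example_tiocmiwait(struct usb_serial_port *port,
				      struct example_private *priv)
	{
		for (;;) {
			struct async_icount prev = priv->icount;

			interruptible_sleep_on(&port->delta_msr_wait);

			if (signal_pending(current))	/* woken by a signal */
				return -ERESTARTSYS;

			if (port->serial->disconnected)	/* device unplugged */
				return -EIO;

			if (prev.rng != priv->icount.rng ||
			    prev.dsr != priv->icount.dsr ||
			    prev.dcd != priv->icount.dcd ||
			    prev.cts != priv->icount.cts)
				return 0;	/* a modem line changed */
		}
	}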
struct ch341_private {
spinlock_t lock; /* access lock */
- wait_queue_head_t delta_msr_wait; /* wait queue for modem status */
unsigned baud_rate; /* set baud rate */
u8 line_control; /* set line control value RTS/DTR */
u8 line_status; /* active status of modem control inputs */
return -ENOMEM;
spin_lock_init(&priv->lock);
- init_waitqueue_head(&priv->delta_msr_wait);
priv->baud_rate = DEFAULT_BAUD_RATE;
priv->line_control = CH341_BIT_RTS | CH341_BIT_DTR;
priv->line_control &= ~(CH341_BIT_RTS | CH341_BIT_DTR);
spin_unlock_irqrestore(&priv->lock, flags);
ch341_set_handshake(port->serial->dev, priv->line_control);
- wake_up_interruptible(&priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
}
static void ch341_close(struct usb_serial_port *port)
tty_kref_put(tty);
}
- wake_up_interruptible(&priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
}
exit:
spin_unlock_irqrestore(&priv->lock, flags);
while (!multi_change) {
- interruptible_sleep_on(&priv->delta_msr_wait);
+ interruptible_sleep_on(&port->delta_msr_wait);
/* see if a signal did it */
if (signal_pending(current))
return -ERESTARTSYS;
+ if (port->serial->disconnected)
+ return -EIO;
+
spin_lock_irqsave(&priv->lock, flags);
status = priv->line_status;
multi_change = priv->multi_status_change;
int baud_rate; /* stores current baud rate in
integer form */
int isthrottled; /* if throttled, discard reads */
- wait_queue_head_t delta_msr_wait; /* used for TIOCMIWAIT */
char prev_status, diff_status; /* used for TIOCMIWAIT */
/* we pass a pointer to this as the argument sent to
cypress_set_termios old_termios */
kfree(priv);
return -ENOMEM;
}
- init_waitqueue_head(&priv->delta_msr_wait);
usb_reset_configuration(serial->dev);
switch (cmd) {
/* This code comes from drivers/char/serial.c and ftdi_sio.c */
case TIOCMIWAIT:
- while (priv != NULL) {
- interruptible_sleep_on(&priv->delta_msr_wait);
+ for (;;) {
+ interruptible_sleep_on(&port->delta_msr_wait);
/* see if a signal did it */
if (signal_pending(current))
return -ERESTARTSYS;
- else {
+
+ if (port->serial->disconnected)
+ return -EIO;
+
+ {
char diff = priv->diff_status;
if (diff == 0)
return -EIO; /* no change => error */
if (priv->current_status != priv->prev_status) {
priv->diff_status |= priv->current_status ^
priv->prev_status;
- wake_up_interruptible(&priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
priv->prev_status = priv->current_status;
}
spin_unlock_irqrestore(&priv->lock, flags);
struct f81232_private {
spinlock_t lock;
- wait_queue_head_t delta_msr_wait;
u8 line_control;
u8 line_status;
};
line_status = priv->line_status;
priv->line_status &= ~UART_STATE_TRANSIENT_MASK;
spin_unlock_irqrestore(&priv->lock, flags);
- wake_up_interruptible(&priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
if (!urb->actual_length)
return;
spin_unlock_irqrestore(&priv->lock, flags);
while (1) {
- interruptible_sleep_on(&priv->delta_msr_wait);
+ interruptible_sleep_on(&port->delta_msr_wait);
/* see if a signal did it */
if (signal_pending(current))
return -ERESTARTSYS;
+ if (port->serial->disconnected)
+ return -EIO;
+
spin_lock_irqsave(&priv->lock, flags);
status = priv->line_status;
spin_unlock_irqrestore(&priv->lock, flags);
return -ENOMEM;
spin_lock_init(&priv->lock);
- init_waitqueue_head(&priv->delta_msr_wait);
usb_set_serial_port_data(port, priv);
int flags; /* some ASYNC_xxxx flags are supported */
unsigned long last_dtr_rts; /* saved modem control outputs */
struct async_icount icount;
- wait_queue_head_t delta_msr_wait; /* Used for TIOCMIWAIT */
char prev_status; /* Used for TIOCMIWAIT */
- bool dev_gone; /* Used to abort TIOCMIWAIT */
char transmit_empty; /* If transmitter is empty or not */
__u16 interface; /* FT2232C, FT2232H or FT4232H port interface
(0 for FT232/245) */
{ USB_DEVICE(FTDI_VID, FTDI_RM_CANVIEW_PID) },
{ USB_DEVICE(ACTON_VID, ACTON_SPECTRAPRO_PID) },
{ USB_DEVICE(CONTEC_VID, CONTEC_COM1USBH_PID) },
+ { USB_DEVICE(MITSUBISHI_VID, MITSUBISHI_FXUSB_PID) },
{ USB_DEVICE(BANDB_VID, BANDB_USOTL4_PID) },
{ USB_DEVICE(BANDB_VID, BANDB_USTL4_PID) },
{ USB_DEVICE(BANDB_VID, BANDB_USO9ML2_PID) },
kref_init(&priv->kref);
mutex_init(&priv->cfg_lock);
- init_waitqueue_head(&priv->delta_msr_wait);
priv->flags = ASYNC_LOW_LATENCY;
- priv->dev_gone = false;
if (quirk && quirk->port_probe)
quirk->port_probe(priv);
{
struct ftdi_private *priv = usb_get_serial_port_data(port);
- priv->dev_gone = true;
- wake_up_interruptible_all(&priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
remove_sysfs_attrs(port);
if (diff_status & FTDI_RS0_RLSD)
priv->icount.dcd++;
- wake_up_interruptible_all(&priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
priv->prev_status = status;
}
*/
case TIOCMIWAIT:
cprev = priv->icount;
- while (!priv->dev_gone) {
- interruptible_sleep_on(&priv->delta_msr_wait);
+ for (;;) {
+ interruptible_sleep_on(&port->delta_msr_wait);
/* see if a signal did it */
if (signal_pending(current))
return -ERESTARTSYS;
+
+ if (port->serial->disconnected)
+ return -EIO;
+
cnow = priv->icount;
if (((arg & TIOCM_RNG) && (cnow.rng != cprev.rng)) ||
((arg & TIOCM_DSR) && (cnow.dsr != cprev.dsr)) ||
}
cprev = cnow;
}
- return -EIO;
- break;
case TIOCSERGETLSR:
return get_lsr_info(port, (struct serial_struct __user *)arg);
break;
#define CONTEC_VID 0x06CE /* Vendor ID */
#define CONTEC_COM1USBH_PID 0x8311 /* COM-1(USB)H */
+/*
+ * Mitsubishi Electric Corp. (http://www.meau.com)
+ * Submitted by Konstantin Holoborodko
+ */
+#define MITSUBISHI_VID 0x06D3
+#define MITSUBISHI_FXUSB_PID 0x0284 /* USB/RS422 converters: FX-USB-AW/-BD */
+
/*
* Definitions for B&B Electronics products.
*/
if (!serial)
return;
- mutex_lock(&port->serial->disc_mutex);
-
- if (!port->serial->disconnected)
- garmin_clear(garmin_data_p);
+ garmin_clear(garmin_data_p);
/* shutdown our urbs */
usb_kill_urb(port->read_urb);
/* keep reset state so we know that we must start a new session */
if (garmin_data_p->state != STATE_RESET)
garmin_data_p->state = STATE_DISCONNECTED;
-
- mutex_unlock(&port->serial->disc_mutex);
}
wait_queue_head_t wait_chase; /* for handling sleeping while waiting for chase to finish */
wait_queue_head_t wait_open; /* for handling sleeping while waiting for open to finish */
wait_queue_head_t wait_command; /* for handling sleeping while waiting for command to finish */
- wait_queue_head_t delta_msr_wait; /* for handling sleeping while waiting for msr change to happen */
struct async_icount icount;
struct usb_serial_port *port; /* loop back to the owner of this object */
/* initialize our wait queues */
init_waitqueue_head(&edge_port->wait_open);
init_waitqueue_head(&edge_port->wait_chase);
- init_waitqueue_head(&edge_port->delta_msr_wait);
init_waitqueue_head(&edge_port->wait_command);
/* initialize our icount structure */
dev_dbg(&port->dev, "%s (%d) TIOCMIWAIT\n", __func__, port->number);
cprev = edge_port->icount;
while (1) {
- prepare_to_wait(&edge_port->delta_msr_wait,
+ prepare_to_wait(&port->delta_msr_wait,
&wait, TASK_INTERRUPTIBLE);
schedule();
- finish_wait(&edge_port->delta_msr_wait, &wait);
+ finish_wait(&port->delta_msr_wait, &wait);
/* see if a signal did it */
if (signal_pending(current))
return -ERESTARTSYS;
+
+ if (port->serial->disconnected)
+ return -EIO;
+
cnow = edge_port->icount;
if (cnow.rng == cprev.rng && cnow.dsr == cprev.dsr &&
cnow.dcd == cprev.dcd && cnow.cts == cprev.cts)
icount->dcd++;
if (newMsr & EDGEPORT_MSR_DELTA_RI)
icount->rng++;
- wake_up_interruptible(&edge_port->delta_msr_wait);
+ wake_up_interruptible(&edge_port->port->delta_msr_wait);
}
/* Save the new modem status */
int close_pending;
int lsr_event;
struct async_icount icount;
- wait_queue_head_t delta_msr_wait; /* for handling sleeping while
- waiting for msr change to
- happen */
struct edgeport_serial *edge_serial;
struct usb_serial_port *port;
__u8 bUartMode; /* Port type, 0: RS232, etc. */
icount->dcd++;
if (msr & EDGEPORT_MSR_DELTA_RI)
icount->rng++;
- wake_up_interruptible(&edge_port->delta_msr_wait);
+ wake_up_interruptible(&edge_port->port->delta_msr_wait);
}
/* Save the new modem status */
dev = port->serial->dev;
memset(&(edge_port->icount), 0x00, sizeof(edge_port->icount));
- init_waitqueue_head(&edge_port->delta_msr_wait);
/* turn off loopback */
status = ti_do_config(edge_port, UMPC_SET_CLR_LOOPBACK, 0);
dev_dbg(&port->dev, "%s - TIOCMIWAIT\n", __func__);
cprev = edge_port->icount;
while (1) {
- interruptible_sleep_on(&edge_port->delta_msr_wait);
+ interruptible_sleep_on(&port->delta_msr_wait);
/* see if a signal did it */
if (signal_pending(current))
return -ERESTARTSYS;
+
+ if (port->serial->disconnected)
+ return -EIO;
+
cnow = edge_port->icount;
if (cnow.rng == cprev.rng && cnow.dsr == cprev.dsr &&
cnow.dcd == cprev.dcd && cnow.cts == cprev.cts)
.set_termios = edge_set_termios,
.tiocmget = edge_tiocmget,
.tiocmset = edge_tiocmset,
+ .get_icount = edge_get_icount,
.write = edge_write,
.write_room = edge_write_room,
.chars_in_buffer = edge_chars_in_buffer,
unsigned char last_msr; /* Modem Status Register */
unsigned int rx_flags; /* Throttling flags */
struct async_icount icount;
- wait_queue_head_t msr_wait; /* for handling sleeping while waiting
- for msr change to happen */
};
#define THROTTLED 0x01
return -ENOMEM;
spin_lock_init(&priv->lock);
- init_waitqueue_head(&priv->msr_wait);
usb_set_serial_port_data(port, priv);
tty_kref_put(tty);
}
#endif
- wake_up_interruptible(&priv->msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
spin_unlock_irqrestore(&priv->lock, flags);
exit:
retval = usb_submit_urb(urb, GFP_ATOMIC);
cprev = mct_u232_port->icount;
spin_unlock_irqrestore(&mct_u232_port->lock, flags);
for ( ; ; ) {
- prepare_to_wait(&mct_u232_port->msr_wait,
+ prepare_to_wait(&port->delta_msr_wait,
&wait, TASK_INTERRUPTIBLE);
schedule();
- finish_wait(&mct_u232_port->msr_wait, &wait);
+ finish_wait(&port->delta_msr_wait, &wait);
/* see if a signal did it */
if (signal_pending(current))
return -ERESTARTSYS;
+
+ if (port->serial->disconnected)
+ return -EIO;
+
spin_lock_irqsave(&mct_u232_port->lock, flags);
cnow = mct_u232_port->icount;
spin_unlock_irqrestore(&mct_u232_port->lock, flags);
char open;
char open_ports;
wait_queue_head_t wait_chase; /* for handling sleeping while waiting for chase to finish */
- wait_queue_head_t delta_msr_wait; /* for handling sleeping while waiting for msr change to happen */
int delta_msr_cond;
struct async_icount icount;
struct usb_serial_port *port; /* loop back to the owner of this object */
icount->rng++;
smp_wmb();
}
+
+ mos7840_port->delta_msr_cond = 1;
+ wake_up_interruptible(&port->port->delta_msr_wait);
}
}
/* initialize our wait queues */
init_waitqueue_head(&mos7840_port->wait_chase);
- init_waitqueue_head(&mos7840_port->delta_msr_wait);
/* initialize our icount structure */
memset(&(mos7840_port->icount), 0x00, sizeof(mos7840_port->icount));
mos7840_port->read_urb_busy = false;
}
}
- wake_up(&mos7840_port->delta_msr_wait);
- mos7840_port->delta_msr_cond = 1;
dev_dbg(&port->dev, "%s - mos7840_port->shadowLCR is End %x\n", __func__,
mos7840_port->shadowLCR);
}
while (1) {
/* interruptible_sleep_on(&mos7840_port->delta_msr_wait); */
mos7840_port->delta_msr_cond = 0;
- wait_event_interruptible(mos7840_port->delta_msr_wait,
- (mos7840_port->
+ wait_event_interruptible(port->delta_msr_wait,
+ (port->serial->disconnected ||
+ mos7840_port->
delta_msr_cond == 1));
/* see if a signal did it */
if (signal_pending(current))
return -ERESTARTSYS;
+
+ if (port->serial->disconnected)
+ return -EIO;
+
cnow = mos7840_port->icount;
smp_rmb();
if (cnow.rng == cprev.rng && cnow.dsr == cprev.dsr &&
u8 setup_done;
struct delayed_work delayed_setup_work;
- wait_queue_head_t intr_wait;
struct usb_serial_port *port; /* USB port with which associated */
};
return -ENOMEM;
spin_lock_init(&priv->lock);
- init_waitqueue_head(&priv->intr_wait);
priv->port = port;
INIT_DELAYED_WORK(&priv->delayed_setup_work, setup_line);
INIT_DELAYED_WORK(&priv->delayed_write_work, send_data);
spin_unlock_irqrestore(&priv->lock, flags);
while (1) {
- wait_event_interruptible(priv->intr_wait,
+ wait_event_interruptible(port->delta_msr_wait,
+ port->serial->disconnected ||
priv->status.pin_state != prev);
if (signal_pending(current))
return -ERESTARTSYS;
+ if (port->serial->disconnected)
+ return -EIO;
+
spin_lock_irqsave(&priv->lock, flags);
status = priv->status.pin_state & PIN_MASK;
spin_unlock_irqrestore(&priv->lock, flags);
if (!priv->transient) {
if (xs->pin_state != priv->status.pin_state)
- wake_up_interruptible(&priv->intr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
memcpy(&priv->status, xs, OTI6858_CTRL_PKT_SIZE);
}
struct pl2303_private {
spinlock_t lock;
- wait_queue_head_t delta_msr_wait;
u8 line_control;
u8 line_status;
};
return -ENOMEM;
spin_lock_init(&priv->lock);
- init_waitqueue_head(&priv->delta_msr_wait);
usb_set_serial_port_data(port, priv);
spin_unlock_irqrestore(&priv->lock, flags);
while (1) {
- interruptible_sleep_on(&priv->delta_msr_wait);
+ interruptible_sleep_on(&port->delta_msr_wait);
/* see if a signal did it */
if (signal_pending(current))
return -ERESTARTSYS;
+ if (port->serial->disconnected)
+ return -EIO;
+
spin_lock_irqsave(&priv->lock, flags);
status = priv->line_status;
spin_unlock_irqrestore(&priv->lock, flags);
spin_unlock_irqrestore(&priv->lock, flags);
if (priv->line_status & UART_BREAK_ERROR)
usb_serial_handle_break(port);
- wake_up_interruptible(&priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
tty = tty_port_tty_get(&port->port);
if (!tty)
line_status = priv->line_status;
priv->line_status &= ~UART_STATE_TRANSIENT_MASK;
spin_unlock_irqrestore(&priv->lock, flags);
- wake_up_interruptible(&priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
if (!urb->actual_length)
return;
u8 shadowLSR;
u8 shadowMSR;
- wait_queue_head_t delta_msr_wait; /* Used for TIOCMIWAIT */
struct async_icount icount;
struct usb_serial_port *port;
spin_unlock_irqrestore(&priv->lock, flags);
while (1) {
- wait_event_interruptible(priv->delta_msr_wait,
- ((priv->icount.rng != prev.rng) ||
+ wait_event_interruptible(port->delta_msr_wait,
+ (port->serial->disconnected ||
+ (priv->icount.rng != prev.rng) ||
(priv->icount.dsr != prev.dsr) ||
(priv->icount.dcd != prev.dcd) ||
(priv->icount.cts != prev.cts)));
if (signal_pending(current))
return -ERESTARTSYS;
+ if (port->serial->disconnected)
+ return -EIO;
+
spin_lock_irqsave(&priv->lock, flags);
cur = priv->icount;
spin_unlock_irqrestore(&priv->lock, flags);
spin_lock_init(&port_priv->lock);
spin_lock_init(&port_priv->urb_lock);
- init_waitqueue_head(&port_priv->delta_msr_wait);
port_priv->port = port;
port_priv->write_urb = usb_alloc_urb(0, GFP_KERNEL);
if (newMSR & UART_MSR_TERI)
port_priv->icount.rng++;
- wake_up_interruptible(&port_priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
}
}
struct spcp8x5_private {
spinlock_t lock;
enum spcp8x5_type type;
- wait_queue_head_t delta_msr_wait;
u8 line_control;
u8 line_status;
};
return -ENOMEM;
spin_lock_init(&priv->lock);
- init_waitqueue_head(&priv->delta_msr_wait);
priv->type = type;
usb_set_serial_port_data(port , priv);
priv->line_status &= ~UART_STATE_TRANSIENT_MASK;
spin_unlock_irqrestore(&priv->lock, flags);
/* wake up the wait for termios */
- wake_up_interruptible(&priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
if (!urb->actual_length)
return;
while (1) {
/* wake up in bulk read */
- interruptible_sleep_on(&priv->delta_msr_wait);
+ interruptible_sleep_on(&port->delta_msr_wait);
/* see if a signal did it */
if (signal_pending(current))
return -ERESTARTSYS;
+ if (port->serial->disconnected)
+ return -EIO;
+
spin_lock_irqsave(&priv->lock, flags);
status = priv->line_status;
spin_unlock_irqrestore(&priv->lock, flags);
spinlock_t status_lock;
u8 shadowLSR;
u8 shadowMSR;
- wait_queue_head_t delta_msr_wait; /* Used for TIOCMIWAIT */
struct async_icount icount;
};
spin_unlock_irqrestore(&priv->status_lock, flags);
while (1) {
- wait_event_interruptible(priv->delta_msr_wait,
- ((priv->icount.rng != prev.rng) ||
+ wait_event_interruptible(port->delta_msr_wait,
+ (port->serial->disconnected ||
+ (priv->icount.rng != prev.rng) ||
(priv->icount.dsr != prev.dsr) ||
(priv->icount.dcd != prev.dcd) ||
(priv->icount.cts != prev.cts)));
if (signal_pending(current))
return -ERESTARTSYS;
+ if (port->serial->disconnected)
+ return -EIO;
+
spin_lock_irqsave(&priv->status_lock, flags);
cur = priv->icount;
spin_unlock_irqrestore(&priv->status_lock, flags);
return -ENOMEM;
spin_lock_init(&priv->status_lock);
- init_waitqueue_head(&priv->delta_msr_wait);
usb_set_serial_port_data(port, priv);
priv->icount.dcd++;
if (msr & UART_MSR_TERI)
priv->icount.rng++;
- wake_up_interruptible(&priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
}
}
int tp_flags;
int tp_closing_wait;/* in .01 secs */
struct async_icount tp_icount;
- wait_queue_head_t tp_msr_wait; /* wait for msr change */
wait_queue_head_t tp_write_wait;
struct ti_device *tp_tdev;
struct usb_serial_port *tp_port;
else
tport->tp_uart_base_addr = TI_UART2_BASE_ADDR;
tport->tp_closing_wait = closing_wait;
- init_waitqueue_head(&tport->tp_msr_wait);
init_waitqueue_head(&tport->tp_write_wait);
if (kfifo_alloc(&tport->write_fifo, TI_WRITE_BUF_SIZE, GFP_KERNEL)) {
kfree(tport);
dev_dbg(&port->dev, "%s - TIOCMIWAIT\n", __func__);
cprev = tport->tp_icount;
while (1) {
- interruptible_sleep_on(&tport->tp_msr_wait);
+ interruptible_sleep_on(&port->delta_msr_wait);
if (signal_pending(current))
return -ERESTARTSYS;
+
+ if (port->serial->disconnected)
+ return -EIO;
+
cnow = tport->tp_icount;
if (cnow.rng == cprev.rng && cnow.dsr == cprev.dsr &&
cnow.dcd == cprev.dcd && cnow.cts == cprev.cts)
icount->dcd++;
if (msr & TI_MSR_DELTA_RI)
icount->rng++;
- wake_up_interruptible(&tport->tp_msr_wait);
+ wake_up_interruptible(&tport->tp_port->delta_msr_wait);
spin_unlock_irqrestore(&tport->tp_lock, flags);
}
}
}
+ usb_put_intf(serial->interface);
usb_put_dev(serial->dev);
kfree(serial);
}
}
serial->dev = usb_get_dev(dev);
serial->type = driver;
- serial->interface = interface;
+ serial->interface = usb_get_intf(interface);
kref_init(&serial->kref);
mutex_init(&serial->disc_mutex);
serial->minor = SERIAL_TTY_NO_MINOR;
port->port.ops = &serial_port_ops;
port->serial = serial;
spin_lock_init(&port->lock);
+ init_waitqueue_head(&port->delta_msr_wait);
/* Keep this for private driver use for the moment but
should probably go away */
INIT_WORK(&port->work, usb_serial_port_work);
USB_SC_DEVICE, USB_PR_DEVICE, NULL,
US_FL_MAX_SECTORS_64 | US_FL_BULK_IGNORE_TAG),
+/* Added by Dmitry Artamonow <mad_soft@inbox.ru> */
+UNUSUAL_DEV( 0x04e8, 0x5136, 0x0000, 0x9999,
+ "Samsung",
+ "YP-Z3",
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_MAX_SECTORS_64),
+
/* Entry and supporting patch by Theodore Kilgore <kilgota@auburn.edu>.
* Device uses standards-violating 32-byte Bulk Command Block Wrappers and
* reports itself as "Proprietary SCSI Bulk." Cf. device entry 0x084d:0x0011.
if (!(hdr.flags & VFIO_IRQ_SET_DATA_NONE)) {
size_t size;
+ int max = vfio_pci_get_irq_count(vdev, hdr.index);
if (hdr.flags & VFIO_IRQ_SET_DATA_BOOL)
size = sizeof(uint8_t);
return -EINVAL;
if (hdr.argsz - minsz < hdr.count * size ||
- hdr.count > vfio_pci_get_irq_count(vdev, hdr.index))
+ hdr.start >= max || hdr.start + hdr.count > max)
return -EINVAL;
data = memdup_user((void __user *)(arg + minsz),
VHOST_SCSI_VQ_IO = 2,
};
+/*
+ * VIRTIO_RING_F_EVENT_IDX seems broken. Not sure whether the bug is
+ * in the kernel, but disabling it helps.
+ * TODO: debug and remove the workaround.
+ */
+enum {
+ VHOST_SCSI_FEATURES = VHOST_FEATURES & ~(1ULL << VIRTIO_RING_F_EVENT_IDX)
+};
+
#define VHOST_SCSI_MAX_TARGET 256
#define VHOST_SCSI_MAX_VQ 128
struct vhost_scsi {
/* Protected by vhost_scsi->dev.mutex */
- struct tcm_vhost_tpg *vs_tpg[VHOST_SCSI_MAX_TARGET];
+ struct tcm_vhost_tpg **vs_tpg;
char vs_vhost_wwpn[TRANSPORT_IQN_LEN];
- bool vs_endpoint;
struct vhost_dev dev;
struct vhost_virtqueue vqs[VHOST_SCSI_MAX_VQ];
}
}
+static void vhost_scsi_send_bad_target(struct vhost_scsi *vs,
+ struct vhost_virtqueue *vq, int head, unsigned out)
+{
+ struct virtio_scsi_cmd_resp __user *resp;
+ struct virtio_scsi_cmd_resp rsp;
+ int ret;
+
+ memset(&rsp, 0, sizeof(rsp));
+ rsp.response = VIRTIO_SCSI_S_BAD_TARGET;
+ resp = vq->iov[out].iov_base;
+ ret = __copy_to_user(resp, &rsp, sizeof(rsp));
+ if (!ret)
+ vhost_add_used_and_signal(&vs->dev, vq, head, 0);
+ else
+ pr_err("Faulted on virtio_scsi_cmd_resp\n");
+}
+
static void vhost_scsi_handle_vq(struct vhost_scsi *vs,
struct vhost_virtqueue *vq)
{
+ struct tcm_vhost_tpg **vs_tpg;
struct virtio_scsi_cmd_req v_req;
struct tcm_vhost_tpg *tv_tpg;
struct tcm_vhost_cmd *tv_cmd;
int head, ret;
u8 target;
- /* Must use ioctl VHOST_SCSI_SET_ENDPOINT */
- if (unlikely(!vs->vs_endpoint))
+ /*
+ * We can handle the vq only after the endpoint is set up by calling the
+ * VHOST_SCSI_SET_ENDPOINT ioctl.
+ *
+ * TODO: Check that we are running from vhost_worker which acts
+ * as read-side critical section for vhost kind of RCU.
+ * See the comments in struct vhost_virtqueue in drivers/vhost/vhost.h
+ */
+ vs_tpg = rcu_dereference_check(vq->private_data, 1);
+ if (!vs_tpg)
return;
mutex_lock(&vq->mutex);
/* Extract the tpgt */
target = v_req.lun[1];
- tv_tpg = vs->vs_tpg[target];
+ tv_tpg = ACCESS_ONCE(vs_tpg[target]);
/* Target does not exist, fail the request */
if (unlikely(!tv_tpg)) {
- struct virtio_scsi_cmd_resp __user *resp;
- struct virtio_scsi_cmd_resp rsp;
-
- memset(&rsp, 0, sizeof(rsp));
- rsp.response = VIRTIO_SCSI_S_BAD_TARGET;
- resp = vq->iov[out].iov_base;
- ret = __copy_to_user(resp, &rsp, sizeof(rsp));
- if (!ret)
- vhost_add_used_and_signal(&vs->dev,
- vq, head, 0);
- else
- pr_err("Faulted on virtio_scsi_cmd_resp\n");
-
+ vhost_scsi_send_bad_target(vs, vq, head, out);
continue;
}
if (IS_ERR(tv_cmd)) {
vq_err(vq, "vhost_scsi_allocate_cmd failed %ld\n",
PTR_ERR(tv_cmd));
- break;
+ goto err_cmd;
}
pr_debug("Allocated tv_cmd: %p exp_data_len: %d, data_direction"
": %d\n", tv_cmd, exp_data_len, data_direction);
tv_cmd->tvc_vhost = vs;
tv_cmd->tvc_vq = vq;
-
- if (unlikely(vq->iov[out].iov_len !=
- sizeof(struct virtio_scsi_cmd_resp))) {
- vq_err(vq, "Expecting virtio_scsi_cmd_resp, got %zu"
- " bytes, out: %d, in: %d\n",
- vq->iov[out].iov_len, out, in);
- break;
- }
-
tv_cmd->tvc_resp = vq->iov[out].iov_base;
/*
" exceeds SCSI_MAX_VARLEN_CDB_SIZE: %d\n",
scsi_command_size(tv_cmd->tvc_cdb),
TCM_VHOST_MAX_CDB_SIZE);
- break; /* TODO */
+ goto err_free;
}
tv_cmd->tvc_lun = ((v_req.lun[2] << 8) | v_req.lun[3]) & 0x3FFF;
data_direction == DMA_TO_DEVICE);
if (unlikely(ret)) {
vq_err(vq, "Failed to map iov to sgl\n");
- break; /* TODO */
+ goto err_free;
}
}
}
mutex_unlock(&vq->mutex);
+ return;
+
+err_free:
+ vhost_scsi_free_cmd(tv_cmd);
+err_cmd:
+ vhost_scsi_send_bad_target(vs, vq, head, out);
+ mutex_unlock(&vq->mutex);
}
static void vhost_scsi_ctl_handle_kick(struct vhost_work *work)
vhost_scsi_handle_vq(vs, vq);
}
+static void vhost_scsi_flush_vq(struct vhost_scsi *vs, int index)
+{
+ vhost_poll_flush(&vs->dev.vqs[index].poll);
+}
+
+static void vhost_scsi_flush(struct vhost_scsi *vs)
+{
+ int i;
+
+ for (i = 0; i < VHOST_SCSI_MAX_VQ; i++)
+ vhost_scsi_flush_vq(vs, i);
+ vhost_work_flush(&vs->dev, &vs->vs_completion_work);
+}
+
/*
* Called from vhost_scsi_ioctl() context to walk the list of available
* tcm_vhost_tpg with an active struct tcm_vhost_nexus
{
struct tcm_vhost_tport *tv_tport;
struct tcm_vhost_tpg *tv_tpg;
+ struct tcm_vhost_tpg **vs_tpg;
+ struct vhost_virtqueue *vq;
+ int index, ret, i, len;
bool match = false;
- int index, ret;
mutex_lock(&vs->dev.mutex);
/* Verify that ring has been setup correctly. */
}
}
+ len = sizeof(vs_tpg[0]) * VHOST_SCSI_MAX_TARGET;
+ vs_tpg = kzalloc(len, GFP_KERNEL);
+ if (!vs_tpg) {
+ mutex_unlock(&vs->dev.mutex);
+ return -ENOMEM;
+ }
+ if (vs->vs_tpg)
+ memcpy(vs_tpg, vs->vs_tpg, len);
+
mutex_lock(&tcm_vhost_mutex);
list_for_each_entry(tv_tpg, &tcm_vhost_list, tv_tpg_list) {
mutex_lock(&tv_tpg->tv_tpg_mutex);
tv_tport = tv_tpg->tport;
if (!strcmp(tv_tport->tport_name, t->vhost_wwpn)) {
- if (vs->vs_tpg[tv_tpg->tport_tpgt]) {
+ if (vs->vs_tpg && vs->vs_tpg[tv_tpg->tport_tpgt]) {
mutex_unlock(&tv_tpg->tv_tpg_mutex);
mutex_unlock(&tcm_vhost_mutex);
mutex_unlock(&vs->dev.mutex);
+ kfree(vs_tpg);
return -EEXIST;
}
tv_tpg->tv_tpg_vhost_count++;
- vs->vs_tpg[tv_tpg->tport_tpgt] = tv_tpg;
+ vs_tpg[tv_tpg->tport_tpgt] = tv_tpg;
smp_mb__after_atomic_inc();
match = true;
}
if (match) {
memcpy(vs->vs_vhost_wwpn, t->vhost_wwpn,
sizeof(vs->vs_vhost_wwpn));
- vs->vs_endpoint = true;
+ for (i = 0; i < VHOST_SCSI_MAX_VQ; i++) {
+ vq = &vs->vqs[i];
+ /* Flushing the vhost_work acts as synchronize_rcu */
+ mutex_lock(&vq->mutex);
+ rcu_assign_pointer(vq->private_data, vs_tpg);
+ vhost_init_used(vq);
+ mutex_unlock(&vq->mutex);
+ }
ret = 0;
} else {
ret = -EEXIST;
}
+ /*
+ * Act as synchronize_rcu to make sure access to
+ * old vs->vs_tpg is finished.
+ */
+ vhost_scsi_flush(vs);
+ kfree(vs->vs_tpg);
+ vs->vs_tpg = vs_tpg;
+
mutex_unlock(&vs->dev.mutex);
return ret;
}
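The setup path above follows a copy, update, publish sequence: build the
new vs_tpg table, publish it to every virtqueue under vq->mutex with
rcu_assign_pointer(), then use vhost_scsi_flush() as the grace period
before freeing the old table. A minimal sketch of that pattern, assuming
the helper name publish_tpg_table() (illustrative, not from the patch):

	/*
	 * Sketch only: vhost_scsi_flush() stands in for synchronize_rcu(),
	 * since all readers run from vhost_worker context.
	 */
	static void publish_tpg_table(struct vhost_scsi *vs,
				      struct tcm_vhost_tpg **new_tpg)
	{
		int i;

		for (i = 0; i < VHOST_SCSI_MAX_VQ; i++) {
			struct vhost_virtqueue *vq = &vs->vqs[i];

			mutex_lock(&vq->mutex);
			rcu_assign_pointer(vq->private_data, new_tpg);
			mutex_unlock(&vq->mutex);
		}
		vhost_scsi_flush(vs);	/* wait out readers of the old table */
		kfree(vs->vs_tpg);	/* now safe: no reader can still see it */
		vs->vs_tpg = new_tpg;
	}

The reader side is the rcu_dereference_check(vq->private_data, 1) in
vhost_scsi_handle_vq() above.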
{
struct tcm_vhost_tport *tv_tport;
struct tcm_vhost_tpg *tv_tpg;
+ struct vhost_virtqueue *vq;
+ bool match = false;
int index, ret, i;
u8 target;
for (index = 0; index < vs->dev.nvqs; ++index) {
if (!vhost_vq_access_ok(&vs->vqs[index])) {
ret = -EFAULT;
- goto err;
+ goto err_dev;
}
}
+
+ if (!vs->vs_tpg) {
+ mutex_unlock(&vs->dev.mutex);
+ return 0;
+ }
+
for (i = 0; i < VHOST_SCSI_MAX_TARGET; i++) {
target = i;
-
tv_tpg = vs->vs_tpg[target];
if (!tv_tpg)
continue;
+ mutex_lock(&tv_tpg->tv_tpg_mutex);
tv_tport = tv_tpg->tport;
if (!tv_tport) {
ret = -ENODEV;
- goto err;
+ goto err_tpg;
}
if (strcmp(tv_tport->tport_name, t->vhost_wwpn)) {
tv_tport->tport_name, tv_tpg->tport_tpgt,
t->vhost_wwpn, t->vhost_tpgt);
ret = -EINVAL;
- goto err;
+ goto err_tpg;
}
tv_tpg->tv_tpg_vhost_count--;
vs->vs_tpg[target] = NULL;
- vs->vs_endpoint = false;
+ match = true;
+ mutex_unlock(&tv_tpg->tv_tpg_mutex);
}
+ if (match) {
+ for (i = 0; i < VHOST_SCSI_MAX_VQ; i++) {
+ vq = &vs->vqs[i];
+ /* Flushing the vhost_work acts as synchronize_rcu */
+ mutex_lock(&vq->mutex);
+ rcu_assign_pointer(vq->private_data, NULL);
+ mutex_unlock(&vq->mutex);
+ }
+ }
+ /*
+ * Act as synchronize_rcu to make sure access to
+ * old vs->vs_tpg is finished.
+ */
+ vhost_scsi_flush(vs);
+ kfree(vs->vs_tpg);
+ vs->vs_tpg = NULL;
mutex_unlock(&vs->dev.mutex);
+
return 0;
-err:
+err_tpg:
+ mutex_unlock(&tv_tpg->tv_tpg_mutex);
+err_dev:
mutex_unlock(&vs->dev.mutex);
return ret;
}
+static int vhost_scsi_set_features(struct vhost_scsi *vs, u64 features)
+{
+ if (features & ~VHOST_SCSI_FEATURES)
+ return -EOPNOTSUPP;
+
+ mutex_lock(&vs->dev.mutex);
+ if ((features & (1 << VHOST_F_LOG_ALL)) &&
+ !vhost_log_access_ok(&vs->dev)) {
+ mutex_unlock(&vs->dev.mutex);
+ return -EFAULT;
+ }
+ vs->dev.acked_features = features;
+ smp_wmb();
+ vhost_scsi_flush(vs);
+ mutex_unlock(&vs->dev.mutex);
+ return 0;
+}
+
static int vhost_scsi_open(struct inode *inode, struct file *f)
{
struct vhost_scsi *s;
return 0;
}
-static void vhost_scsi_flush_vq(struct vhost_scsi *vs, int index)
-{
- vhost_poll_flush(&vs->dev.vqs[index].poll);
-}
-
-static void vhost_scsi_flush(struct vhost_scsi *vs)
-{
- int i;
-
- for (i = 0; i < VHOST_SCSI_MAX_VQ; i++)
- vhost_scsi_flush_vq(vs, i);
-}
-
-static int vhost_scsi_set_features(struct vhost_scsi *vs, u64 features)
-{
- if (features & ~VHOST_FEATURES)
- return -EOPNOTSUPP;
-
- mutex_lock(&vs->dev.mutex);
- if ((features & (1 << VHOST_F_LOG_ALL)) &&
- !vhost_log_access_ok(&vs->dev)) {
- mutex_unlock(&vs->dev.mutex);
- return -EFAULT;
- }
- vs->dev.acked_features = features;
- smp_wmb();
- vhost_scsi_flush(vs);
- mutex_unlock(&vs->dev.mutex);
- return 0;
-}
-
static long vhost_scsi_ioctl(struct file *f, unsigned int ioctl,
unsigned long arg)
{
return -EFAULT;
return 0;
case VHOST_GET_FEATURES:
- features = VHOST_FEATURES;
+ features = VHOST_SCSI_FEATURES;
if (copy_to_user(featurep, &features, sizeof features))
return -EFAULT;
return 0;
#include <linux/slab.h>
#include <linux/clk.h>
#include <linux/fb.h>
+#include <linux/io.h>
#include <linux/platform_data/video-ep93xx.h>
{
struct fb_info *info = file_fb_info(file);
struct fb_ops *fb;
- unsigned long off;
+ unsigned long mmio_pgoff;
unsigned long start;
u32 len;
if (!info)
return -ENODEV;
- if (vma->vm_pgoff > (~0UL >> PAGE_SHIFT))
- return -EINVAL;
- off = vma->vm_pgoff << PAGE_SHIFT;
fb = info->fbops;
if (!fb)
return -ENODEV;
return res;
}
- /* frame buffer memory */
+ /*
+ * Ugh. This can be either the frame buffer mapping, or
+ * if pgoff points past it, the mmio mapping.
+ */
start = info->fix.smem_start;
- len = PAGE_ALIGN((start & ~PAGE_MASK) + info->fix.smem_len);
- if (off >= len) {
- /* memory mapped io */
- off -= len;
- if (info->var.accel_flags) {
- mutex_unlock(&info->mm_lock);
- return -EINVAL;
- }
+ len = info->fix.smem_len;
+ mmio_pgoff = PAGE_ALIGN((start & ~PAGE_MASK) + len) >> PAGE_SHIFT;
+ if (vma->vm_pgoff >= mmio_pgoff) {
+ vma->vm_pgoff -= mmio_pgoff;
start = info->fix.mmio_start;
- len = PAGE_ALIGN((start & ~PAGE_MASK) + info->fix.mmio_len);
+ len = info->fix.mmio_len;
}
mutex_unlock(&info->mm_lock);
- start &= PAGE_MASK;
- if ((vma->vm_end - vma->vm_start + off) > len)
- return -EINVAL;
- off += start;
- vma->vm_pgoff = off >> PAGE_SHIFT;
- /* VM_IO | VM_DONTEXPAND | VM_DONTDUMP are set by io_remap_pfn_range()*/
+
vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
- fb_pgprotect(file, vma, off);
- if (io_remap_pfn_range(vma, vma->vm_start, off >> PAGE_SHIFT,
- vma->vm_end - vma->vm_start, vma->vm_page_prot))
- return -EAGAIN;
- return 0;
+ fb_pgprotect(file, vma, start);
+
+ return vm_iomap_memory(vma, start, len);
}
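With the open-coded io_remap_pfn_range() gone, fb_mmap() only decides
which aperture (framebuffer or mmio) vm_pgoff selects; vm_iomap_memory()
itself checks that the vma, including its offset, fits inside the region
and performs the remap. A hedged sketch for a hypothetical
single-aperture driver (base and size invented for illustration):

	static int demo_mmap(struct file *file, struct vm_area_struct *vma)
	{
		phys_addr_t base = 0xfd000000;	/* hypothetical aperture base */
		unsigned long size = 8UL << 20;	/* hypothetical aperture size */

		vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
		/* Fails with -EINVAL if the vma does not fit in the region. */
		return vm_iomap_memory(vma, base, size);
	}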
static int
fbmode->vmode = 0;
if (vm->dmt_flags & VESA_DMT_HSYNC_HIGH)
fbmode->sync |= FB_SYNC_HOR_HIGH_ACT;
- if (vm->dmt_flags & VESA_DMT_HSYNC_HIGH)
+ if (vm->dmt_flags & VESA_DMT_VSYNC_HIGH)
fbmode->sync |= FB_SYNC_VERT_HIGH_ACT;
if (vm->data_flags & DISPLAY_FLAGS_INTERLACED)
fbmode->vmode |= FB_VMODE_INTERLACED;
kfree(path);
mutex_unlock(&disp_lock);
-
- dev_info(path->dev, "de-register %s\n", path->name);
}
EXPORT_SYMBOL_GPL(mmp_unregister_path);
unsigned dotclk_delay;
const struct mxsfb_devdata *devdata;
int mapped;
+ u32 sync;
};
#define mxsfb_is_v3(host) (host->devdata->ipversion == 3)
vdctrl0 |= VDCTRL0_HSYNC_ACT_HIGH;
if (fb_info->var.sync & FB_SYNC_VERT_HIGH_ACT)
vdctrl0 |= VDCTRL0_VSYNC_ACT_HIGH;
- if (fb_info->var.sync & FB_SYNC_DATA_ENABLE_HIGH_ACT)
+ if (host->sync & MXSFB_SYNC_DATA_ENABLE_HIGH_ACT)
vdctrl0 |= VDCTRL0_ENABLE_ACT_HIGH;
- if (fb_info->var.sync & FB_SYNC_DOTCLK_FAILING_ACT)
+ if (host->sync & MXSFB_SYNC_DOTCLK_FAILING_ACT)
vdctrl0 |= VDCTRL0_DOTCLK_ACT_FAILING;
writel(vdctrl0, host->base + LCDC_VDCTRL0);
INIT_LIST_HEAD(&fb_info->modelist);
+ host->sync = pdata->sync;
+
ret = mxsfb_init_fbinfo(host);
if (ret != 0)
goto error_init_fb;
#include <linux/omap-dma.h>
+#include <mach/hardware.h>
+
#include "omapfb.h"
#include "lcdc.h"
u32 power_on_resume:1;
};
+/* Used to pass the spi_device from the SPI to the DSS portion of the driver */
+static struct tpo_td043_device *g_tpo_td043;
+
static int tpo_td043_write(struct spi_device *spi, u8 addr, u8 data)
{
struct spi_message m;
static int tpo_td043_probe(struct omap_dss_device *dssdev)
{
- struct tpo_td043_device *tpo_td043 = dev_get_drvdata(&dssdev->dev);
+ struct tpo_td043_device *tpo_td043 = g_tpo_td043;
int nreset_gpio = dssdev->reset_gpio;
int ret = 0;
if (ret)
dev_warn(&dssdev->dev, "failed to create sysfs files\n");
+ dev_set_drvdata(&dssdev->dev, tpo_td043);
+
return 0;
fail_gpio_req:
return -ENODEV;
}
+ if (g_tpo_td043 != NULL)
+ return -EBUSY;
+
spi->bits_per_word = 16;
spi->mode = SPI_MODE_0;
tpo_td043->spi = spi;
tpo_td043->nreset_gpio = dssdev->reset_gpio;
dev_set_drvdata(&spi->dev, tpo_td043);
- dev_set_drvdata(&dssdev->dev, tpo_td043);
+ g_tpo_td043 = tpo_td043;
omap_dss_register_driver(&tpo_td043_driver);
omap_dss_unregister_driver(&tpo_td043_driver);
kfree(tpo_td043);
+ g_tpo_td043 = NULL;
return 0;
}
static const enum omap_dss_output_id omap4_dss_supported_outputs[] = {
/* OMAP_DSS_CHANNEL_LCD */
- OMAP_DSS_OUTPUT_DPI | OMAP_DSS_OUTPUT_DBI |
- OMAP_DSS_OUTPUT_DSI1,
+ OMAP_DSS_OUTPUT_DBI | OMAP_DSS_OUTPUT_DSI1,
/* OMAP_DSS_CHANNEL_DIGIT */
- OMAP_DSS_OUTPUT_VENC | OMAP_DSS_OUTPUT_HDMI |
- OMAP_DSS_OUTPUT_DPI,
+ OMAP_DSS_OUTPUT_VENC | OMAP_DSS_OUTPUT_HDMI,
/* OMAP_DSS_CHANNEL_LCD2 */
OMAP_DSS_OUTPUT_DPI | OMAP_DSS_OUTPUT_DBI |
tmp = ((mode->xres & 7) << 24) | ((display_h_total & 7) << 16)
| ((mode->hsync_len & 7) << 8) | (hsync_pos & 7);
lcdc_write_chan(ch, LDHAJR, tmp);
+ lcdc_write_chan_mirror(ch, LDHAJR, tmp);
}
static void sh_mobile_lcdc_overlay_setup(struct sh_mobile_lcdc_overlay *ovl)
err = -ENOMEM;
if (err) {
- platform_device_put(uvesafb_device);
+ if (uvesafb_device)
+ platform_device_put(uvesafb_device);
platform_driver_unregister(&uvesafb_driver);
cn_del_callback(&uvesafb_cn_id);
return err;
config AT91RM9200_WATCHDOG
tristate "AT91RM9200 watchdog"
- depends on ARCH_AT91
+ depends on ARCH_AT91RM9200
help
Watchdog timer embedded into AT91RM9200 chips. This will reboot your
system when the timeout is reached.
#include "sp5100_tco.h"
/* Module and version information */
-#define TCO_VERSION "0.03"
+#define TCO_VERSION "0.05"
#define TCO_MODULE_NAME "SP5100 TCO timer"
#define TCO_DRIVER_NAME TCO_MODULE_NAME ", v" TCO_VERSION
/* internal variables */
static u32 tcobase_phys;
-static u32 resbase_phys;
static u32 tco_wdt_fired;
static void __iomem *tcobase;
static unsigned int pm_iobase;
static unsigned long timer_alive;
static char tco_expect_close;
static struct pci_dev *sp5100_tco_pci;
-static struct resource wdt_res = {
- .name = "Watchdog Timer",
- .flags = IORESOURCE_MEM,
-};
/* the watchdog platform device */
static struct platform_device *sp5100_tco_platform_device;
MODULE_PARM_DESC(nowayout, "Watchdog cannot be stopped once started."
" (default=" __MODULE_STRING(WATCHDOG_NOWAYOUT) ")");
-static unsigned int force_addr;
-module_param(force_addr, uint, 0);
-MODULE_PARM_DESC(force_addr, "Force the use of specified MMIO address."
- " ONLY USE THIS PARAMETER IF YOU REALLY KNOW"
- " WHAT YOU ARE DOING (default=none)");
-
/*
* Some TCO specific functions
*/
}
}
-static void tco_timer_disable(void)
-{
- int val;
-
- if (sp5100_tco_pci->revision >= 0x40) {
- /* For SB800 or later */
- /* Enable watchdog decode bit and Disable watchdog timer */
- outb(SB800_PM_WATCHDOG_CONTROL, SB800_IO_PM_INDEX_REG);
- val = inb(SB800_IO_PM_DATA_REG);
- val |= SB800_PCI_WATCHDOG_DECODE_EN;
- val |= SB800_PM_WATCHDOG_DISABLE;
- outb(val, SB800_IO_PM_DATA_REG);
- } else {
- /* For SP5100 or SB7x0 */
- /* Enable watchdog decode bit */
- pci_read_config_dword(sp5100_tco_pci,
- SP5100_PCI_WATCHDOG_MISC_REG,
- &val);
-
- val |= SP5100_PCI_WATCHDOG_DECODE_EN;
-
- pci_write_config_dword(sp5100_tco_pci,
- SP5100_PCI_WATCHDOG_MISC_REG,
- val);
-
- /* Disable Watchdog timer */
- outb(SP5100_PM_WATCHDOG_CONTROL, SP5100_IO_PM_INDEX_REG);
- val = inb(SP5100_IO_PM_DATA_REG);
- val |= SP5100_PM_WATCHDOG_DISABLE;
- outb(val, SP5100_IO_PM_DATA_REG);
- }
-}
-
/*
* /dev/watchdog handling
*/
{
struct pci_dev *dev = NULL;
const char *dev_name = NULL;
- u32 val, tmp_val;
+ u32 val;
u32 index_reg, data_reg, base_addr;
/* Match the PCI device */
} else
pr_debug("SBResource_MMIO is disabled(0x%04x)\n", val);
- /*
- * Lastly re-programming the watchdog timer MMIO address,
- * This method is a last resort...
- *
- * Before re-programming, to ensure that the watchdog timer
- * is disabled, disable the watchdog timer.
- */
- tco_timer_disable();
-
- if (force_addr) {
- /*
- * Force the use of watchdog timer MMIO address, and aligned to
- * 8byte boundary.
- */
- force_addr &= ~0x7;
- val = force_addr;
-
- pr_info("Force the use of 0x%04x as MMIO address\n", val);
- } else {
- /*
- * Get empty slot into the resource tree for watchdog timer.
- */
- if (allocate_resource(&iomem_resource,
- &wdt_res,
- SP5100_WDT_MEM_MAP_SIZE,
- 0xf0000000,
- 0xfffffff8,
- 0x8,
- NULL,
- NULL)) {
- pr_err("MMIO allocation failed\n");
- goto unreg_region;
- }
-
- val = resbase_phys = wdt_res.start;
- pr_debug("Got 0x%04x from resource tree\n", val);
- }
-
- /* Restore to the low three bits */
- outb(base_addr+0, index_reg);
- tmp_val = val | (inb(data_reg) & 0x7);
-
- /* Re-programming the watchdog timer base address */
- outb(base_addr+0, index_reg);
- outb((tmp_val >> 0) & 0xff, data_reg);
- outb(base_addr+1, index_reg);
- outb((tmp_val >> 8) & 0xff, data_reg);
- outb(base_addr+2, index_reg);
- outb((tmp_val >> 16) & 0xff, data_reg);
- outb(base_addr+3, index_reg);
- outb((tmp_val >> 24) & 0xff, data_reg);
-
- if (!request_mem_region_exclusive(val, SP5100_WDT_MEM_MAP_SIZE,
- dev_name)) {
- pr_err("MMIO address 0x%04x already in use\n", val);
- goto unreg_resource;
- }
+ pr_notice("failed to find MMIO address, giving up.\n");
+ goto unreg_region;
setup_wdt:
tcobase_phys = val;
unreg_mem_region:
release_mem_region(tcobase_phys, SP5100_WDT_MEM_MAP_SIZE);
-unreg_resource:
- if (resbase_phys)
- release_resource(&wdt_res);
unreg_region:
release_region(pm_iobase, SP5100_PM_IOPORTS_SIZE);
exit:
static int sp5100_tco_init(struct platform_device *dev)
{
int ret;
- char addr_str[16];
/*
* Check whether or not the hardware watchdog is there. If found, then
clear_bit(0, &timer_alive);
/* Show module parameters */
- if (force_addr == tcobase_phys)
- /* The force_addr is vaild */
- sprintf(addr_str, "0x%04x", force_addr);
- else
- strcpy(addr_str, "none");
-
- pr_info("initialized (0x%p). heartbeat=%d sec (nowayout=%d, "
- "force_addr=%s)\n",
- tcobase, heartbeat, nowayout, addr_str);
+ pr_info("initialized (0x%p). heartbeat=%d sec (nowayout=%d)\n",
+ tcobase, heartbeat, nowayout);
return 0;
exit:
iounmap(tcobase);
release_mem_region(tcobase_phys, SP5100_WDT_MEM_MAP_SIZE);
- if (resbase_phys)
- release_resource(&wdt_res);
release_region(pm_iobase, SP5100_PM_IOPORTS_SIZE);
return ret;
}
misc_deregister(&sp5100_tco_miscdev);
iounmap(tcobase);
release_mem_region(tcobase_phys, SP5100_WDT_MEM_MAP_SIZE);
- if (resbase_phys)
- release_resource(&wdt_res);
release_region(pm_iobase, SP5100_PM_IOPORTS_SIZE);
}
#define SB800_PM_WATCHDOG_DISABLE (1 << 2)
#define SB800_PM_WATCHDOG_SECOND_RES (3 << 0)
#define SB800_ACPI_MMIO_DECODE_EN (1 << 0)
-#define SB800_ACPI_MMIO_SEL (1 << 2)
+#define SB800_ACPI_MMIO_SEL (1 << 1)
#define SB800_PM_WDT_MMIO_OFFSET 0xB00
config XEN_STUB
bool "Xen stub drivers"
- depends on XEN && X86_64
+ depends on XEN && X86_64 && BROKEN
default n
help
Allow kernel to install stub drivers, to reserve space for Xen drivers,
if (unlikely((cpu != cpu_from_evtchn(port))))
do_hypercall = 1;
- else
+ else {
+ /*
+ * Need to clear the mask before checking pending to
+ * avoid a race with an event becoming pending.
+ *
+ * EVTCHNOP_unmask will only trigger an upcall if the
+			 * mask bit was set, so if a hypercall is needed,
+			 * re-mask the event.
+ */
+ sync_clear_bit(port, BM(&s->evtchn_mask[0]));
evtchn_pending = sync_test_bit(port, BM(&s->evtchn_pending[0]));
- if (unlikely(evtchn_pending && xen_hvm_domain()))
- do_hypercall = 1;
+ if (unlikely(evtchn_pending && xen_hvm_domain())) {
+ sync_set_bit(port, BM(&s->evtchn_mask[0]));
+ do_hypercall = 1;
+ }
+ }
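The ordering is the whole point of this hunk: clear the mask bit first,
then sample the pending bit. Done the other way around, an event
arriving between the two operations would be left pending but masked,
with no upcall to deliver it. A compressed sketch of the sequence,
reusing the patch's bitmap helpers:

	/* Sketch: returns true if the port can be handled locally. */
	static bool demo_unmask(struct shared_info *s, int port)
	{
		sync_clear_bit(port, BM(&s->evtchn_mask[0]));	/* 1: unmask */
		if (sync_test_bit(port, BM(&s->evtchn_pending[0]))) {	/* 2: check */
			/*
			 * Re-mask: EVTCHNOP_unmask only raises an upcall
			 * when the mask bit is set, so restore it before
			 * taking the hypercall path.
			 */
			sync_set_bit(port, BM(&s->evtchn_mask[0]));
			return false;
		}
		return true;
	}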
/* Slow path (hypercall) if this is a non-local port or if this is
* an hvm domain and an event is pending (hvm domains don't have
} else {
struct vcpu_info *vcpu_info = __this_cpu_read(xen_vcpu);
- sync_clear_bit(port, BM(&s->evtchn_mask[0]));
-
/*
* The following is basically the equivalent of
* 'hw_resend_irq'. Just like a real IO-APIC we 'lose
{
int start_word_idx, start_bit_idx;
int word_idx, bit_idx;
- int i;
+ int i, irq;
int cpu = get_cpu();
struct shared_info *s = HYPERVISOR_shared_info;
struct vcpu_info *vcpu_info = __this_cpu_read(xen_vcpu);
do {
xen_ulong_t pending_words;
+ xen_ulong_t pending_bits;
+ struct irq_desc *desc;
vcpu_info->evtchn_upcall_pending = 0;
* selector flag. xchg_xen_ulong must contain an
* appropriate barrier.
*/
+ if ((irq = per_cpu(virq_to_irq, cpu)[VIRQ_TIMER]) != -1) {
+ int evtchn = evtchn_from_irq(irq);
+ word_idx = evtchn / BITS_PER_LONG;
+ pending_bits = evtchn % BITS_PER_LONG;
+ if (active_evtchns(cpu, s, word_idx) & (1ULL << pending_bits)) {
+ desc = irq_to_desc(irq);
+ if (desc)
+ generic_handle_irq_desc(irq, desc);
+ }
+ }
+
pending_words = xchg_xen_ulong(&vcpu_info->evtchn_pending_sel, 0);
start_word_idx = __this_cpu_read(current_word_idx);
word_idx = start_word_idx;
for (i = 0; pending_words != 0; i++) {
- xen_ulong_t pending_bits;
xen_ulong_t words;
words = MASK_LSBS(pending_words, word_idx);
do {
xen_ulong_t bits;
- int port, irq;
- struct irq_desc *desc;
+ int port;
bits = MASK_LSBS(pending_bits, bit_idx);
}
EXPORT_SYMBOL_GPL(xen_event_channel_op_compat);
-int HYPERVISOR_physdev_op_compat(int cmd, void *arg)
+int xen_physdev_op_compat(int cmd, void *arg)
{
struct physdev_op op;
int rc;
return rc;
}
+EXPORT_SYMBOL_GPL(xen_physdev_op_compat);
pr = per_cpu(processors, i);
perf = per_cpu_ptr(acpi_perf_data, i);
+ if (!pr)
+ continue;
+
pr->performance = perf;
rc = acpi_processor_get_performance_info(pr);
if (rc)
#include <xen/events.h>
#include <asm/xen/pci.h>
#include <asm/xen/hypervisor.h>
+#include <xen/interface/physdev.h>
#include "pciback.h"
#include "conf_space.h"
#include "conf_space_quirks.h"
static void pcistub_device_release(struct kref *kref)
{
struct pcistub_device *psdev;
+ struct pci_dev *dev;
struct xen_pcibk_dev_data *dev_data;
psdev = container_of(kref, struct pcistub_device, kref);
- dev_data = pci_get_drvdata(psdev->dev);
+ dev = psdev->dev;
+ dev_data = pci_get_drvdata(dev);
- dev_dbg(&psdev->dev->dev, "pcistub_device_release\n");
+ dev_dbg(&dev->dev, "pcistub_device_release\n");
- xen_unregister_device_domain_owner(psdev->dev);
+ xen_unregister_device_domain_owner(dev);
/* Call the reset function which does not take lock as this
* is called from "unbind" which takes a device_lock mutex.
*/
- __pci_reset_function_locked(psdev->dev);
- if (pci_load_and_free_saved_state(psdev->dev,
- &dev_data->pci_saved_state)) {
- dev_dbg(&psdev->dev->dev, "Could not reload PCI state\n");
- } else
- pci_restore_state(psdev->dev);
+ __pci_reset_function_locked(dev);
+ if (pci_load_and_free_saved_state(dev, &dev_data->pci_saved_state))
+ dev_dbg(&dev->dev, "Could not reload PCI state\n");
+ else
+ pci_restore_state(dev);
+
+ if (pci_find_capability(dev, PCI_CAP_ID_MSIX)) {
+ struct physdev_pci_device ppdev = {
+ .seg = pci_domain_nr(dev->bus),
+ .bus = dev->bus->number,
+ .devfn = dev->devfn
+ };
+ int err = HYPERVISOR_physdev_op(PHYSDEVOP_release_msix,
+ &ppdev);
+
+ if (err)
+ dev_warn(&dev->dev, "MSI-X release failed (%d)\n",
+ err);
+ }
/* Disable the device */
- xen_pcibk_reset_device(psdev->dev);
+ xen_pcibk_reset_device(dev);
kfree(dev_data);
- pci_set_drvdata(psdev->dev, NULL);
+ pci_set_drvdata(dev, NULL);
/* Clean-up the device */
- xen_pcibk_config_free_dyn_fields(psdev->dev);
- xen_pcibk_config_free_dev(psdev->dev);
+ xen_pcibk_config_free_dyn_fields(dev);
+ xen_pcibk_config_free_dev(dev);
- psdev->dev->dev_flags &= ~PCI_DEV_FLAGS_ASSIGNED;
- pci_dev_put(psdev->dev);
+ dev->dev_flags &= ~PCI_DEV_FLAGS_ASSIGNED;
+ pci_dev_put(dev);
kfree(psdev);
}
if (err)
goto config_release;
+ if (pci_find_capability(dev, PCI_CAP_ID_MSIX)) {
+ struct physdev_pci_device ppdev = {
+ .seg = pci_domain_nr(dev->bus),
+ .bus = dev->bus->number,
+ .devfn = dev->devfn
+ };
+
+ err = HYPERVISOR_physdev_op(PHYSDEVOP_prepare_msix, &ppdev);
+ if (err)
+ dev_err(&dev->dev, "MSI-X preparation failed (%d)\n",
+ err);
+ }
+
/* We need the device active to save the state. */
dev_dbg(&dev->dev, "save state of device\n");
pci_save_state(dev);
goto whole;
if (!(vma->vm_flags & VM_SHARED) && FILTER(HUGETLB_PRIVATE))
goto whole;
+ return 0;
}
/* Do not dump I/O mapped devices or special mappings */
else if (!test_bit(BIO_UPTODATE, &bio->bi_flags))
error = -EIO;
- trace_block_bio_complete(bio, error);
-
if (bio->bi_end_io)
bio->bi_end_io(bio, error);
}
ihold(bdev->bd_inode);
return bdev;
}
+EXPORT_SYMBOL(bdgrab);
long nr_blockdev_pages(void)
{
if (tree_mod_dont_log(fs_info, NULL))
return 0;
+ __tree_mod_log_free_eb(fs_info, old_root);
+
ret = tree_mod_alloc(fs_info, flags, &tm);
if (ret < 0)
goto out;
static noinline void
tree_mod_log_eb_copy(struct btrfs_fs_info *fs_info, struct extent_buffer *dst,
struct extent_buffer *src, unsigned long dst_offset,
- unsigned long src_offset, int nr_items)
+ unsigned long src_offset, int nr_items, int log_removal)
{
int ret;
int i;
}
for (i = 0; i < nr_items; i++) {
- ret = tree_mod_log_insert_key_locked(fs_info, src,
- i + src_offset,
- MOD_LOG_KEY_REMOVE);
- BUG_ON(ret < 0);
+ if (log_removal) {
+ ret = tree_mod_log_insert_key_locked(fs_info, src,
+ i + src_offset,
+ MOD_LOG_KEY_REMOVE);
+ BUG_ON(ret < 0);
+ }
ret = tree_mod_log_insert_key_locked(fs_info, dst,
i + dst_offset,
MOD_LOG_KEY_ADD);
ret = btrfs_dec_ref(trans, root, buf, 1, 1);
BUG_ON(ret); /* -ENOMEM */
}
- tree_mod_log_free_eb(root->fs_info, buf);
clean_tree_block(trans, root, buf);
*last_ref = 1;
}
btrfs_set_node_ptr_generation(parent, parent_slot,
trans->transid);
btrfs_mark_buffer_dirty(parent);
+ tree_mod_log_free_eb(root->fs_info, buf);
btrfs_free_tree_block(trans, root, buf, parent_start,
last_ref);
}
goto enospc;
}
- tree_mod_log_free_eb(root->fs_info, root->node);
tree_mod_log_set_root_pointer(root, child);
rcu_assign_pointer(root->node, child);
push_items = min(src_nritems - 8, push_items);
tree_mod_log_eb_copy(root->fs_info, dst, src, dst_nritems, 0,
- push_items);
+ push_items, 1);
copy_extent_buffer(dst, src,
btrfs_node_key_ptr_offset(dst_nritems),
btrfs_node_key_ptr_offset(0),
sizeof(struct btrfs_key_ptr));
tree_mod_log_eb_copy(root->fs_info, dst, src, 0,
- src_nritems - push_items, push_items);
+ src_nritems - push_items, push_items, 1);
copy_extent_buffer(dst, src,
btrfs_node_key_ptr_offset(0),
btrfs_node_key_ptr_offset(src_nritems - push_items),
int mid;
int ret;
u32 c_nritems;
+ int tree_mod_log_removal = 1;
c = path->nodes[level];
WARN_ON(btrfs_header_generation(c) != trans->transid);
if (c == root->node) {
/* trying to split the root, lets make a new one */
ret = insert_new_root(trans, root, path, level + 1);
+ /*
+	 * Removal of root nodes has already been logged by
+	 * tree_mod_log_set_root_pointer() due to locking.
+ */
+ tree_mod_log_removal = 0;
if (ret)
return ret;
} else {
(unsigned long)btrfs_header_chunk_tree_uuid(split),
BTRFS_UUID_SIZE);
- tree_mod_log_eb_copy(root->fs_info, split, c, 0, mid, c_nritems - mid);
+ tree_mod_log_eb_copy(root->fs_info, split, c, 0, mid, c_nritems - mid,
+ tree_mod_log_removal);
copy_extent_buffer(split, c,
btrfs_node_key_ptr_offset(0),
btrfs_node_key_ptr_offset(mid),
0, objectid, NULL, 0, 0, 0);
if (IS_ERR(leaf)) {
ret = PTR_ERR(leaf);
+ leaf = NULL;
goto fail;
}
btrfs_tree_unlock(leaf);
+ return root;
+
fail:
- if (ret)
- return ERR_PTR(ret);
+ if (leaf) {
+ btrfs_tree_unlock(leaf);
+ free_extent_buffer(leaf);
+ }
+ kfree(root);
- return root;
+ return ERR_PTR(ret);
}
static struct btrfs_root *alloc_log_tree(struct btrfs_trans_handle *trans,
if (btrfs_root_refs(&root->root_item) == 0)
synchronize_srcu(&fs_info->subvol_srcu);
- if (fs_info->fs_state & BTRFS_SUPER_FLAG_ERROR) {
+ if (test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state)) {
btrfs_free_log(NULL, root);
btrfs_free_log_root_tree(NULL, fs_info);
}
cache->bytes_super += stripe_len;
ret = add_excluded_extent(root, cache->key.objectid,
stripe_len);
- BUG_ON(ret); /* -ENOMEM */
+ if (ret)
+ return ret;
}
for (i = 0; i < BTRFS_SUPER_MIRROR_MAX; i++) {
ret = btrfs_rmap_block(&root->fs_info->mapping_tree,
cache->key.objectid, bytenr,
0, &logical, &nr, &stripe_len);
- BUG_ON(ret); /* -ENOMEM */
+ if (ret)
+ return ret;
while (nr--) {
cache->bytes_super += stripe_len;
ret = add_excluded_extent(root, logical[nr],
stripe_len);
- BUG_ON(ret); /* -ENOMEM */
+ if (ret) {
+ kfree(logical);
+ return ret;
+ }
}
kfree(logical);
spin_lock(&sinfo->lock);
spin_lock(&block_rsv->lock);
- block_rsv->size = num_bytes;
+ block_rsv->size = min_t(u64, num_bytes, 512 * 1024 * 1024);
num_bytes = sinfo->bytes_used + sinfo->bytes_pinned +
sinfo->bytes_reserved + sinfo->bytes_readonly +
	 * If the inode's csum_bytes is the same as the original
	 * csum_bytes, then we know we haven't raced with any free()ers,
	 * so we can just reduce our inode's csum bytes and carry on.
- * Otherwise we have to do the normal free thing to account for
- * the case that the free side didn't free up its reserve
- * because of this outstanding reservation.
*/
- if (BTRFS_I(inode)->csum_bytes == csum_bytes)
+ if (BTRFS_I(inode)->csum_bytes == csum_bytes) {
calc_csum_metadata_size(inode, num_bytes, 0);
- else
- to_free = calc_csum_metadata_size(inode, num_bytes, 0);
+ } else {
+ u64 orig_csum_bytes = BTRFS_I(inode)->csum_bytes;
+ u64 bytes;
+
+ /*
+ * This is tricky, but first we need to figure out how much we
+	 * freed from any free()ers that occurred during this
+ * reservation, so we reset ->csum_bytes to the csum_bytes
+ * before we dropped our lock, and then call the free for the
+ * number of bytes that were freed while we were trying our
+ * reservation.
+ */
+ bytes = csum_bytes - BTRFS_I(inode)->csum_bytes;
+ BTRFS_I(inode)->csum_bytes = csum_bytes;
+ to_free = calc_csum_metadata_size(inode, bytes, 0);
+
+ /*
+ * Now we need to see how much we would have freed had we not
+ * been making this reservation and our ->csum_bytes were not
+ * artificially inflated.
+ */
+ BTRFS_I(inode)->csum_bytes = csum_bytes - num_bytes;
+ bytes = csum_bytes - orig_csum_bytes;
+ bytes = calc_csum_metadata_size(inode, bytes, 0);
+
+ /*
+ * Now reset ->csum_bytes to what it should be. If bytes is
+	 * more than to_free, then we would have freed more space had we
+	 * not had an artificially high ->csum_bytes, so we need to free
+	 * the remainder. If bytes is the same or less, then we don't
+	 * need to do anything: the other free()ers did the correct
+	 * thing.
+ */
+ BTRFS_I(inode)->csum_bytes = orig_csum_bytes - num_bytes;
+ if (bytes > to_free)
+ to_free = bytes - to_free;
+ else
+ to_free = 0;
+ }
spin_unlock(&BTRFS_I(inode)->lock);
if (dropped)
to_free += btrfs_calc_trans_metadata_size(root, dropped);
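Symbolically, let f(delta) be calc_csum_metadata_size() evaluated
against the current ->csum_bytes baseline, and let
freed = csum_bytes - orig_csum_bytes be what the racing free()ers
dropped while our reservation was pending. The code above computes:

	to_free = f(freed)	with baseline csum_bytes             (inflated)
	bytes   = f(freed)	with baseline csum_bytes - num_bytes (real)

If bytes exceeds to_free, the difference is space the free()ers
under-released because our pending reservation had inflated the
baseline, so it is released here; otherwise nothing more is owed.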
* info has super bytes accounted for, otherwise we'll think
* we have more space than we actually do.
*/
- exclude_super_stripes(root, cache);
+ ret = exclude_super_stripes(root, cache);
+ if (ret) {
+ /*
+ * We may have excluded something, so call this just in
+ * case.
+ */
+ free_excluded_extents(root, cache);
+ kfree(cache->free_space_ctl);
+ kfree(cache);
+ goto error;
+ }
/*
* check for two cases, either we are full, and therefore
cache->last_byte_to_unpin = (u64)-1;
cache->cached = BTRFS_CACHE_FINISHED;
- exclude_super_stripes(root, cache);
+ ret = exclude_super_stripes(root, cache);
+ if (ret) {
+ /*
+ * We may have excluded something, so call this just in
+ * case.
+ */
+ free_excluded_extents(root, cache);
+ kfree(cache->free_space_ctl);
+ kfree(cache);
+ return ret;
+ }
add_new_free_space(cache, root->fs_info, chunk_offset,
chunk_offset + size);
GFP_NOFS);
}
+int extent_range_clear_dirty_for_io(struct inode *inode, u64 start, u64 end)
+{
+ unsigned long index = start >> PAGE_CACHE_SHIFT;
+ unsigned long end_index = end >> PAGE_CACHE_SHIFT;
+ struct page *page;
+
+ while (index <= end_index) {
+ page = find_get_page(inode->i_mapping, index);
+ BUG_ON(!page); /* Pages should be in the extent_io_tree */
+ clear_page_dirty_for_io(page);
+ page_cache_release(page);
+ index++;
+ }
+ return 0;
+}
+
+int extent_range_redirty_for_io(struct inode *inode, u64 start, u64 end)
+{
+ unsigned long index = start >> PAGE_CACHE_SHIFT;
+ unsigned long end_index = end >> PAGE_CACHE_SHIFT;
+ struct page *page;
+
+ while (index <= end_index) {
+ page = find_get_page(inode->i_mapping, index);
+ BUG_ON(!page); /* Pages should be in the extent_io_tree */
+ account_page_redirty(page);
+ __set_page_dirty_nobuffers(page);
+ page_cache_release(page);
+ index++;
+ }
+ return 0;
+}
+
/*
* helper function to set both pages and extents in the tree writeback
*/
unsigned long *map_len);
int extent_range_uptodate(struct extent_io_tree *tree,
u64 start, u64 end);
+int extent_range_clear_dirty_for_io(struct inode *inode, u64 start, u64 end);
+int extent_range_redirty_for_io(struct inode *inode, u64 start, u64 end);
int extent_clear_unlock_delalloc(struct inode *inode,
struct extent_io_tree *tree,
u64 start, u64 end, struct page *locked_page,
csums_in_item = btrfs_item_size_nr(leaf, path->slots[0]);
csums_in_item /= csum_size;
- if (csum_offset >= csums_in_item) {
+ if (csum_offset == csums_in_item) {
ret = -EFBIG;
goto fail;
+ } else if (csum_offset > csums_in_item) {
+ goto fail;
}
}
item = btrfs_item_ptr(leaf, path->slots[0], struct btrfs_csum_item);
return -ENOMEM;
sector_sum = sums->sums;
- trans->adding_csums = 1;
again:
next_offset = (u64)-1;
found_next = 0;
goto again;
}
out:
- trans->adding_csums = 0;
btrfs_free_path(path);
return ret;
{
struct inode *inode = file_inode(file);
struct extent_state *cached_state = NULL;
+ struct btrfs_root *root = BTRFS_I(inode)->root;
u64 cur_offset;
u64 last_byte;
u64 alloc_start;
ret = btrfs_check_data_free_space(inode, alloc_end - alloc_start);
if (ret)
return ret;
+ if (root->fs_info->quota_enabled) {
+ ret = btrfs_qgroup_reserve(root, alloc_end - alloc_start);
+ if (ret)
+ goto out_reserve_fail;
+ }
/*
* wait for ordered IO before we have any locks. We'll loop again
&cached_state, GFP_NOFS);
out:
mutex_unlock(&inode->i_mutex);
+ if (root->fs_info->quota_enabled)
+ btrfs_qgroup_free(root, alloc_end - alloc_start);
+out_reserve_fail:
/* Let go of our reservation. */
btrfs_free_reserved_data_space(inode, alloc_end - alloc_start);
return ret;
int i;
int will_compress;
int compress_type = root->fs_info->compress_type;
+ int redirty = 0;
/* if this is a small write inside eof, kick off a defrag */
if ((end - start + 1) < 16 * 1024 &&
if (BTRFS_I(inode)->force_compress)
compress_type = BTRFS_I(inode)->force_compress;
+ /*
+	 * We need to call clear_page_dirty_for_io() on each
+ * page in the range. Otherwise applications with the file
+ * mmap'd can wander in and change the page contents while
+ * we are compressing them.
+ *
+ * If the compression fails for any reason, we set the pages
+ * dirty again later on.
+ */
+ extent_range_clear_dirty_for_io(inode, start, end);
+ redirty = 1;
ret = btrfs_compress_pages(compress_type,
inode->i_mapping, start,
total_compressed, pages,
__set_page_dirty_nobuffers(locked_page);
/* unlocked later on in the async handlers */
}
+ if (redirty)
+ extent_range_redirty_for_io(inode, start, end);
add_async_extent(async_cow, start, end - start + 1,
0, NULL, 0, BTRFS_COMPRESS_NONE);
*num_added += 1;
struct btrfs_ordered_sum *sum;
list_for_each_entry(sum, list, list) {
+ trans->adding_csums = 1;
btrfs_csum_file_blocks(trans,
BTRFS_I(inode)->root->fs_info->csum_root, sum);
+ trans->adding_csums = 0;
}
return 0;
}
* 1 for the dir item
* 1 for the dir index
* 1 for the inode ref
- * 1 for the inode ref in the tree log
- * 2 for the dir entries in the log
* 1 for the inode
*/
- trans = btrfs_start_transaction(root, 8);
+ trans = btrfs_start_transaction(root, 5);
if (!IS_ERR(trans) || PTR_ERR(trans) != -ENOSPC)
return trans;
* inodes. So 5 * 2 is 10, plus 1 for the new link, so 11 total items
* should cover the worst case number of items we'll modify.
*/
- trans = btrfs_start_transaction(root, 20);
+ trans = btrfs_start_transaction(root, 11);
if (IS_ERR(trans)) {
ret = PTR_ERR(trans);
goto out_notrans;
INIT_LIST_HEAD(&splice);
INIT_LIST_HEAD(&works);
+ mutex_lock(&root->fs_info->ordered_operations_mutex);
spin_lock(&root->fs_info->ordered_extent_lock);
list_splice_init(&root->fs_info->ordered_extents, &splice);
while (!list_empty(&splice)) {
cond_resched();
}
+ mutex_unlock(&root->fs_info->ordered_operations_mutex);
}
/*
ret = btrfs_find_all_roots(trans, fs_info, node->bytenr,
sgn > 0 ? node->seq - 1 : node->seq, &roots);
if (ret < 0)
- goto out;
+ return ret;
spin_lock(&fs_info->qgroup_lock);
quota_root = fs_info->quota_root;
ret = 0;
unlock:
spin_unlock(&fs_info->qgroup_lock);
-out:
ulist_free(roots);
ulist_free(tmp);
eb = path->nodes[0];
ei = btrfs_item_ptr(eb, path->slots[0], struct btrfs_extent_item);
item_size = btrfs_item_size_nr(eb, path->slots[0]);
- btrfs_release_path(path);
if (flags & BTRFS_EXTENT_FLAG_TREE_BLOCK) {
do {
ret < 0 ? -1 : ref_level,
ret < 0 ? -1 : ref_root);
} while (ret != 1);
+ btrfs_release_path(path);
} else {
+ btrfs_release_path(path);
swarn.path = path;
swarn.dev = dev;
iterate_extent_inodes(fs_info, found_key.objectid,
found_key.type != key.type) {
key.offset += right_len;
break;
- } else {
- if (found_key.offset != key.offset + right_len) {
- /* Should really not happen */
- ret = -EIO;
- goto out;
- }
+ }
+ if (found_key.offset != key.offset + right_len) {
+ ret = 0;
+ goto out;
}
key = found_key;
}
unsigned long src_ptr;
unsigned long dst_ptr;
int overwrite_root = 0;
+ bool inode_item = key->type == BTRFS_INODE_ITEM_KEY;
if (root->root_key.objectid != BTRFS_TREE_LOG_OBJECTID)
overwrite_root = 1;
/* look for the key in the destination tree */
ret = btrfs_search_slot(NULL, root, key, path, 0, 0);
+ if (ret < 0)
+ return ret;
+
if (ret == 0) {
char *src_copy;
char *dst_copy;
return 0;
}
+ /*
+	 * We need to load the old nbytes into the inode so that, when we
+	 * replay the extents we've logged, we get the right nbytes.
+ */
+ if (inode_item) {
+ struct btrfs_inode_item *item;
+ u64 nbytes;
+
+ item = btrfs_item_ptr(path->nodes[0], path->slots[0],
+ struct btrfs_inode_item);
+ nbytes = btrfs_inode_nbytes(path->nodes[0], item);
+ item = btrfs_item_ptr(eb, slot,
+ struct btrfs_inode_item);
+ btrfs_set_inode_nbytes(eb, item, nbytes);
+ }
+ } else if (inode_item) {
+ struct btrfs_inode_item *item;
+
+ /*
+ * New inode, set nbytes to 0 so that the nbytes comes out
+ * properly when we replay the extents.
+ */
+ item = btrfs_item_ptr(eb, slot, struct btrfs_inode_item);
+ btrfs_set_inode_nbytes(eb, item, 0);
}
insert:
btrfs_release_path(path);
int found_type;
u64 extent_end;
u64 start = key->offset;
- u64 saved_nbytes;
+ u64 nbytes = 0;
struct btrfs_file_extent_item *item;
struct inode *inode = NULL;
unsigned long size;
found_type = btrfs_file_extent_type(eb, item);
if (found_type == BTRFS_FILE_EXTENT_REG ||
- found_type == BTRFS_FILE_EXTENT_PREALLOC)
- extent_end = start + btrfs_file_extent_num_bytes(eb, item);
- else if (found_type == BTRFS_FILE_EXTENT_INLINE) {
+ found_type == BTRFS_FILE_EXTENT_PREALLOC) {
+ nbytes = btrfs_file_extent_num_bytes(eb, item);
+ extent_end = start + nbytes;
+
+ /*
+	 * We don't add to the inode's nbytes if the extent is prealloc or a
+ * hole.
+ */
+ if (btrfs_file_extent_disk_bytenr(eb, item) == 0)
+ nbytes = 0;
+ } else if (found_type == BTRFS_FILE_EXTENT_INLINE) {
size = btrfs_file_extent_inline_len(eb, item);
+ nbytes = btrfs_file_extent_ram_bytes(eb, item);
extent_end = ALIGN(start + size, root->sectorsize);
} else {
ret = 0;
}
btrfs_release_path(path);
- saved_nbytes = inode_get_bytes(inode);
/* drop any overlapping extents */
ret = btrfs_drop_extents(trans, root, inode, start, extent_end, 1);
BUG_ON(ret);
BUG_ON(ret);
}
- inode_set_bytes(inode, saved_nbytes);
+ inode_add_bytes(inode, nbytes);
ret = btrfs_update_inode(trans, root, inode);
out:
if (inode)
em = lookup_extent_mapping(em_tree, chunk_start, 1);
read_unlock(&em_tree->lock);
- BUG_ON(!em || em->start != chunk_start);
+ if (!em) {
+ printk(KERN_ERR "btrfs: couldn't find em for chunk %Lu\n",
+ chunk_start);
+ return -EIO;
+ }
+
+ if (em->start != chunk_start) {
+ printk(KERN_ERR "btrfs: bad chunk start, em=%Lu, wanted=%Lu\n",
+ em->start, chunk_start);
+ free_extent_map(em);
+ return -EIO;
+ }
map = (struct map_lookup *)em->bdev;
length = em->len;
}
break;
case Opt_blank_pass:
- vol->password = NULL;
- break;
- case Opt_pass:
/* passwords have to be handled differently
	 * to allow the character used for the delimiter
* to be passed within them
*/
+ /*
+ * Check if this is a case where the password
+ * starts with a delimiter
+ */
+ tmp_end = strchr(data, '=');
+ tmp_end++;
+ if (!(tmp_end < end && tmp_end[1] == delim)) {
+ /* No it is not. Set the password to NULL */
+ vol->password = NULL;
+ break;
+ }
+		/* Yes it is. Drop down to Opt_pass below. */
+ case Opt_pass:
/* Obtain the value string */
value = strchr(data, '=');
value++;
bool slash = false;
int error = 0;
- br_read_lock(&vfsmount_lock);
while (dentry != root->dentry || vfsmnt != root->mnt) {
struct dentry * parent;
if (!error && !slash)
error = prepend(buffer, buflen, "/", 1);
-out:
- br_read_unlock(&vfsmount_lock);
return error;
global_root:
error = prepend(buffer, buflen, "/", 1);
if (!error)
error = is_mounted(vfsmnt) ? 1 : 2;
- goto out;
+ return error;
}
/**
int error;
prepend(&res, &buflen, "\0", 1);
+ br_read_lock(&vfsmount_lock);
write_seqlock(&rename_lock);
error = prepend_path(path, root, &res, &buflen);
write_sequnlock(&rename_lock);
+ br_read_unlock(&vfsmount_lock);
if (error < 0)
return ERR_PTR(error);
int error;
prepend(&res, &buflen, "\0", 1);
+ br_read_lock(&vfsmount_lock);
write_seqlock(&rename_lock);
error = prepend_path(path, &root, &res, &buflen);
write_sequnlock(&rename_lock);
+ br_read_unlock(&vfsmount_lock);
if (error > 1)
error = -EINVAL;
return path->dentry->d_op->d_dname(path->dentry, buf, buflen);
get_fs_root(current->fs, &root);
+ br_read_lock(&vfsmount_lock);
write_seqlock(&rename_lock);
error = path_with_deleted(path, &root, &res, &buflen);
+ write_sequnlock(&rename_lock);
+ br_read_unlock(&vfsmount_lock);
if (error < 0)
res = ERR_PTR(error);
- write_sequnlock(&rename_lock);
path_put(&root);
return res;
}
get_fs_root_and_pwd(current->fs, &root, &pwd);
error = -ENOENT;
+ br_read_lock(&vfsmount_lock);
write_seqlock(&rename_lock);
if (!d_unlinked(pwd.dentry)) {
unsigned long len;
prepend(&cwd, &buflen, "\0", 1);
error = prepend_path(&pwd, &root, &cwd, &buflen);
write_sequnlock(&rename_lock);
+ br_read_unlock(&vfsmount_lock);
if (error < 0)
goto out;
}
} else {
write_sequnlock(&rename_lock);
+ br_read_unlock(&vfsmount_lock);
}
out:
int rc;
mutex_lock(&ecryptfs_daemon_hash_mux);
- rc = try_module_get(THIS_MODULE);
- if (rc == 0) {
- rc = -EIO;
- printk(KERN_ERR "%s: Error attempting to increment module use "
- "count; rc = [%d]\n", __func__, rc);
- goto out_unlock_daemon_list;
- }
rc = ecryptfs_find_daemon_by_euid(&daemon);
if (!rc) {
rc = -EINVAL;
if (rc) {
printk(KERN_ERR "%s: Error attempting to spawn daemon; "
"rc = [%d]\n", __func__, rc);
- goto out_module_put_unlock_daemon_list;
+ goto out_unlock_daemon_list;
}
mutex_lock(&daemon->mux);
if (daemon->flags & ECRYPTFS_DAEMON_MISCDEV_OPEN) {
atomic_inc(&ecryptfs_num_miscdev_opens);
out_unlock_daemon:
mutex_unlock(&daemon->mux);
-out_module_put_unlock_daemon_list:
- if (rc)
- module_put(THIS_MODULE);
out_unlock_daemon_list:
mutex_unlock(&ecryptfs_daemon_hash_mux);
return rc;
"bug.\n", __func__, rc);
BUG();
}
- module_put(THIS_MODULE);
return rc;
}
static const struct file_operations ecryptfs_miscdev_fops = {
+ .owner = THIS_MODULE,
.open = ecryptfs_miscdev_open,
.poll = ecryptfs_miscdev_poll,
.read = ecryptfs_miscdev_read,
* when the old and new regions overlap clear from new_end.
*/
free_pgd_range(&tlb, new_end, old_end, new_end,
- vma->vm_next ? vma->vm_next->vm_start : 0);
+ vma->vm_next ? vma->vm_next->vm_start : USER_PGTABLES_CEILING);
} else {
/*
* otherwise, clean from old_start; this is done to not touch
* for the others its just a little faster.
*/
free_pgd_range(&tlb, old_start, old_end, new_end,
- vma->vm_next ? vma->vm_next->vm_start : 0);
+ vma->vm_next ? vma->vm_next->vm_start : USER_PGTABLES_CEILING);
}
tlb_finish_mmu(&tlb, new_end, old_end);
if (split_flag & EXT4_EXT_DATA_VALID1) {
err = ext4_ext_zeroout(inode, ex2);
zero_ex.ee_block = ex2->ee_block;
- zero_ex.ee_len = ext4_ext_get_actual_len(ex2);
+ zero_ex.ee_len = cpu_to_le16(
+ ext4_ext_get_actual_len(ex2));
ext4_ext_store_pblock(&zero_ex,
ext4_ext_pblock(ex2));
} else {
err = ext4_ext_zeroout(inode, ex);
zero_ex.ee_block = ex->ee_block;
- zero_ex.ee_len = ext4_ext_get_actual_len(ex);
+ zero_ex.ee_len = cpu_to_le16(
+ ext4_ext_get_actual_len(ex));
ext4_ext_store_pblock(&zero_ex,
ext4_ext_pblock(ex));
}
} else {
err = ext4_ext_zeroout(inode, &orig_ex);
zero_ex.ee_block = orig_ex.ee_block;
- zero_ex.ee_len = ext4_ext_get_actual_len(&orig_ex);
+ zero_ex.ee_len = cpu_to_le16(
+ ext4_ext_get_actual_len(&orig_ex));
ext4_ext_store_pblock(&zero_ex,
ext4_ext_pblock(&orig_ex));
}
if (err)
goto out;
zero_ex.ee_block = ex->ee_block;
- zero_ex.ee_len = ext4_ext_get_actual_len(ex);
+ zero_ex.ee_len = cpu_to_le16(ext4_ext_get_actual_len(ex));
ext4_ext_store_pblock(&zero_ex, ext4_ext_pblock(ex));
err = ext4_ext_get_access(handle, inode, path + depth);
blk = *i_data;
if (level > 0) {
ext4_lblk_t first2;
- bh = sb_bread(inode->i_sb, blk);
+ bh = sb_bread(inode->i_sb, le32_to_cpu(blk));
if (!bh) {
- EXT4_ERROR_INODE_BLOCK(inode, blk,
+ EXT4_ERROR_INODE_BLOCK(inode, le32_to_cpu(blk),
"Read failure");
return -EIO;
}
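Both hunks above apply the same on-disk endianness rule: ext4 metadata
fields are little-endian on disk, so every store must go through
cpu_to_le16()/cpu_to_le32() and every load through le*_to_cpu(), which
also lets sparse catch violations via the __le16/__le32 annotations. A
minimal sketch with a hypothetical on-disk struct:

	/* Hypothetical record; the __le types let sparse flag raw stores. */
	struct demo_extent {
		__le32 ee_block;	/* first logical block, LE on disk */
		__le16 ee_len;		/* number of blocks, LE on disk */
	};

	static void demo_set_len(struct demo_extent *ex, u16 len)
	{
		ex->ee_len = cpu_to_le16(len);	/* correct */
		/* ex->ee_len = len; is the kind of bug fixed above */
	}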
cmd = F_SETLK;
fl->fl_type = F_UNLCK;
}
- if (unlikely(test_bit(SDF_SHUTDOWN, &sdp->sd_flags)))
+ if (unlikely(test_bit(SDF_SHUTDOWN, &sdp->sd_flags))) {
+ if (fl->fl_type == F_UNLCK)
+ posix_lock_file_wait(file, fl);
return -EIO;
+ }
if (IS_GETLK(cmd))
return dlm_posix_get(ls->ls_dlm, ip->i_no_addr, file, fl);
else if (fl->fl_type == F_UNLCK)
struct dlm_lksb ls_control_lksb; /* control_lock */
char ls_control_lvb[GDLM_LVB_SIZE]; /* control_lock lvb */
struct completion ls_sync_wait; /* {control,mounted}_{lock,unlock} */
+ char *ls_lvb_bits;
spinlock_t ls_recover_spin; /* protects following fields */
unsigned long ls_recover_flags; /* DFL_ */
static int all_jid_bits_clear(char *lvb)
{
- int i;
- for (i = JID_BITMAP_OFFSET; i < GDLM_LVB_SIZE; i++) {
- if (lvb[i])
- return 0;
- }
- return 1;
+ return !memchr_inv(lvb + JID_BITMAP_OFFSET, 0,
+ GDLM_LVB_SIZE - JID_BITMAP_OFFSET);
}
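memchr_inv() returns a pointer to the first byte in the range that
differs from the given pattern, or NULL when every byte matches, so
"all journal-id bits clear" reduces to "no byte differs from zero". The
rewritten helper is equivalent to:

	/* Sketch: true iff buf[0..len) is entirely zero bytes. */
	static bool region_is_zero(const char *buf, size_t len)
	{
		return memchr_inv(buf, 0, len) == NULL;
	}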
static void sync_wait_cb(void *arg)
{
struct gfs2_sbd *sdp = container_of(work, struct gfs2_sbd, sd_control_work.work);
struct lm_lockstruct *ls = &sdp->sd_lockstruct;
- char lvb_bits[GDLM_LVB_SIZE];
uint32_t block_gen, start_gen, lvb_gen, flags;
int recover_set = 0;
int write_lvb = 0;
return;
}
- control_lvb_read(ls, &lvb_gen, lvb_bits);
+ control_lvb_read(ls, &lvb_gen, ls->ls_lvb_bits);
spin_lock(&ls->ls_recover_spin);
if (block_gen != ls->ls_recover_block ||
ls->ls_recover_result[i] = 0;
- if (!test_bit_le(i, lvb_bits + JID_BITMAP_OFFSET))
+ if (!test_bit_le(i, ls->ls_lvb_bits + JID_BITMAP_OFFSET))
continue;
- __clear_bit_le(i, lvb_bits + JID_BITMAP_OFFSET);
+ __clear_bit_le(i, ls->ls_lvb_bits + JID_BITMAP_OFFSET);
write_lvb = 1;
}
}
continue;
if (ls->ls_recover_submit[i] < start_gen) {
ls->ls_recover_submit[i] = 0;
- __set_bit_le(i, lvb_bits + JID_BITMAP_OFFSET);
+ __set_bit_le(i, ls->ls_lvb_bits + JID_BITMAP_OFFSET);
}
}
/* even if there are no bits to set, we need to write the
spin_unlock(&ls->ls_recover_spin);
if (write_lvb) {
- control_lvb_write(ls, start_gen, lvb_bits);
+ control_lvb_write(ls, start_gen, ls->ls_lvb_bits);
flags = DLM_LKF_CONVERT | DLM_LKF_VALBLK;
} else {
flags = DLM_LKF_CONVERT;
*/
for (i = 0; i < recover_size; i++) {
- if (test_bit_le(i, lvb_bits + JID_BITMAP_OFFSET)) {
+ if (test_bit_le(i, ls->ls_lvb_bits + JID_BITMAP_OFFSET)) {
fs_info(sdp, "recover generation %u jid %d\n",
start_gen, i);
gfs2_recover_set(sdp, i);
static int control_mount(struct gfs2_sbd *sdp)
{
struct lm_lockstruct *ls = &sdp->sd_lockstruct;
- char lvb_bits[GDLM_LVB_SIZE];
uint32_t start_gen, block_gen, mount_gen, lvb_gen;
int mounted_mode;
int retries = 0;
* lvb_gen will be non-zero.
*/
- control_lvb_read(ls, &lvb_gen, lvb_bits);
+ control_lvb_read(ls, &lvb_gen, ls->ls_lvb_bits);
if (lvb_gen == 0xFFFFFFFF) {
/* special value to force mount attempts to fail */
* and all lvb bits to be clear (no pending journal recoveries.)
*/
- if (!all_jid_bits_clear(lvb_bits)) {
+ if (!all_jid_bits_clear(ls->ls_lvb_bits)) {
/* journals need recovery, wait until all are clear */
fs_info(sdp, "control_mount wait for journal recovery\n");
goto restart;
static int control_first_done(struct gfs2_sbd *sdp)
{
struct lm_lockstruct *ls = &sdp->sd_lockstruct;
- char lvb_bits[GDLM_LVB_SIZE];
uint32_t start_gen, block_gen;
int error;
memset(ls->ls_recover_result, 0, ls->ls_recover_size*sizeof(uint32_t));
spin_unlock(&ls->ls_recover_spin);
- memset(lvb_bits, 0, sizeof(lvb_bits));
- control_lvb_write(ls, start_gen, lvb_bits);
+ memset(ls->ls_lvb_bits, 0, GDLM_LVB_SIZE);
+ control_lvb_write(ls, start_gen, ls->ls_lvb_bits);
error = mounted_lock(sdp, DLM_LOCK_PR, DLM_LKF_CONVERT);
if (error)
uint32_t old_size, new_size;
int i, max_jid;
+ if (!ls->ls_lvb_bits) {
+ ls->ls_lvb_bits = kzalloc(GDLM_LVB_SIZE, GFP_NOFS);
+ if (!ls->ls_lvb_bits)
+ return -ENOMEM;
+ }
+
max_jid = 0;
for (i = 0; i < num_slots; i++) {
if (max_jid < slots[i].slot - 1)
static void free_recover_size(struct lm_lockstruct *ls)
{
+ kfree(ls->ls_lvb_bits);
kfree(ls->ls_recover_submit);
kfree(ls->ls_recover_result);
ls->ls_recover_submit = NULL;
ls->ls_recover_size = 0;
ls->ls_recover_submit = NULL;
ls->ls_recover_result = NULL;
+ ls->ls_lvb_bits = NULL;
error = set_recover_size(sdp, NULL, 0);
if (error)
RB_CLEAR_NODE(&ip->i_res->rs_node);
out:
up_write(&ip->i_rw_mutex);
- return 0;
+ return error;
}
static void dump_rs(struct seq_file *seq, const struct gfs2_blkreserv *rs)
const struct gfs2_bitmap *bi, unsigned minlen, u64 *ptrimmed)
{
struct super_block *sb = sdp->sd_vfs;
- struct block_device *bdev = sb->s_bdev;
- const unsigned int sects_per_blk = sdp->sd_sb.sb_bsize /
- bdev_logical_block_size(sb->s_bdev);
u64 blk;
sector_t start = 0;
- sector_t nr_sects = 0;
+ sector_t nr_blks = 0;
int rv;
unsigned int x;
u32 trimmed = 0;
if (diff == 0)
continue;
blk = offset + ((bi->bi_start + x) * GFS2_NBBY);
- blk *= sects_per_blk; /* convert to sectors */
while(diff) {
if (diff & 1) {
- if (nr_sects == 0)
+ if (nr_blks == 0)
goto start_new_extent;
- if ((start + nr_sects) != blk) {
- if (nr_sects >= minlen) {
- rv = blkdev_issue_discard(bdev,
- start, nr_sects,
+ if ((start + nr_blks) != blk) {
+ if (nr_blks >= minlen) {
+ rv = sb_issue_discard(sb,
+ start, nr_blks,
GFP_NOFS, 0);
if (rv)
goto fail;
- trimmed += nr_sects;
+ trimmed += nr_blks;
}
- nr_sects = 0;
+ nr_blks = 0;
start_new_extent:
start = blk;
}
- nr_sects += sects_per_blk;
+ nr_blks++;
}
diff >>= 2;
- blk += sects_per_blk;
+ blk++;
}
}
- if (nr_sects >= minlen) {
- rv = blkdev_issue_discard(bdev, start, nr_sects, GFP_NOFS, 0);
+ if (nr_blks >= minlen) {
+ rv = sb_issue_discard(sb, start, nr_blks, GFP_NOFS, 0);
if (rv)
goto fail;
- trimmed += nr_sects;
+ trimmed += nr_blks;
}
if (ptrimmed)
*ptrimmed = trimmed;
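The unit change works because sb_issue_discard() takes filesystem
blocks and does the sector conversion internally, which is what lets
the loop above drop sects_per_blk. Roughly, the helper expands to:

	/* Sketch: what sb_issue_discard() does with block-sized units. */
	static int demo_discard_blocks(struct super_block *sb,
				       sector_t blk, sector_t nr_blks)
	{
		int shift = sb->s_blocksize_bits - 9;	/* fs blocks -> 512B sectors */

		return blkdev_issue_discard(sb->s_bdev, blk << shift,
					    nr_blks << shift, GFP_NOFS, 0);
	}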
struct address_space *mapping = inode->i_mapping;
struct page *page;
void *fsdata;
- u32 size = inode->i_size;
+ loff_t size = inode->i_size;
res = pagecache_write_begin(NULL, mapping, size, 0,
AOP_FLAG_UNINTERRUPTIBLE,
* way when do_mmap_pgoff unwinds (may be important on powerpc
* and ia64).
*/
- vma->vm_flags |= VM_HUGETLB | VM_DONTEXPAND | VM_DONTDUMP;
+ vma->vm_flags |= VM_HUGETLB | VM_DONTEXPAND;
vma->vm_ops = &hugetlb_vm_ops;
if (vma->vm_pgoff & (~huge_page_mask(h) >> PAGE_SHIFT))
* inode to the back of the list so we don't spin on it.
*/
if (!spin_trylock(&inode->i_lock)) {
- list_move_tail(&inode->i_lru, &sb->s_inode_lru);
+ list_move(&inode->i_lru, &sb->s_inode_lru);
continue;
}
* dcache.c
*/
extern struct dentry *__d_alloc(struct super_block *, const struct qstr *);
+
+/*
+ * read_write.c
+ */
+extern ssize_t __kernel_write(struct file *, const char *, size_t, loff_t *);
}
mnt->mnt.mnt_flags = old->mnt.mnt_flags & ~MNT_WRITE_HOLD;
+ /* Don't allow unprivileged users to change mount flags */
+ if ((flag & CL_UNPRIVILEGED) && (mnt->mnt.mnt_flags & MNT_READONLY))
+ mnt->mnt.mnt_flags |= MNT_LOCK_READONLY;
+
atomic_inc(&sb->s_active);
mnt->mnt.mnt_sb = sb;
mnt->mnt.mnt_root = dget(root);
if (IS_ERR(mnt)) {
err = PTR_ERR(mnt);
- goto out;
+ goto out2;
}
err = graft_tree(mnt, path);
if (readonly_request == __mnt_is_readonly(mnt))
return 0;
+ if (mnt->mnt_flags & MNT_LOCK_READONLY)
+ return -EPERM;
+
if (readonly_request)
error = mnt_make_readonly(real_mount(mnt));
else
/* First pass: copy the tree topology */
copy_flags = CL_COPY_ALL | CL_EXPIRE;
if (user_ns != mnt_ns->user_ns)
- copy_flags |= CL_SHARED_TO_SLAVE;
+ copy_flags |= CL_SHARED_TO_SLAVE | CL_UNPRIVILEGED;
new = copy_tree(old, old->mnt.mnt_root, copy_flags);
if (IS_ERR(new)) {
up_write(&namespace_sem);
return check_mnt(real_mount(mnt));
}
+bool current_chrooted(void)
+{
+ /* Does the current process have a non-standard root */
+ struct path ns_root;
+ struct path fs_root;
+ bool chrooted;
+
+ /* Find the namespace root */
+	ns_root.mnt = &current->nsproxy->mnt_ns->root->mnt;
+ ns_root.dentry = ns_root.mnt->mnt_root;
+ path_get(&ns_root);
+ while (d_mountpoint(ns_root.dentry) && follow_down_one(&ns_root))
+ ;
+
+ get_fs_root(current->fs, &fs_root);
+
+ chrooted = !path_equal(&fs_root, &ns_root);
+
+ path_put(&fs_root);
+ path_put(&ns_root);
+
+ return chrooted;
+}
+
+void update_mnt_policy(struct user_namespace *userns)
+{
+ struct mnt_namespace *ns = current->nsproxy->mnt_ns;
+ struct mount *mnt;
+
+ down_read(&namespace_sem);
+ list_for_each_entry(mnt, &ns->list, mnt_list) {
+ switch (mnt->mnt.mnt_sb->s_magic) {
+ case SYSFS_MAGIC:
+ userns->may_mount_sysfs = true;
+ break;
+ case PROC_SUPER_MAGIC:
+ userns->may_mount_proc = true;
+ break;
+ }
+ if (userns->may_mount_sysfs && userns->may_mount_proc)
+ break;
+ }
+ up_read(&namespace_sem);
+}
+
static void *mntns_get(struct task_struct *task)
{
struct mnt_namespace *ns = NULL;
bl_pipe_msg.bl_wq = &nn->bl_wq;
memset(msg, 0, sizeof(*msg));
- msg->data = kzalloc(1 + sizeof(bl_umount_request), GFP_NOFS);
+ msg->len = sizeof(bl_msg) + bl_msg.totallen;
+ msg->data = kzalloc(msg->len, GFP_NOFS);
if (!msg->data)
goto out;
memcpy(msg->data, &bl_msg, sizeof(bl_msg));
dataptr = (uint8_t *) msg->data;
memcpy(&dataptr[sizeof(bl_msg)], &bl_umount_request, sizeof(bl_umount_request));
- msg->len = sizeof(bl_msg) + bl_msg.totallen;
add_wait_queue(&nn->bl_wq, &wq);
if (rpc_queue_upcall(nn->bl_device_pipe, msg) < 0) {
return ret;
}
-static int nfs_idmap_instantiate(struct key *key, struct key *authkey, char *data)
+static int nfs_idmap_instantiate(struct key *key, struct key *authkey, char *data, size_t datalen)
{
- return key_instantiate_and_link(key, data, strlen(data) + 1,
+ return key_instantiate_and_link(key, data, datalen,
id_resolver_cache->thread_keyring,
authkey);
}
struct key *key, struct key *authkey)
{
char id_str[NFS_UINT_MAXLEN];
+ size_t len;
int ret = -ENOKEY;
/* ret = -ENOKEY */
case IDMAP_CONV_NAMETOID:
if (strcmp(upcall->im_name, im->im_name) != 0)
break;
- sprintf(id_str, "%d", im->im_id);
- ret = nfs_idmap_instantiate(key, authkey, id_str);
+ /* Note: here we store the NUL terminator too */
+ len = sprintf(id_str, "%d", im->im_id) + 1;
+ ret = nfs_idmap_instantiate(key, authkey, id_str, len);
break;
case IDMAP_CONV_IDTONAME:
if (upcall->im_id != im->im_id)
break;
- ret = nfs_idmap_instantiate(key, authkey, im->im_name);
+ len = strlen(im->im_name);
+ ret = nfs_idmap_instantiate(key, authkey, im->im_name, len);
break;
default:
ret = -EINVAL;
struct rpc_cred *cred)
{
struct nfs_net *nn = net_generic(new->cl_net, nfs_net_id);
- struct nfs_client *pos, *n, *prev = NULL;
+ struct nfs_client *pos, *prev = NULL;
struct nfs4_setclientid_res clid = {
.clientid = new->cl_clientid,
.confirm = new->cl_confirm,
int status = -NFS4ERR_STALE_CLIENTID;
spin_lock(&nn->nfs_client_lock);
- list_for_each_entry_safe(pos, n, &nn->nfs_client_list, cl_share_link) {
+ list_for_each_entry(pos, &nn->nfs_client_list, cl_share_link) {
/* If "pos" isn't marked ready, we can't trust the
* remaining fields in "pos" */
- if (pos->cl_cons_state < NFS_CS_READY)
+ if (pos->cl_cons_state > NFS_CS_READY) {
+ atomic_inc(&pos->cl_count);
+ spin_unlock(&nn->nfs_client_lock);
+
+ if (prev)
+ nfs_put_client(prev);
+ prev = pos;
+
+ status = nfs_wait_client_init_complete(pos);
+ spin_lock(&nn->nfs_client_lock);
+ if (status < 0)
+ continue;
+ }
+ if (pos->cl_cons_state != NFS_CS_READY)
continue;
if (pos->rpc_ops != new->rpc_ops)
struct rpc_cred *cred)
{
struct nfs_net *nn = net_generic(new->cl_net, nfs_net_id);
- struct nfs_client *pos, *n, *prev = NULL;
+ struct nfs_client *pos, *prev = NULL;
int status = -NFS4ERR_STALE_CLIENTID;
spin_lock(&nn->nfs_client_lock);
- list_for_each_entry_safe(pos, n, &nn->nfs_client_list, cl_share_link) {
+ list_for_each_entry(pos, &nn->nfs_client_list, cl_share_link) {
/* If "pos" isn't marked ready, we can't trust the
* remaining fields in "pos", especially the client
* ID and serverowner fields. Wait for CREATE_SESSION
* to finish. */
- if (pos->cl_cons_state < NFS_CS_READY) {
+ if (pos->cl_cons_state > NFS_CS_READY) {
atomic_inc(&pos->cl_count);
spin_unlock(&nn->nfs_client_lock);
nfs_put_client(prev);
prev = pos;
- nfs4_schedule_lease_recovery(pos);
status = nfs_wait_client_init_complete(pos);
- if (status < 0) {
- nfs_put_client(pos);
- spin_lock(&nn->nfs_client_lock);
- continue;
+ if (status == 0) {
+ nfs4_schedule_lease_recovery(pos);
+ status = nfs4_wait_clnt_recover(pos);
}
- status = pos->cl_cons_state;
spin_lock(&nn->nfs_client_lock);
if (status < 0)
continue;
}
+ if (pos->cl_cons_state != NFS_CS_READY)
+ continue;
if (pos->rpc_ops != new->rpc_ops)
continue;
continue;
atomic_inc(&pos->cl_count);
- spin_unlock(&nn->nfs_client_lock);
+ *result = pos;
+ status = 0;
dprintk("NFS: <-- %s using nfs_client = %p ({%d})\n",
__func__, pos, atomic_read(&pos->cl_count));
-
- *result = pos;
- return 0;
+ break;
}
/* No matching nfs_client found. */
spin_unlock(&nn->nfs_client_lock);
dprintk("NFS: <-- %s status = %d\n", __func__, status);
+ if (prev)
+ nfs_put_client(prev);
return status;
}
#endif /* CONFIG_NFS_V4_1 */
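Both walkers above use the same pin, drop, wait, revalidate idiom: an
entry found under nn->nfs_client_lock cannot be slept on directly, so
it is pinned with a reference, the lock is dropped for the wait, and
its state is re-checked once the lock is retaken. A condensed sketch
(demo_wait_ready() is an illustrative name):

	static int demo_wait_ready(struct nfs_net *nn, struct nfs_client *pos,
				   struct nfs_client **prev)
	{
		int status;

		atomic_inc(&pos->cl_count);		/* pin across the sleep */
		spin_unlock(&nn->nfs_client_lock);
		if (*prev)
			nfs_put_client(*prev);		/* drop the previous pin */
		*prev = pos;
		status = nfs_wait_client_init_complete(pos);
		spin_lock(&nn->nfs_client_lock);	/* caller revalidates pos */
		return status;
	}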
{
if (!test_and_clear_bit(NFS_LAYOUT_RETURN, &lo->plh_flags))
return;
- clear_bit(NFS_INO_LAYOUTCOMMIT, &NFS_I(inode)->flags);
pnfs_return_layout(inode);
}
/* Save the delegation */
nfs4_stateid_copy(&stateid, &delegation->stateid);
rcu_read_unlock();
+ nfs_release_seqid(opendata->o_arg.seqid);
ret = nfs_may_open(state->inode, state->owner->so_cred, open_mode);
if (ret != 0)
goto out;
int status;
if (pnfs_ld_layoutret_on_setattr(inode))
- pnfs_return_layout(inode);
+ pnfs_commit_and_return_layout(inode);
nfs_fattr_init(fattr);
static void nfs4_layoutcommit_release(void *calldata)
{
struct nfs4_layoutcommit_data *data = calldata;
- struct pnfs_layout_segment *lseg, *tmp;
- unsigned long *bitlock = &NFS_I(data->args.inode)->flags;
pnfs_cleanup_layoutcommit(data);
- /* Matched by references in pnfs_set_layoutcommit */
- list_for_each_entry_safe(lseg, tmp, &data->lseg_list, pls_lc_list) {
- list_del_init(&lseg->pls_lc_list);
- if (test_and_clear_bit(NFS_LSEG_LAYOUTCOMMIT,
- &lseg->pls_flags))
- pnfs_put_lseg(lseg);
- }
-
- clear_bit_unlock(NFS_INO_LAYOUTCOMMITTING, bitlock);
- smp_mb__after_clear_bit();
- wake_up_bit(bitlock, NFS_INO_LAYOUTCOMMITTING);
-
put_rpccred(data->cred);
kfree(data);
}
status = PTR_ERR(clnt);
break;
}
- clp->cl_rpcclient = clnt;
+ /* Note: this is safe because we haven't yet marked the
+ * client as ready, so we are the only user of
+ * clp->cl_rpcclient
+ */
+ clnt = xchg(&clp->cl_rpcclient, clnt);
+ rpc_shutdown_client(clnt);
+ clnt = clp->cl_rpcclient;
goto again;
case -NFS4ERR_MINOR_VERS_MISMATCH:
lo_seg_intersecting(lseg_range, recall_range);
}
+static bool pnfs_lseg_dec_and_remove_zero(struct pnfs_layout_segment *lseg,
+ struct list_head *tmp_list)
+{
+ if (!atomic_dec_and_test(&lseg->pls_refcount))
+ return false;
+ pnfs_layout_remove_lseg(lseg->pls_layout, lseg);
+ list_add(&lseg->pls_list, tmp_list);
+ return true;
+}
+
/* Returns 1 if lseg is removed from list, 0 otherwise */
static int mark_lseg_invalid(struct pnfs_layout_segment *lseg,
struct list_head *tmp_list)
*/
dprintk("%s: lseg %p ref %d\n", __func__, lseg,
atomic_read(&lseg->pls_refcount));
- if (atomic_dec_and_test(&lseg->pls_refcount)) {
- pnfs_layout_remove_lseg(lseg->pls_layout, lseg);
- list_add(&lseg->pls_list, tmp_list);
+ if (pnfs_lseg_dec_and_remove_zero(lseg, tmp_list))
rv = 1;
- }
}
return rv;
}
return lseg;
}
+static void pnfs_clear_layoutcommit(struct inode *inode,
+ struct list_head *head)
+{
+ struct nfs_inode *nfsi = NFS_I(inode);
+ struct pnfs_layout_segment *lseg, *tmp;
+
+ if (!test_and_clear_bit(NFS_INO_LAYOUTCOMMIT, &nfsi->flags))
+ return;
+ list_for_each_entry_safe(lseg, tmp, &nfsi->layout->plh_segs, pls_list) {
+ if (!test_and_clear_bit(NFS_LSEG_LAYOUTCOMMIT, &lseg->pls_flags))
+ continue;
+ pnfs_lseg_dec_and_remove_zero(lseg, head);
+ }
+}
+
/*
* Initiates a LAYOUTRETURN(FILE), and removes the pnfs_layout_hdr
* when the layout segment list is empty.
/* Reference matched in nfs4_layoutreturn_release */
pnfs_get_layout_hdr(lo);
empty = list_empty(&lo->plh_segs);
+ pnfs_clear_layoutcommit(ino, &tmp_list);
pnfs_mark_matching_lsegs_invalid(lo, &tmp_list, NULL);
/* Don't send a LAYOUTRETURN if list was initially empty */
if (empty) {
spin_unlock(&ino->i_lock);
pnfs_free_lseg_list(&tmp_list);
- WARN_ON(test_bit(NFS_INO_LAYOUTCOMMIT, &nfsi->flags));
-
lrp = kzalloc(sizeof(*lrp), GFP_KERNEL);
if (unlikely(lrp == NULL)) {
status = -ENOMEM;
}
EXPORT_SYMBOL_GPL(_pnfs_return_layout);
+int
+pnfs_commit_and_return_layout(struct inode *inode)
+{
+ struct pnfs_layout_hdr *lo;
+ int ret;
+
+ spin_lock(&inode->i_lock);
+ lo = NFS_I(inode)->layout;
+ if (lo == NULL) {
+ spin_unlock(&inode->i_lock);
+ return 0;
+ }
+ pnfs_get_layout_hdr(lo);
+ /* Block new layoutgets and read/write to ds */
+ lo->plh_block_lgets++;
+ spin_unlock(&inode->i_lock);
+ filemap_fdatawait(inode->i_mapping);
+ ret = pnfs_layoutcommit_inode(inode, true);
+ if (ret == 0)
+ ret = _pnfs_return_layout(inode);
+ spin_lock(&inode->i_lock);
+ lo->plh_block_lgets--;
+ spin_unlock(&inode->i_lock);
+ pnfs_put_layout_hdr(lo);
+ return ret;
+}
+
bool pnfs_roc(struct inode *ino)
{
struct pnfs_layout_hdr *lo;
dprintk("pnfs write error = %d\n", hdr->pnfs_error);
if (NFS_SERVER(hdr->inode)->pnfs_curr_ld->flags &
PNFS_LAYOUTRET_ON_ERROR) {
- clear_bit(NFS_INO_LAYOUTCOMMIT, &NFS_I(hdr->inode)->flags);
pnfs_return_layout(hdr->inode);
}
if (!test_and_set_bit(NFS_IOHDR_REDO, &hdr->flags))
dprintk("pnfs read error = %d\n", hdr->pnfs_error);
if (NFS_SERVER(hdr->inode)->pnfs_curr_ld->flags &
PNFS_LAYOUTRET_ON_ERROR) {
- clear_bit(NFS_INO_LAYOUTCOMMIT, &NFS_I(hdr->inode)->flags);
pnfs_return_layout(hdr->inode);
}
if (!test_and_set_bit(NFS_IOHDR_REDO, &hdr->flags))
list_for_each_entry(lseg, &NFS_I(inode)->layout->plh_segs, pls_list) {
if (lseg->pls_range.iomode == IOMODE_RW &&
- test_bit(NFS_LSEG_LAYOUTCOMMIT, &lseg->pls_flags))
+ test_and_clear_bit(NFS_LSEG_LAYOUTCOMMIT, &lseg->pls_flags))
list_add(&lseg->pls_lc_list, listp);
}
}
+static void pnfs_list_write_lseg_done(struct inode *inode, struct list_head *listp)
+{
+ struct pnfs_layout_segment *lseg, *tmp;
+ unsigned long *bitlock = &NFS_I(inode)->flags;
+
+ /* Matched by references in pnfs_set_layoutcommit */
+ list_for_each_entry_safe(lseg, tmp, listp, pls_lc_list) {
+ list_del_init(&lseg->pls_lc_list);
+ pnfs_put_lseg(lseg);
+ }
+
+ clear_bit_unlock(NFS_INO_LAYOUTCOMMITTING, bitlock);
+ smp_mb__after_clear_bit();
+ wake_up_bit(bitlock, NFS_INO_LAYOUTCOMMITTING);
+}
+
void pnfs_set_lo_fail(struct pnfs_layout_segment *lseg)
{
pnfs_layout_io_set_failed(lseg->pls_layout, lseg->pls_range.iomode);
if (nfss->pnfs_curr_ld->cleanup_layoutcommit)
nfss->pnfs_curr_ld->cleanup_layoutcommit(data);
+ pnfs_list_write_lseg_done(data->args.inode, &data->lseg_list);
}
/*
void pnfs_cleanup_layoutcommit(struct nfs4_layoutcommit_data *data);
int pnfs_layoutcommit_inode(struct inode *inode, bool sync);
int _pnfs_return_layout(struct inode *);
+int pnfs_commit_and_return_layout(struct inode *);
void pnfs_ld_write_done(struct nfs_write_data *);
void pnfs_ld_read_done(struct nfs_read_data *);
struct pnfs_layout_segment *pnfs_update_layout(struct inode *ino,
return 0;
}
+static inline int pnfs_commit_and_return_layout(struct inode *inode)
+{
+ return 0;
+}
+
static inline bool
pnfs_ld_layoutret_on_setattr(struct inode *inode)
{
iattr->ia_valid |= ATTR_SIZE;
}
if (bmval[0] & FATTR4_WORD0_ACL) {
- int nace;
+ u32 nace;
struct nfs4_ace *ace;
READ_BUF(4); len += 4;
{
if (rp->c_type == RC_REPLBUFF)
kfree(rp->c_replvec.iov_base);
- hlist_del(&rp->c_hash);
+ if (!hlist_unhashed(&rp->c_hash))
+ hlist_del(&rp->c_hash);
list_del(&rp->c_lru);
--num_drc_entries;
kmem_cache_free(drc_slab, rp);
int nfsd_reply_cache_init(void)
{
+ INIT_LIST_HEAD(&lru_head);
+ max_drc_entries = nfsd_cache_size_limit();
+ num_drc_entries = 0;
+
register_shrinker(&nfsd_reply_cache_shrinker);
drc_slab = kmem_cache_create("nfsd_drc", sizeof(struct svc_cacherep),
0, 0, NULL);
if (!cache_hash)
goto out_nomem;
- INIT_LIST_HEAD(&lru_head);
- max_drc_entries = nfsd_cache_size_limit();
- num_drc_entries = 0;
-
return 0;
out_nomem:
printk(KERN_ERR "nfsd: failed to allocate reply cache\n");
int host_err;
int stable = *stablep;
int use_wgather;
+ loff_t pos = offset;
dentry = file->f_path.dentry;
inode = dentry->d_inode;
/* Write the data. */
oldfs = get_fs(); set_fs(KERNEL_DS);
- host_err = vfs_writev(file, (struct iovec __user *)vec, vlen, &offset);
+ host_err = vfs_writev(file, (struct iovec __user *)vec, vlen, &pos);
set_fs(oldfs);
if (host_err < 0)
goto out_nfserr;
#include <linux/mnt_namespace.h>
#include <linux/mount.h>
#include <linux/fs.h>
+#include <linux/nsproxy.h>
#include "internal.h"
#include "pnode.h"
int propagate_mnt(struct mount *dest_mnt, struct dentry *dest_dentry,
struct mount *source_mnt, struct list_head *tree_list)
{
+ struct user_namespace *user_ns = current->nsproxy->mnt_ns->user_ns;
struct mount *m, *child;
int ret = 0;
struct mount *prev_dest_mnt = dest_mnt;
source = get_source(m, prev_dest_mnt, prev_src_mnt, &type);
+ /* Notice when we are propagating across user namespaces */
+ if (m->mnt_ns->user_ns != user_ns)
+ type |= CL_UNPRIVILEGED;
+
child = copy_tree(source, source->mnt.mnt_root, type);
if (IS_ERR(child)) {
ret = PTR_ERR(child);
#define CL_MAKE_SHARED 0x08
#define CL_PRIVATE 0x10
#define CL_SHARED_TO_SLAVE 0x20
+#define CL_UNPRIVILEGED 0x40
static inline void set_mnt_shared(struct mount *mnt)
{
"x (dead)", /* 64 */
"K (wakekill)", /* 128 */
"W (waking)", /* 256 */
+ "P (parked)", /* 512 */
};
static inline const char *get_task_state(struct task_struct *tsk)
free_proc_entry(pde);
}
-/*
- * Remove a /proc entry and free it if it's not currently in use.
- */
-void remove_proc_entry(const char *name, struct proc_dir_entry *parent)
+static void entry_rundown(struct proc_dir_entry *de)
{
- struct proc_dir_entry **p;
- struct proc_dir_entry *de = NULL;
- const char *fn = name;
- unsigned int len;
-
- spin_lock(&proc_subdir_lock);
- if (__xlate_proc_name(name, &parent, &fn) != 0) {
- spin_unlock(&proc_subdir_lock);
- return;
- }
- len = strlen(fn);
-
- for (p = &parent->subdir; *p; p=&(*p)->next ) {
- if (proc_match(len, fn, *p)) {
- de = *p;
- *p = de->next;
- de->next = NULL;
- break;
- }
- }
- spin_unlock(&proc_subdir_lock);
- if (!de) {
- WARN(1, "name '%s'\n", name);
- return;
- }
-
spin_lock(&de->pde_unload_lock);
/*
* Stop accepting new callers into module. If you're
spin_lock(&de->pde_unload_lock);
}
spin_unlock(&de->pde_unload_lock);
+}
+
+/*
+ * Remove a /proc entry and free it if it's not currently in use.
+ */
+void remove_proc_entry(const char *name, struct proc_dir_entry *parent)
+{
+ struct proc_dir_entry **p;
+ struct proc_dir_entry *de = NULL;
+ const char *fn = name;
+ unsigned int len;
+
+ spin_lock(&proc_subdir_lock);
+ if (__xlate_proc_name(name, &parent, &fn) != 0) {
+ spin_unlock(&proc_subdir_lock);
+ return;
+ }
+ len = strlen(fn);
+
+ for (p = &parent->subdir; *p; p=&(*p)->next ) {
+ if (proc_match(len, fn, *p)) {
+ de = *p;
+ *p = de->next;
+ de->next = NULL;
+ break;
+ }
+ }
+ spin_unlock(&proc_subdir_lock);
+ if (!de) {
+ WARN(1, "name '%s'\n", name);
+ return;
+ }
+
+ entry_rundown(de);
if (S_ISDIR(de->mode))
parent->nlink--;
pde_put(de);
}
EXPORT_SYMBOL(remove_proc_entry);
+
+int remove_proc_subtree(const char *name, struct proc_dir_entry *parent)
+{
+ struct proc_dir_entry **p;
+ struct proc_dir_entry *root = NULL, *de, *next;
+ const char *fn = name;
+ unsigned int len;
+
+ spin_lock(&proc_subdir_lock);
+ if (__xlate_proc_name(name, &parent, &fn) != 0) {
+ spin_unlock(&proc_subdir_lock);
+ return -ENOENT;
+ }
+ len = strlen(fn);
+
+ for (p = &parent->subdir; *p; p=&(*p)->next ) {
+ if (proc_match(len, fn, *p)) {
+ root = *p;
+ *p = root->next;
+ root->next = NULL;
+ break;
+ }
+ }
+ if (!root) {
+ spin_unlock(&proc_subdir_lock);
+ return -ENOENT;
+ }
+ de = root;
+ while (1) {
+ next = de->subdir;
+ if (next) {
+ de->subdir = next->next;
+ next->next = NULL;
+ de = next;
+ continue;
+ }
+ spin_unlock(&proc_subdir_lock);
+
+ entry_rundown(de);
+ next = de->parent;
+ if (S_ISDIR(de->mode))
+ next->nlink--;
+ de->nlink = 0;
+ if (de == root)
+ break;
+ pde_put(de);
+
+ spin_lock(&proc_subdir_lock);
+ de = next;
+ }
+ pde_put(root);
+ return 0;
+}
+EXPORT_SYMBOL(remove_proc_subtree);
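
remove_proc_subtree() walks the subtree without recursion: it keeps
detaching the first child until it reaches a childless entry, runs that
entry down, then climbs back through the parent pointer and repeats. A
compacted userspace sketch of the traversal, with illustrative struct
and callback names:

    struct pde {
            struct pde *subdir;     /* first child  */
            struct pde *next;       /* next sibling */
            struct pde *parent;
    };

    static void teardown(struct pde *root, void (*reap)(struct pde *))
    {
            struct pde *de = root, *next;

            for (;;) {
                    next = de->subdir;
                    if (next) {                     /* descend a level */
                            de->subdir = next->next;
                            next->next = NULL;
                            de = next;
                            continue;
                    }
                    if (de == root) {
                            reap(root);             /* done */
                            break;
                    }
                    next = de->parent;              /* save before reaping */
                    reap(de);
                    de = next;                      /* climb and retry */
            }
    }
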
struct inode *proc_get_inode(struct super_block *sb, struct proc_dir_entry *de)
{
- struct inode *inode = iget_locked(sb, de->low_ino);
+ struct inode *inode = new_inode_pseudo(sb);
- if (inode && (inode->i_state & I_NEW)) {
+ if (inode) {
+ inode->i_ino = de->low_ino;
inode->i_mtime = inode->i_atime = inode->i_ctime = CURRENT_TIME;
PROC_I(inode)->pde = de;
inode->i_fop = de->proc_fops;
}
}
- unlock_new_inode(inode);
} else
pde_put(de);
return inode;
#include <linux/sched.h>
#include <linux/module.h>
#include <linux/bitops.h>
+#include <linux/user_namespace.h>
#include <linux/mount.h>
#include <linux/pid_namespace.h>
#include <linux/parser.h>
} else {
ns = task_active_pid_ns(current);
options = data;
+
+ if (!current_user_ns()->may_mount_proc)
+ return ERR_PTR(-EPERM);
}
sb = sget(fs_type, proc_test_super, proc_set_super, flags, ns);
#include <linux/splice.h>
#include <linux/compat.h>
#include "read_write.h"
+#include "internal.h"
#include <asm/uaccess.h>
#include <asm/unistd.h>
EXPORT_SYMBOL(do_sync_write);
+ssize_t __kernel_write(struct file *file, const char *buf, size_t count, loff_t *pos)
+{
+ mm_segment_t old_fs;
+ const char __user *p;
+ ssize_t ret;
+
+ if (!file->f_op || (!file->f_op->write && !file->f_op->aio_write))
+ return -EINVAL;
+
+ old_fs = get_fs();
+ set_fs(get_ds());
+ p = (__force const char __user *)buf;
+ if (count > MAX_RW_COUNT)
+ count = MAX_RW_COUNT;
+ if (file->f_op->write)
+ ret = file->f_op->write(file, p, count, pos);
+ else
+ ret = do_sync_write(file, p, count, pos);
+ set_fs(old_fs);
+ if (ret > 0) {
+ fsnotify_modify(file);
+ add_wchar(current, ret);
+ }
+ inc_syscw(current);
+ return ret;
+}
+
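
__kernel_write() gives in-kernel callers (the splice path below uses it) a
way to write from a kernel buffer under KERNEL_DS without repeating the
__force cast at every call site. A hedged sketch of a caller; the file
pointer, buffer, and helper name are assumptions for illustration:

    /* assumes <linux/fs.h> for struct file and __kernel_write() */
    static int append_record(struct file *log, const char *rec,
                             size_t len, loff_t *pos)
    {
            ssize_t ret = __kernel_write(log, rec, len, pos);

            if (ret < 0)
                    return ret;
            return (size_t)ret == len ? 0 : -EIO;   /* short write */
    }
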
ssize_t vfs_write(struct file *file, const char __user *buf, size_t count, loff_t *pos)
{
ssize_t ret;
if (dbuf->count == ARRAY_SIZE(dbuf->dentries))
return -ENOSPC;
- if (name[0] == '.' && (name[1] == '\0' ||
- (name[1] == '.' && name[2] == '\0')))
+ if (name[0] == '.' && (namelen < 2 ||
+ (namelen == 2 && name[1] == '.')))
return 0;
dentry = lookup_one_len(name, dbuf->xadir, namelen);
#include <linux/security.h>
#include <linux/gfp.h>
#include <linux/socket.h>
+#include "internal.h"
/*
* Attempt to steal a page from a pipe buffer. This should perhaps go into
{
int ret;
void *data;
+ loff_t tmp = sd->pos;
data = buf->ops->map(pipe, buf, 0);
- ret = kernel_write(sd->u.file, data + buf->offset, sd->len, sd->pos);
+ ret = __kernel_write(sd->u.file, data + buf->offset, sd->len, &tmp);
buf->ops->unmap(pipe, buf, data);
return ret;
ino = parent_sd->s_ino;
if (filldir(dirent, ".", 1, filp->f_pos, ino, DT_DIR) == 0)
filp->f_pos++;
+ else
+ return 0;
}
if (filp->f_pos == 1) {
if (parent_sd->s_parent)
ino = parent_sd->s_ino;
if (filldir(dirent, "..", 2, filp->f_pos, ino, DT_DIR) == 0)
filp->f_pos++;
+ else
+ return 0;
}
mutex_lock(&sysfs_mutex);
for (pos = sysfs_dir_pos(ns, parent_sd, filp->f_pos, pos);
return 0;
}
+static loff_t sysfs_dir_llseek(struct file *file, loff_t offset, int whence)
+{
+ struct inode *inode = file_inode(file);
+ loff_t ret;
+
+ mutex_lock(&inode->i_mutex);
+ ret = generic_file_llseek(file, offset, whence);
+ mutex_unlock(&inode->i_mutex);
+
+ return ret;
+}
const struct file_operations sysfs_dir_operations = {
.read = generic_read_dir,
.readdir = sysfs_readdir,
.release = sysfs_dir_release,
- .llseek = generic_file_llseek,
+ .llseek = sysfs_dir_llseek,
};
#include <linux/module.h>
#include <linux/magic.h>
#include <linux/slab.h>
+#include <linux/user_namespace.h>
#include "sysfs.h"
struct super_block *sb;
int error;
+ if (!(flags & MS_KERNMOUNT) && !current_user_ns()->may_mount_sysfs)
+ return ERR_PTR(-EPERM);
+
info = kzalloc(sizeof(*info), GFP_KERNEL);
if (!info)
return ERR_PTR(-ENOMEM);
c->remounting_rw = 1;
c->ro_mount = 0;
+ if (c->space_fixup) {
+ err = ubifs_fixup_free_space(c);
+ if (err)
+ return err;
+ }
+
err = check_free_space(c);
if (err)
goto out;
err = dbg_check_space_info(c);
}
- if (c->space_fixup) {
- err = ubifs_fixup_free_space(c);
- if (err)
- goto out;
- }
-
mutex_unlock(&c->umount_mutex);
return err;
#include <linux/mm_types.h>
#include <linux/bug.h>
+/*
+ * On almost all architectures and configurations, 0 can be used as the
+ * upper ceiling to free_pgtables(): on many architectures it has the same
+ * effect as using TASK_SIZE. However, there is one configuration which
+ * must impose a more careful limit, to avoid freeing kernel pgtables.
+ */
+#ifndef USER_PGTABLES_CEILING
+#define USER_PGTABLES_CEILING 0UL
+#endif
+
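
An architecture whose kernel page tables live inside the user pgd would
override the default along these lines (an illustrative sketch, not taken
verbatim from any arch tree):

    /* Stop free_pgtables() at the top of user space instead of 0,
     * so shared kernel pgtable entries are never torn down. */
    #define USER_PGTABLES_CEILING   TASK_SIZE
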
#ifndef __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
extern int ptep_set_access_flags(struct vm_area_struct *vma,
unsigned long address, pte_t *ptep,
unsigned int need_flush : 1, /* Did free PTEs */
fast_mode : 1; /* No batching */
- unsigned int fullmm;
+ /* we are in the middle of an operation to clear
+ * a full mm and can make some optimizations */
+ unsigned int fullmm : 1,
+ /* we have performed an operation which
+ * requires a complete flush of the tlb */
+ need_flush_all : 1;
struct mmu_gather_batch *active;
struct mmu_gather_batch local;
}
}
-static inline bool atapi_command_packet_set(const u16 *dev_id)
+static inline int atapi_command_packet_set(const u16 *dev_id)
{
return (dev_id[ATA_ID_CONFIG] >> 8) & 0x1f;
}
struct blk_trace {
int trace_state;
- bool rq_based;
struct rchan *rchan;
unsigned long __percpu *sequence;
unsigned char __percpu *msg_data;
#define _KERNEL_CAP_T_SIZE (sizeof(kernel_cap_t))
+struct file;
struct inode;
struct dentry;
struct user_namespace;
extern bool ns_capable(struct user_namespace *ns, int cap);
extern bool nsown_capable(int cap);
extern bool inode_capable(const struct inode *inode, int cap);
+extern bool file_ns_capable(const struct file *file, struct user_namespace *ns, int cap);
/* audit system wants to get cap info from files as well */
extern int get_vfs_caps_from_disk(const struct dentry *dentry, struct cpu_vfs_cap_data *cpu_caps);
} compat_sigset_t;
struct compat_sigaction {
-#ifndef __ARCH_HAS_ODD_SIGACTION
+#ifndef __ARCH_HAS_IRIX_SIGACTION
compat_uptr_t sa_handler;
compat_ulong_t sa_flags;
#else
- compat_ulong_t sa_flags;
+ compat_uint_t sa_flags;
compat_uptr_t sa_handler;
#endif
#ifdef __ARCH_HAS_SA_RESTORER
extern void debug_show_all_locks(void);
extern void debug_show_held_locks(struct task_struct *task);
extern void debug_check_no_locks_freed(const void *from, unsigned long len);
-extern void debug_check_no_locks_held(void);
+extern void debug_check_no_locks_held(struct task_struct *task);
#else
static inline void debug_show_all_locks(void)
{
}
static inline void
-debug_check_no_locks_held(void)
+debug_check_no_locks_held(struct task_struct *task)
{
}
#endif
#endif
#else /* !CONFIG_PM_DEVFREQ */
-static struct devfreq *devfreq_add_device(struct device *dev,
+static inline struct devfreq *devfreq_add_device(struct device *dev,
struct devfreq_dev_profile *profile,
const char *governor_name,
void *data)
return NULL;
}
-static int devfreq_remove_device(struct devfreq *devfreq)
+static inline int devfreq_remove_device(struct devfreq *devfreq)
{
return 0;
}
-static int devfreq_suspend_device(struct devfreq *devfreq)
+static inline int devfreq_suspend_device(struct devfreq *devfreq)
{
return 0;
}
-static int devfreq_resume_device(struct devfreq *devfreq)
+static inline int devfreq_resume_device(struct devfreq *devfreq)
{
return 0;
}
-static struct opp *devfreq_recommended_opp(struct device *dev,
+static inline struct opp *devfreq_recommended_opp(struct device *dev,
unsigned long *freq, u32 flags)
{
- return -EINVAL;
+ return ERR_PTR(-EINVAL);
}
-static int devfreq_register_opp_notifier(struct device *dev,
+static inline int devfreq_register_opp_notifier(struct device *dev,
struct devfreq *devfreq)
{
return -EINVAL;
}
-static int devfreq_unregister_opp_notifier(struct device *dev,
+static inline int devfreq_unregister_opp_notifier(struct device *dev,
struct devfreq *devfreq)
{
return -EINVAL;
u32 ue_count; /* Uncorrectable Errors for this csrow */
u32 ce_count; /* Correctable Errors for this csrow */
- u32 nr_pages; /* combined pages count of all channels */
struct mem_ctl_info *mci; /* the parent */
* sees memory sticks ("dimms"), and the ones that sees memory ranks.
* All old memory controllers enumerate memories per rank, but most
* of the recent drivers enumerate memories per DIMM, instead.
- * When the memory controller is per rank, mem_is_per_rank is true.
+ * When the memory controller is per rank, csbased is true.
*/
unsigned n_layers;
struct edac_mc_layer *layers;
- bool mem_is_per_rank;
+ bool csbased;
/*
* DIMM info. Will eventually remove the entire csrows_info some day
u32 fake_inject_ue;
u16 fake_inject_count;
#endif
- __u8 csbased : 1, /* csrow-based memory controller */
- __resv : 7;
};
#endif
unsigned long count,
u64 *max_size,
int *reset_type);
+typedef efi_status_t efi_query_variable_store_t(u32 attributes, unsigned long size);
/*
* EFI Configuration Table and GUID definitions
#ifdef CONFIG_X86
extern void efi_late_init(void);
extern void efi_free_boot_services(void);
+extern efi_status_t efi_query_variable_store(u32 attributes, unsigned long size);
#else
static inline void efi_late_init(void) {}
static inline void efi_free_boot_services(void) {}
+
+static inline efi_status_t efi_query_variable_store(u32 attributes, unsigned long size)
+{
+ return EFI_SUCCESS;
+}
#endif
extern void __iomem *efi_lookup_mapped_addr(u64 phys_addr);
extern u64 efi_get_iobase (void);
efi_get_variable_t *get_variable;
efi_get_next_variable_t *get_next_variable;
efi_set_variable_t *set_variable;
- efi_query_variable_info_t *query_variable_info;
+ efi_query_variable_store_t *query_variable_store;
};
struct efivars {
#ifndef FREEZER_H_INCLUDED
#define FREEZER_H_INCLUDED
-#include <linux/debug_locks.h>
#include <linux/sched.h>
#include <linux/wait.h>
#include <linux/atomic.h>
static inline bool try_to_freeze(void)
{
- if (!(current->flags & PF_NOFREEZE))
- debug_check_no_locks_held();
might_sleep();
if (likely(!freezing(current)))
return false;
spin_unlock(&fs->lock);
}
+extern bool current_chrooted(void);
+
#endif /* _LINUX_FS_STRUCT_H */
* that the call back has its own recursion protection. If it does
* not set this, then the ftrace infrastructure will add recursion
* protection for the caller.
+ * STUB - The ftrace_ops is just a placeholder.
*/
enum {
FTRACE_OPS_FL_ENABLED = 1 << 0,
FTRACE_OPS_FL_SAVE_REGS = 1 << 4,
FTRACE_OPS_FL_SAVE_REGS_IF_SUPPORTED = 1 << 5,
FTRACE_OPS_FL_RECURSION_SAFE = 1 << 6,
+ FTRACE_OPS_FL_STUB = 1 << 7,
};
struct ftrace_ops {
size_t cnt, loff_t *ppos);
ssize_t ftrace_notrace_write(struct file *file, const char __user *ubuf,
size_t cnt, loff_t *ppos);
-loff_t ftrace_regex_lseek(struct file *file, loff_t offset, int whence);
int ftrace_regex_release(struct inode *inode, struct file *file);
void __init
ftrace_regex_release(struct inode *inode, struct file *file) { return -ENODEV; }
#endif /* CONFIG_DYNAMIC_FTRACE */
+loff_t ftrace_filter_lseek(struct file *file, loff_t offset, int whence);
+
/* totally disable ftrace - can not re-enable after this */
void ftrace_kill(void);
#ifdef CONFIG_IRQ_WORK
bool irq_work_needs_cpu(void);
#else
-static bool irq_work_needs_cpu(void) { return false; }
+static inline bool irq_work_needs_cpu(void) { return false; }
#endif
#endif /* _LINUX_IRQ_WORK_H */
unsigned long int_sqrt(unsigned long);
extern void bust_spinlocks(int yes);
-extern void wake_up_klogd(void);
extern int oops_in_progress; /* If set, an oops, panic(), BUG() or die() is in progress */
extern int panic_timeout;
extern int panic_on_oops;
int __init parse_crashkernel(char *cmdline, unsigned long long system_ram,
unsigned long long *crash_size, unsigned long long *crash_base);
+int parse_crashkernel_high(char *cmdline, unsigned long long system_ram,
+ unsigned long long *crash_size, unsigned long long *crash_base);
int parse_crashkernel_low(char *cmdline, unsigned long long system_ram,
unsigned long long *crash_size, unsigned long long *crash_base);
int crash_shrink_memory(unsigned long new_size);
#ifndef __KVM_HOST_H
#define __KVM_HOST_H
-#if IS_ENABLED(CONFIG_KVM)
-
/*
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
int kvm_write_guest_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
void *data, unsigned long len);
int kvm_gfn_to_hva_cache_init(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
- gpa_t gpa);
+ gpa_t gpa, unsigned long len);
int kvm_clear_guest_page(struct kvm *kvm, gfn_t gfn, int offset, int len);
int kvm_clear_guest(struct kvm *kvm, gpa_t gpa, unsigned long len);
struct kvm_memory_slot *gfn_to_memslot(struct kvm *kvm, gfn_t gfn);
}
#endif /* CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT */
-#else
-static inline void __guest_enter(void) { return; }
-static inline void __guest_exit(void) { return; }
-#endif /* IS_ENABLED(CONFIG_KVM) */
#endif
+
u64 generation;
gpa_t gpa;
unsigned long hva;
+ unsigned long len;
struct kvm_memory_slot *memslot;
};
ATA_HORKAGE_NOSETXFER = (1 << 14), /* skip SETXFER, SATA only */
ATA_HORKAGE_BROKEN_FPDMA_AA = (1 << 15), /* skip AA */
ATA_HORKAGE_DUMP_ID = (1 << 16), /* dump IDENTIFY data */
+ ATA_HORKAGE_MAX_SEC_LBA48 = (1 << 17), /* Set max sects to 65535 */
/* DMA mask for user DMA control: User visible values; DO NOT
renumber */
MAX77693_MUIC_REG_END,
};
+/* MAX77693 INTMASK1~2 Register */
+#define INTMASK1_ADC1K_SHIFT 3
+#define INTMASK1_ADCERR_SHIFT 2
+#define INTMASK1_ADCLOW_SHIFT 1
+#define INTMASK1_ADC_SHIFT 0
+#define INTMASK1_ADC1K_MASK (1 << INTMASK1_ADC1K_SHIFT)
+#define INTMASK1_ADCERR_MASK (1 << INTMASK1_ADCERR_SHIFT)
+#define INTMASK1_ADCLOW_MASK (1 << INTMASK1_ADCLOW_SHIFT)
+#define INTMASK1_ADC_MASK (1 << INTMASK1_ADC_SHIFT)
+
+#define INTMASK2_VIDRM_SHIFT 5
+#define INTMASK2_VBVOLT_SHIFT 4
+#define INTMASK2_DXOVP_SHIFT 3
+#define INTMASK2_DCDTMR_SHIFT 2
+#define INTMASK2_CHGDETRUN_SHIFT 1
+#define INTMASK2_CHGTYP_SHIFT 0
+#define INTMASK2_VIDRM_MASK (1 << INTMASK2_VIDRM_SHIFT)
+#define INTMASK2_VBVOLT_MASK (1 << INTMASK2_VBVOLT_SHIFT)
+#define INTMASK2_DXOVP_MASK (1 << INTMASK2_DXOVP_SHIFT)
+#define INTMASK2_DCDTMR_MASK (1 << INTMASK2_DCDTMR_SHIFT)
+#define INTMASK2_CHGDETRUN_MASK (1 << INTMASK2_CHGDETRUN_SHIFT)
+#define INTMASK2_CHGTYP_MASK (1 << INTMASK2_CHGTYP_SHIFT)
+
/* MAX77693 MUIC - STATUS1~3 Register */
#define STATUS1_ADC_SHIFT (0)
#define STATUS1_ADCLOW_SHIFT (5)
#define VM_PFNMAP 0x00000400 /* Page-ranges managed without "struct page", just pure PFN */
#define VM_DENYWRITE 0x00000800 /* ETXTBSY on write attempts.. */
-#define VM_POPULATE 0x00001000
#define VM_LOCKED 0x00002000
#define VM_IO 0x00004000 /* Memory mapped I/O or similar */
unsigned long pfn);
int vm_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
unsigned long pfn);
+int vm_iomap_memory(struct vm_area_struct *vma, phys_addr_t start, unsigned long len);
+
struct page *follow_page_mask(struct vm_area_struct *vma,
unsigned long address, unsigned int foll_flags,
{
return _calc_vm_trans(flags, MAP_GROWSDOWN, VM_GROWSDOWN ) |
_calc_vm_trans(flags, MAP_DENYWRITE, VM_DENYWRITE ) |
- ((flags & MAP_LOCKED) ? (VM_LOCKED | VM_POPULATE) : 0) |
- (((flags & (MAP_POPULATE | MAP_NONBLOCK)) == MAP_POPULATE) ?
- VM_POPULATE : 0);
+ _calc_vm_trans(flags, MAP_LOCKED, VM_LOCKED );
}
#endif /* _LINUX_MMAN_H */
return test_bit(ZONE_OOM_LOCKED, &zone->flags);
}
-static inline unsigned zone_end_pfn(const struct zone *zone)
+static inline unsigned long zone_end_pfn(const struct zone *zone)
{
return zone->zone_start_pfn + zone->spanned_pages;
}
#define MNT_INTERNAL 0x4000
+#define MNT_LOCK_READONLY 0x400000
+
struct vfsmount {
struct dentry *mnt_root; /* root of the mounted tree */
struct super_block *mnt_sb; /* pointer to superblock */
#define STMLCDIF_18BIT 2 /** pixel data bus to the display is of 18 bit width */
#define STMLCDIF_24BIT 3 /** pixel data bus to the display is of 24 bit width */
-#define FB_SYNC_DATA_ENABLE_HIGH_ACT (1 << 6)
-#define FB_SYNC_DOTCLK_FAILING_ACT (1 << 7) /* failing/negtive edge sampling */
+#define MXSFB_SYNC_DATA_ENABLE_HIGH_ACT (1 << 6)
+#define MXSFB_SYNC_DOTCLK_FAILING_ACT (1 << 7) /* falling/negative edge sampling */
struct mxsfb_platform_data {
struct fb_videomode *mode_list;
* allocated. If specified, fb_size must also be specified.
* fb_phys must be unused by Linux.
*/
+ u32 sync; /* sync mask, contains MXSFB specifics not
+ * carried in fb_info->var.sync
+ */
};
#endif /* __LINUX_MXSFB_H */
#define NETDEV_HW_ADDR_T_SLAVE 3
#define NETDEV_HW_ADDR_T_UNICAST 4
#define NETDEV_HW_ADDR_T_MULTICAST 5
- bool synced;
bool global_use;
int refcount;
+ int synced;
struct rcu_head rcu_head;
};
*
* int (*ndo_bridge_setlink)(struct net_device *dev, struct nlmsghdr *nlh)
* int (*ndo_bridge_getlink)(struct sk_buff *skb, u32 pid, u32 seq,
- * struct net_device *dev)
+ * struct net_device *dev, u32 filter_mask)
*
* int (*ndo_change_carrier)(struct net_device *dev, bool new_carrier);
* Called to change device carrier. Soft-devices (like dummy, team, etc)
#define type_pf_data_tlist TOKEN(TYPE, PF, _data_tlist)
#define type_pf_data_next TOKEN(TYPE, PF, _data_next)
#define type_pf_data_flags TOKEN(TYPE, PF, _data_flags)
+#define type_pf_data_reset_flags TOKEN(TYPE, PF, _data_reset_flags)
#ifdef IP_SET_HASH_WITH_NETS
#define type_pf_data_match TOKEN(TYPE, PF, _data_match)
#else
struct ip_set_hash *h = set->data;
struct htable *t, *orig = h->table;
u8 htable_bits = orig->htable_bits;
- const struct type_pf_elem *data;
+ struct type_pf_elem *data;
struct hbucket *n, *m;
- u32 i, j;
+ u32 i, j, flags = 0;
int ret;
retry:
n = hbucket(orig, i);
for (j = 0; j < n->pos; j++) {
data = ahash_data(n, j);
+#ifdef IP_SET_HASH_WITH_NETS
+ flags = 0;
+ type_pf_data_reset_flags(data, &flags);
+#endif
m = hbucket(t, HKEY(data, h->initval, htable_bits));
- ret = type_pf_elem_add(m, data, AHASH_MAX(h), 0);
+ ret = type_pf_elem_add(m, data, AHASH_MAX(h), flags);
if (ret < 0) {
+#ifdef IP_SET_HASH_WITH_NETS
+ type_pf_data_flags(data, flags);
+#endif
read_unlock_bh(&set->lock);
ahash_destroy(t);
if (ret == -EAGAIN)
struct ip_set_hash *h = set->data;
struct htable *t, *orig = h->table;
u8 htable_bits = orig->htable_bits;
- const struct type_pf_elem *data;
+ struct type_pf_elem *data;
struct hbucket *n, *m;
- u32 i, j;
+ u32 i, j, flags = 0;
int ret;
/* Try to cleanup once */
n = hbucket(orig, i);
for (j = 0; j < n->pos; j++) {
data = ahash_tdata(n, j);
+#ifdef IP_SET_HASH_WITH_NETS
+ flags = 0;
+ type_pf_data_reset_flags(data, &flags);
+#endif
m = hbucket(t, HKEY(data, h->initval, htable_bits));
- ret = type_pf_elem_tadd(m, data, AHASH_MAX(h), 0,
- ip_set_timeout_get(type_pf_data_timeout(data)));
+ ret = type_pf_elem_tadd(m, data, AHASH_MAX(h), flags,
+ ip_set_timeout_get(type_pf_data_timeout(data)));
if (ret < 0) {
+#ifdef IP_SET_HASH_WITH_NETS
+ type_pf_data_flags(data, flags);
+#endif
read_unlock_bh(&set->lock);
ahash_destroy(t);
if (ret == -EAGAIN)
#undef type_pf_data_tlist
#undef type_pf_data_next
#undef type_pf_data_flags
+#undef type_pf_data_reset_flags
#undef type_pf_data_match
#undef type_pf_elem
NVME_LBAF_RP_DEGRADED = 3,
};
+struct nvme_smart_log {
+ __u8 critical_warning;
+ __u8 temperature[2];
+ __u8 avail_spare;
+ __u8 spare_thresh;
+ __u8 percent_used;
+ __u8 rsvd6[26];
+ __u8 data_units_read[16];
+ __u8 data_units_written[16];
+ __u8 host_reads[16];
+ __u8 host_writes[16];
+ __u8 ctrl_busy_time[16];
+ __u8 power_cycles[16];
+ __u8 power_on_hours[16];
+ __u8 unsafe_shutdowns[16];
+ __u8 media_errors[16];
+ __u8 num_err_log_entries[16];
+ __u8 rsvd192[320];
+};
+
+enum {
+ NVME_SMART_CRIT_SPARE = 1 << 0,
+ NVME_SMART_CRIT_TEMPERATURE = 1 << 1,
+ NVME_SMART_CRIT_RELIABILITY = 1 << 2,
+ NVME_SMART_CRIT_MEDIA = 1 << 3,
+ NVME_SMART_CRIT_VOLATILE_MEMORY = 1 << 4,
+};
+
struct nvme_lba_range_type {
__u8 type;
__u8 attributes;
void __iomem __must_check *pci_map_rom(struct pci_dev *pdev, size_t *size);
void pci_unmap_rom(struct pci_dev *pdev, void __iomem *rom);
size_t pci_get_rom_size(struct pci_dev *pdev, void __iomem *rom, size_t size);
+void __iomem __must_check *pci_platform_rom(struct pci_dev *pdev, size_t *size);
/* Power management related routines */
int pci_save_state(struct pci_dev *dev);
#else /* !CONFIG_PREEMPT_COUNT */
-#define preempt_disable() do { } while (0)
-#define sched_preempt_enable_no_resched() do { } while (0)
-#define preempt_enable_no_resched() do { } while (0)
-#define preempt_enable() do { } while (0)
-
-#define preempt_disable_notrace() do { } while (0)
-#define preempt_enable_no_resched_notrace() do { } while (0)
-#define preempt_enable_notrace() do { } while (0)
+/*
+ * Even if we don't have any preemption, we need preempt disable/enable
+ * to be barriers, so that we don't have things like get_user/put_user
+ * that can cause faults and scheduling migrate into our preempt-protected
+ * region.
+ */
+#define preempt_disable() barrier()
+#define sched_preempt_enable_no_resched() barrier()
+#define preempt_enable_no_resched() barrier()
+#define preempt_enable() barrier()
+
+#define preempt_disable_notrace() barrier()
+#define preempt_enable_no_resched_notrace() barrier()
+#define preempt_enable_notrace() barrier()
#endif /* CONFIG_PREEMPT_COUNT */
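
The point of the change is that an empty macro is not a compiler barrier:
the compiler may freely move loads and stores, including ones that can
fault, across it. A minimal userspace demonstration of the replacement
idiom, assuming GCC-style inline asm:

    /* With fence() defined as nothing, the compiler is free to
     * reorder the two stores; with the "memory" clobber it must
     * keep them in program order at this point, even though the
     * barrier compiles to no machine code at all. */
    #define fence() __asm__ __volatile__("" ::: "memory")

    static int data, ready;

    void publish(int v)
    {
            data = v;
            fence();        /* compiler barrier only */
            ready = 1;
    }
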
extern int dmesg_restrict;
extern int kptr_restrict;
+extern void wake_up_klogd(void);
+
void log_buf_kexec_setup(void);
void __init setup_log_buf(int early);
#else
return false;
}
+static inline void wake_up_klogd(void)
+{
+}
+
static inline void log_buf_kexec_setup(void)
{
}
const struct file_operations *proc_fops,
void *data);
extern void remove_proc_entry(const char *name, struct proc_dir_entry *parent);
+extern int remove_proc_subtree(const char *name, struct proc_dir_entry *parent);
struct pid_namespace;
return NULL;
}
#define remove_proc_entry(name, parent) do {} while (0)
+#define remove_proc_subtree(name, parent) do {} while (0)
static inline struct proc_dir_entry *proc_symlink(const char *name,
struct proc_dir_entry *parent,const char *dest) {return NULL;}
#define TASK_DEAD 64
#define TASK_WAKEKILL 128
#define TASK_WAKING 256
-#define TASK_STATE_MAX 512
+#define TASK_PARKED 512
+#define TASK_STATE_MAX 1024
-#define TASK_STATE_TO_CHAR_STR "RSDTtZXxKW"
+#define TASK_STATE_TO_CHAR_STR "RSDTtZXxKWP"
extern char ___assert_task_state[1 - 2*!!(
sizeof(TASK_STATE_TO_CHAR_STR)-1 != ilog2(TASK_STATE_MAX)+1)];
* This hook can be used by the module to update any security state
* associated with the TUN device's security structure.
* @security pointer to the TUN device's security structure.
+ * @skb_owned_by:
+ * This hook sets the packet's owning sock.
+ * @skb is the packet.
+ * @sk the sock which owns the packet.
*
* Security hooks for XFRM operations.
*
int (*tun_dev_attach_queue) (void *security);
int (*tun_dev_attach) (struct sock *sk, void *security);
int (*tun_dev_open) (void *security);
+ void (*skb_owned_by) (struct sk_buff *skb, struct sock *sk);
#endif /* CONFIG_SECURITY_NETWORK */
#ifdef CONFIG_SECURITY_NETWORK_XFRM
int security_tun_dev_attach(struct sock *sk, void *security);
int security_tun_dev_open(void *security);
+void security_skb_owned_by(struct sk_buff *skb, struct sock *sk);
+
#else /* CONFIG_SECURITY_NETWORK */
static inline int security_unix_stream_connect(struct sock *sock,
struct sock *other,
{
return 0;
}
+
+static inline void security_skb_owned_by(struct sk_buff *skb, struct sock *sk)
+{
+}
+
#endif /* CONFIG_SECURITY_NETWORK */
#ifdef CONFIG_SECURITY_NETWORK_XFRM
extern int sigsuspend(sigset_t *);
struct sigaction {
-#ifndef __ARCH_HAS_ODD_SIGACTION
+#ifndef __ARCH_HAS_IRIX_SIGACTION
__sighandler_t sa_handler;
unsigned long sa_flags;
#else
- unsigned long sa_flags;
+ unsigned int sa_flags;
__sighandler_t sa_handler;
#endif
#ifdef __ARCH_HAS_SA_RESTORER
#endif
}
+static inline void nf_reset_trace(struct sk_buff *skb)
+{
+#if IS_ENABLED(CONFIG_NETFILTER_XT_TARGET_TRACE)
+ skb->nf_trace = 0;
+#endif
+}
+
/* Note: This doesn't put any conntrack and bridge info in dst. */
static inline void __nf_copy(struct sk_buff *dst, const struct sk_buff *src)
{
* In the debug case, 1 means unlocked, 0 means locked. (the values
* are inverted, to catch initialization bugs)
*
- * No atomicity anywhere, we are on UP.
+ * No atomicity anywhere, we are on UP. However, we still need
+ * the compiler barriers, because we do not want the compiler to
+ * move potentially faulting instructions (notably user accesses)
+ * into the locked sequence, resulting in non-atomic execution.
*/
#ifdef CONFIG_DEBUG_SPINLOCK
static inline void arch_spin_lock(arch_spinlock_t *lock)
{
lock->slock = 0;
+ barrier();
}
static inline void
{
local_irq_save(flags);
lock->slock = 0;
+ barrier();
}
static inline int arch_spin_trylock(arch_spinlock_t *lock)
char oldval = lock->slock;
lock->slock = 0;
+ barrier();
return oldval > 0;
}
static inline void arch_spin_unlock(arch_spinlock_t *lock)
{
+ barrier();
lock->slock = 1;
}
/*
* Read-write spinlocks. No debug version.
*/
-#define arch_read_lock(lock) do { (void)(lock); } while (0)
-#define arch_write_lock(lock) do { (void)(lock); } while (0)
-#define arch_read_trylock(lock) ({ (void)(lock); 1; })
-#define arch_write_trylock(lock) ({ (void)(lock); 1; })
-#define arch_read_unlock(lock) do { (void)(lock); } while (0)
-#define arch_write_unlock(lock) do { (void)(lock); } while (0)
+#define arch_read_lock(lock) do { barrier(); (void)(lock); } while (0)
+#define arch_write_lock(lock) do { barrier(); (void)(lock); } while (0)
+#define arch_read_trylock(lock) ({ barrier(); (void)(lock); 1; })
+#define arch_write_trylock(lock) ({ barrier(); (void)(lock); 1; })
+#define arch_read_unlock(lock) do { barrier(); (void)(lock); } while (0)
+#define arch_write_unlock(lock) do { barrier(); (void)(lock); } while (0)
#else /* DEBUG_SPINLOCK */
#define arch_spin_is_locked(lock) ((void)(lock), 0)
/* for sched.c and kernel_lock.c: */
-# define arch_spin_lock(lock) do { (void)(lock); } while (0)
-# define arch_spin_lock_flags(lock, flags) do { (void)(lock); } while (0)
-# define arch_spin_unlock(lock) do { (void)(lock); } while (0)
-# define arch_spin_trylock(lock) ({ (void)(lock); 1; })
+# define arch_spin_lock(lock) do { barrier(); (void)(lock); } while (0)
+# define arch_spin_lock_flags(lock, flags) do { barrier(); (void)(lock); } while (0)
+# define arch_spin_unlock(lock) do { barrier(); (void)(lock); } while (0)
+# define arch_spin_trylock(lock) ({ barrier(); (void)(lock); 1; })
#endif /* DEBUG_SPINLOCK */
#define arch_spin_is_contended(lock) (((void)(lock), 0))
#define SSB_CHIPCO_PMU_CTL 0x0600 /* PMU control */
#define SSB_CHIPCO_PMU_CTL_ILP_DIV 0xFFFF0000 /* ILP div mask */
#define SSB_CHIPCO_PMU_CTL_ILP_DIV_SHIFT 16
+#define SSB_CHIPCO_PMU_CTL_PLL_UPD 0x00000400
#define SSB_CHIPCO_PMU_CTL_NOILPONW 0x00000200 /* No ILP on wait */
#define SSB_CHIPCO_PMU_CTL_HTREQEN 0x00000100 /* HT req enable */
#define SSB_CHIPCO_PMU_CTL_ALPREQEN 0x00000080 /* ALP req enable */
void ssb_pmu_set_ldo_voltage(struct ssb_chipcommon *cc,
enum ssb_pmu_ldo_volt_id id, u32 voltage);
void ssb_pmu_set_ldo_paref(struct ssb_chipcommon *cc, bool on);
+void ssb_pmu_spuravoid_pllupdate(struct ssb_chipcommon *cc, int spuravoid);
#endif /* LINUX_SSB_CHIPCO_H_ */
extern void swiotlb_init(int verbose);
int swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose);
extern unsigned long swiotlb_nr_tbl(void);
+unsigned long swiotlb_size_or_default(void);
extern int swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs);
/*
/* Adding event notification support elements */
#define THERMAL_GENL_FAMILY_NAME "thermal_event"
#define THERMAL_GENL_VERSION 0x01
-#define THERMAL_GENL_MCAST_GROUP_NAME "thermal_mc_group"
+#define THERMAL_GENL_MCAST_GROUP_NAME "thermal_mc_grp"
/* Default Thermal Governor */
#if defined(CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE)
--- /dev/null
+#ifndef _LINUX_UCS2_STRING_H_
+#define _LINUX_UCS2_STRING_H_
+
+#include <linux/types.h> /* for size_t */
+#include <linux/stddef.h> /* for NULL */
+
+typedef u16 ucs2_char_t;
+
+unsigned long ucs2_strnlen(const ucs2_char_t *s, size_t maxlength);
+unsigned long ucs2_strlen(const ucs2_char_t *s);
+unsigned long ucs2_strsize(const ucs2_char_t *data, unsigned long maxlength);
+int ucs2_strncmp(const ucs2_char_t *a, const ucs2_char_t *b, size_t len);
+
+#endif /* _LINUX_UCS2_STRING_H_ */
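
A sketch of plausible implementations for the two simplest helpers; the
real bodies live in lib/ucs2_string.c, and this only mirrors the expected
semantics (length counted in UCS-2 characters, not bytes):

    #include <linux/ucs2_string.h>

    unsigned long ucs2_strnlen(const ucs2_char_t *s, size_t maxlength)
    {
            unsigned long length = 0;

            while (*s++ != 0 && length < maxlength)
                    length++;
            return length;
    }

    unsigned long ucs2_strlen(const ucs2_char_t *s)
    {
            return ucs2_strnlen(s, ~0UL);
    }
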
* For encapsulation sockets.
*/
int (*encap_rcv)(struct sock *sk, struct sk_buff *skb);
+ void (*encap_destroy)(struct sock *sk);
};
static inline struct udp_sock *udp_sk(const struct sock *sk)
*/
int (*disable_usb3_lpm_timeout)(struct usb_hcd *,
struct usb_device *, enum usb3_link_state state);
+ int (*find_raw_port_number)(struct usb_hcd *, int);
};
extern int usb_hcd_link_urb_to_ep(struct usb_hcd *hcd, struct urb *urb);
extern int usb_add_hcd(struct usb_hcd *hcd,
unsigned int irqnum, unsigned long irqflags);
extern void usb_remove_hcd(struct usb_hcd *hcd);
+extern int usb_hcd_find_raw_port_number(struct usb_hcd *hcd, int port1);
struct platform_device;
extern void usb_hcd_platform_shutdown(struct platform_device *dev);
* port.
* @flags: usb serial port flags
* @write_wait: a wait_queue_head_t used by the port.
+ * @delta_msr_wait: modem-status-change wait queue
* @work: work queue entry for the line discipline waking up.
* @throttled: nonzero if the read urb is inactive to throttle the device
* @throttle_req: nonzero if the tty wants to throttle us
unsigned long flags;
wait_queue_head_t write_wait;
+ wait_queue_head_t delta_msr_wait;
struct work_struct work;
char throttled;
char throttle_req;
/*-------------------------------------------------------------------------*/
+#if IS_ENABLED(CONFIG_USB_ULPI)
struct usb_phy *otg_ulpi_create(struct usb_phy_io_ops *ops,
unsigned int flags);
+#else
+static inline struct usb_phy *otg_ulpi_create(struct usb_phy_io_ops *ops,
+ unsigned int flags)
+{
+ return NULL;
+}
+#endif
#ifdef CONFIG_USB_ULPI_VIEWPORT
/* access ops for controllers with a viewport register */
kuid_t owner;
kgid_t group;
unsigned int proc_inum;
+ bool may_mount_sysfs;
+ bool may_mount_proc;
};
extern struct user_namespace init_user_ns;
#endif
+void update_mnt_policy(struct user_namespace *userns);
+
#endif /* _LINUX_USER_H */
/* Device notifier */
extern int register_inet6addr_notifier(struct notifier_block *nb);
extern int unregister_inet6addr_notifier(struct notifier_block *nb);
+extern int inet6addr_notifier_call_chain(unsigned long val, void *v);
extern void inet6_netconf_notify_devconf(struct net *net, int type, int ifindex,
struct ipv6_devconf *devconf);
__be32 ports;
__be16 port16[2];
};
+ u16 thoff;
u8 ip_proto;
};
int sysctl_sync_retries;
int sysctl_nat_icmp_send;
int sysctl_pmtu_disc;
+ int sysctl_backup_only;
/* ip_vs_lblc */
int sysctl_lblc_expiration;
return ipvs->sysctl_pmtu_disc;
}
+static inline int sysctl_backup_only(struct netns_ipvs *ipvs)
+{
+ return ipvs->sync_state & IP_VS_STATE_BACKUP &&
+ ipvs->sysctl_backup_only;
+}
+
#else
static inline int sysctl_sync_threshold(struct netns_ipvs *ipvs)
return 1;
}
+static inline int sysctl_backup_only(struct netns_ipvs *ipvs)
+{
+ return 0;
+}
+
#endif
/*
{
struct iphdr *iph = ip_hdr(skb);
- if (iph->frag_off & htons(IP_DF))
- iph->id = 0;
- else {
- /* Use inner packet iph-id if possible. */
- if (skb->protocol == htons(ETH_P_IP) && old_iph->id)
- iph->id = old_iph->id;
- else
- __ip_select_ident(iph, dst,
- (skb_shinfo(skb)->gso_segs ?: 1) - 1);
- }
+ /* Use inner packet iph-id if possible. */
+ if (skb->protocol == htons(ETH_P_IP) && old_iph->id)
+ iph->id = old_iph->id;
+ else
+ __ip_select_ident(iph, dst,
+ (skb_shinfo(skb)->gso_segs ?: 1) - 1);
}
#endif
return (self && self->lap) ? self->lap->daddr : 0;
}
-extern const char *irlmp_reasons[];
+const char *irlmp_reason_str(LM_REASON reason);
+
extern int sysctl_discovery_timeout;
extern int sysctl_discovery_slots;
extern int sysctl_discovery;
enum iucv_tx_notify n);
};
+struct iucv_skb_cb {
+ u32 class; /* target class of message */
+ u32 tag; /* tag associated with message */
+ u32 offset; /* offset for skb reception */
+};
+
+#define IUCV_SKB_CB(__skb) ((struct iucv_skb_cb *)&((__skb)->cb[0]))
+
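
The control-buffer cast is the standard way to carry per-protocol state in
an skb instead of overlaying it on unrelated skb fields. A hedged usage
sketch; the helper name is illustrative:

    /* assumes <linux/skbuff.h> and the iucv header above.
     * Record where the next chunk of a partially delivered message
     * should land; skb->cb is 48 bytes, so the struct must fit. */
    static void iucv_note_offset(struct sk_buff *skb, u32 off)
    {
            BUILD_BUG_ON(sizeof(struct iucv_skb_cb) > sizeof(skb->cb));
            IUCV_SKB_CB(skb)->offset = off;
    }
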
/* iucv socket options (SOL_IUCV) */
#define SO_IPRMDATA_MSG 0x0080 /* send/recv IPRM_DATA msgs */
#define SO_MSGLIMIT 0x1000 /* get/set IUCV MSGLIMIT */
scm->pid = get_pid(pid);
scm->cred = cred ? get_cred(cred) : NULL;
scm->creds.pid = pid_vnr(pid);
- scm->creds.uid = cred ? cred->euid : INVALID_UID;
- scm->creds.gid = cred ? cred->egid : INVALID_GID;
+ scm->creds.uid = cred ? cred->uid : INVALID_UID;
+ scm->creds.gid = cred ? cred->gid : INVALID_GID;
}
static __inline__ void scm_destroy_cred(struct scm_cookie *scm)
/*
* DISCOVERY LAYER
*****************************/
-int fc_disc_init(struct fc_lport *);
+void fc_disc_init(struct fc_lport *);
+void fc_disc_config(struct fc_lport *, void *);
static inline struct fc_lport *fc_disc_lport(struct fc_disc *disc)
{
/* status */
u32 connect:1; /* source and sink widgets are connected */
u32 walked:1; /* path has been walked */
+ u32 walking:1; /* path is in the process of being walked */
u32 weak:1; /* path ignored for power management */
int (*connected)(struct snd_soc_dapm_widget *source,
/**
* block_bio_complete - completed all work on the block operation
+ * @q: queue holding the block operation
* @bio: block operation completed
* @error: io error value
*
*/
TRACE_EVENT(block_bio_complete,
- TP_PROTO(struct bio *bio, int error),
+ TP_PROTO(struct request_queue *q, struct bio *bio, int error),
- TP_ARGS(bio, error),
+ TP_ARGS(q, bio, error),
TP_STRUCT__entry(
__field( dev_t, dev )
),
TP_fast_assign(
- __entry->dev = bio->bi_bdev ?
- bio->bi_bdev->bd_dev : 0;
+ __entry->dev = bio->bi_bdev->bd_dev;
__entry->sector = bio->bi_sector;
__entry->nr_sector = bio->bi_size >> 9;
__entry->error = error;
__print_flags(__entry->prev_state & (TASK_STATE_MAX-1), "|",
{ 1, "S"} , { 2, "D" }, { 4, "T" }, { 8, "t" },
{ 16, "Z" }, { 32, "X" }, { 64, "x" },
- { 128, "W" }) : "R",
+ { 128, "K" }, { 256, "W" }, { 512, "P" }) : "R",
__entry->prev_state & TASK_STATE_MAX ? "+" : "",
__entry->next_comm, __entry->next_pid, __entry->next_prio)
);
#ifndef _LINUX_FUSE_H
#define _LINUX_FUSE_H
-#ifdef __linux__
+#ifdef __KERNEL__
#include <linux/types.h>
#else
#include <stdint.h>
-#define __u64 uint64_t
-#define __s64 int64_t
-#define __u32 uint32_t
-#define __s32 int32_t
-#define __u16 uint16_t
#endif
/*
userspace works under 64bit kernels */
struct fuse_attr {
- __u64 ino;
- __u64 size;
- __u64 blocks;
- __u64 atime;
- __u64 mtime;
- __u64 ctime;
- __u32 atimensec;
- __u32 mtimensec;
- __u32 ctimensec;
- __u32 mode;
- __u32 nlink;
- __u32 uid;
- __u32 gid;
- __u32 rdev;
- __u32 blksize;
- __u32 padding;
+ uint64_t ino;
+ uint64_t size;
+ uint64_t blocks;
+ uint64_t atime;
+ uint64_t mtime;
+ uint64_t ctime;
+ uint32_t atimensec;
+ uint32_t mtimensec;
+ uint32_t ctimensec;
+ uint32_t mode;
+ uint32_t nlink;
+ uint32_t uid;
+ uint32_t gid;
+ uint32_t rdev;
+ uint32_t blksize;
+ uint32_t padding;
};
struct fuse_kstatfs {
- __u64 blocks;
- __u64 bfree;
- __u64 bavail;
- __u64 files;
- __u64 ffree;
- __u32 bsize;
- __u32 namelen;
- __u32 frsize;
- __u32 padding;
- __u32 spare[6];
+ uint64_t blocks;
+ uint64_t bfree;
+ uint64_t bavail;
+ uint64_t files;
+ uint64_t ffree;
+ uint32_t bsize;
+ uint32_t namelen;
+ uint32_t frsize;
+ uint32_t padding;
+ uint32_t spare[6];
};
struct fuse_file_lock {
- __u64 start;
- __u64 end;
- __u32 type;
- __u32 pid; /* tgid */
+ uint64_t start;
+ uint64_t end;
+ uint32_t type;
+ uint32_t pid; /* tgid */
};
/**
#define FUSE_COMPAT_ENTRY_OUT_SIZE 120
struct fuse_entry_out {
- __u64 nodeid; /* Inode ID */
- __u64 generation; /* Inode generation: nodeid:gen must
- be unique for the fs's lifetime */
- __u64 entry_valid; /* Cache timeout for the name */
- __u64 attr_valid; /* Cache timeout for the attributes */
- __u32 entry_valid_nsec;
- __u32 attr_valid_nsec;
+ uint64_t nodeid; /* Inode ID */
+ uint64_t generation; /* Inode generation: nodeid:gen must
+ be unique for the fs's lifetime */
+ uint64_t entry_valid; /* Cache timeout for the name */
+ uint64_t attr_valid; /* Cache timeout for the attributes */
+ uint32_t entry_valid_nsec;
+ uint32_t attr_valid_nsec;
struct fuse_attr attr;
};
struct fuse_forget_in {
- __u64 nlookup;
+ uint64_t nlookup;
};
struct fuse_forget_one {
- __u64 nodeid;
- __u64 nlookup;
+ uint64_t nodeid;
+ uint64_t nlookup;
};
struct fuse_batch_forget_in {
- __u32 count;
- __u32 dummy;
+ uint32_t count;
+ uint32_t dummy;
};
struct fuse_getattr_in {
- __u32 getattr_flags;
- __u32 dummy;
- __u64 fh;
+ uint32_t getattr_flags;
+ uint32_t dummy;
+ uint64_t fh;
};
#define FUSE_COMPAT_ATTR_OUT_SIZE 96
struct fuse_attr_out {
- __u64 attr_valid; /* Cache timeout for the attributes */
- __u32 attr_valid_nsec;
- __u32 dummy;
+ uint64_t attr_valid; /* Cache timeout for the attributes */
+ uint32_t attr_valid_nsec;
+ uint32_t dummy;
struct fuse_attr attr;
};
#define FUSE_COMPAT_MKNOD_IN_SIZE 8
struct fuse_mknod_in {
- __u32 mode;
- __u32 rdev;
- __u32 umask;
- __u32 padding;
+ uint32_t mode;
+ uint32_t rdev;
+ uint32_t umask;
+ uint32_t padding;
};
struct fuse_mkdir_in {
- __u32 mode;
- __u32 umask;
+ uint32_t mode;
+ uint32_t umask;
};
struct fuse_rename_in {
- __u64 newdir;
+ uint64_t newdir;
};
struct fuse_link_in {
- __u64 oldnodeid;
+ uint64_t oldnodeid;
};
struct fuse_setattr_in {
- __u32 valid;
- __u32 padding;
- __u64 fh;
- __u64 size;
- __u64 lock_owner;
- __u64 atime;
- __u64 mtime;
- __u64 unused2;
- __u32 atimensec;
- __u32 mtimensec;
- __u32 unused3;
- __u32 mode;
- __u32 unused4;
- __u32 uid;
- __u32 gid;
- __u32 unused5;
+ uint32_t valid;
+ uint32_t padding;
+ uint64_t fh;
+ uint64_t size;
+ uint64_t lock_owner;
+ uint64_t atime;
+ uint64_t mtime;
+ uint64_t unused2;
+ uint32_t atimensec;
+ uint32_t mtimensec;
+ uint32_t unused3;
+ uint32_t mode;
+ uint32_t unused4;
+ uint32_t uid;
+ uint32_t gid;
+ uint32_t unused5;
};
struct fuse_open_in {
- __u32 flags;
- __u32 unused;
+ uint32_t flags;
+ uint32_t unused;
};
struct fuse_create_in {
- __u32 flags;
- __u32 mode;
- __u32 umask;
- __u32 padding;
+ uint32_t flags;
+ uint32_t mode;
+ uint32_t umask;
+ uint32_t padding;
};
struct fuse_open_out {
- __u64 fh;
- __u32 open_flags;
- __u32 padding;
+ uint64_t fh;
+ uint32_t open_flags;
+ uint32_t padding;
};
struct fuse_release_in {
- __u64 fh;
- __u32 flags;
- __u32 release_flags;
- __u64 lock_owner;
+ uint64_t fh;
+ uint32_t flags;
+ uint32_t release_flags;
+ uint64_t lock_owner;
};
struct fuse_flush_in {
- __u64 fh;
- __u32 unused;
- __u32 padding;
- __u64 lock_owner;
+ uint64_t fh;
+ uint32_t unused;
+ uint32_t padding;
+ uint64_t lock_owner;
};
struct fuse_read_in {
- __u64 fh;
- __u64 offset;
- __u32 size;
- __u32 read_flags;
- __u64 lock_owner;
- __u32 flags;
- __u32 padding;
+ uint64_t fh;
+ uint64_t offset;
+ uint32_t size;
+ uint32_t read_flags;
+ uint64_t lock_owner;
+ uint32_t flags;
+ uint32_t padding;
};
#define FUSE_COMPAT_WRITE_IN_SIZE 24
struct fuse_write_in {
- __u64 fh;
- __u64 offset;
- __u32 size;
- __u32 write_flags;
- __u64 lock_owner;
- __u32 flags;
- __u32 padding;
+ uint64_t fh;
+ uint64_t offset;
+ uint32_t size;
+ uint32_t write_flags;
+ uint64_t lock_owner;
+ uint32_t flags;
+ uint32_t padding;
};
struct fuse_write_out {
- __u32 size;
- __u32 padding;
+ uint32_t size;
+ uint32_t padding;
};
#define FUSE_COMPAT_STATFS_SIZE 48
};
struct fuse_fsync_in {
- __u64 fh;
- __u32 fsync_flags;
- __u32 padding;
+ uint64_t fh;
+ uint32_t fsync_flags;
+ uint32_t padding;
};
struct fuse_setxattr_in {
- __u32 size;
- __u32 flags;
+ uint32_t size;
+ uint32_t flags;
};
struct fuse_getxattr_in {
- __u32 size;
- __u32 padding;
+ uint32_t size;
+ uint32_t padding;
};
struct fuse_getxattr_out {
- __u32 size;
- __u32 padding;
+ uint32_t size;
+ uint32_t padding;
};
struct fuse_lk_in {
- __u64 fh;
- __u64 owner;
+ uint64_t fh;
+ uint64_t owner;
struct fuse_file_lock lk;
- __u32 lk_flags;
- __u32 padding;
+ uint32_t lk_flags;
+ uint32_t padding;
};
struct fuse_lk_out {
};
struct fuse_access_in {
- __u32 mask;
- __u32 padding;
+ uint32_t mask;
+ uint32_t padding;
};
struct fuse_init_in {
- __u32 major;
- __u32 minor;
- __u32 max_readahead;
- __u32 flags;
+ uint32_t major;
+ uint32_t minor;
+ uint32_t max_readahead;
+ uint32_t flags;
};
struct fuse_init_out {
- __u32 major;
- __u32 minor;
- __u32 max_readahead;
- __u32 flags;
- __u16 max_background;
- __u16 congestion_threshold;
- __u32 max_write;
+ uint32_t major;
+ uint32_t minor;
+ uint32_t max_readahead;
+ uint32_t flags;
+ uint16_t max_background;
+ uint16_t congestion_threshold;
+ uint32_t max_write;
};
#define CUSE_INIT_INFO_MAX 4096
struct cuse_init_in {
- __u32 major;
- __u32 minor;
- __u32 unused;
- __u32 flags;
+ uint32_t major;
+ uint32_t minor;
+ uint32_t unused;
+ uint32_t flags;
};
struct cuse_init_out {
- __u32 major;
- __u32 minor;
- __u32 unused;
- __u32 flags;
- __u32 max_read;
- __u32 max_write;
- __u32 dev_major; /* chardev major */
- __u32 dev_minor; /* chardev minor */
- __u32 spare[10];
+ uint32_t major;
+ uint32_t minor;
+ uint32_t unused;
+ uint32_t flags;
+ uint32_t max_read;
+ uint32_t max_write;
+ uint32_t dev_major; /* chardev major */
+ uint32_t dev_minor; /* chardev minor */
+ uint32_t spare[10];
};
struct fuse_interrupt_in {
- __u64 unique;
+ uint64_t unique;
};
struct fuse_bmap_in {
- __u64 block;
- __u32 blocksize;
- __u32 padding;
+ uint64_t block;
+ uint32_t blocksize;
+ uint32_t padding;
};
struct fuse_bmap_out {
- __u64 block;
+ uint64_t block;
};
struct fuse_ioctl_in {
- __u64 fh;
- __u32 flags;
- __u32 cmd;
- __u64 arg;
- __u32 in_size;
- __u32 out_size;
+ uint64_t fh;
+ uint32_t flags;
+ uint32_t cmd;
+ uint64_t arg;
+ uint32_t in_size;
+ uint32_t out_size;
};
struct fuse_ioctl_iovec {
- __u64 base;
- __u64 len;
+ uint64_t base;
+ uint64_t len;
};
struct fuse_ioctl_out {
- __s32 result;
- __u32 flags;
- __u32 in_iovs;
- __u32 out_iovs;
+ int32_t result;
+ uint32_t flags;
+ uint32_t in_iovs;
+ uint32_t out_iovs;
};
struct fuse_poll_in {
- __u64 fh;
- __u64 kh;
- __u32 flags;
- __u32 events;
+ uint64_t fh;
+ uint64_t kh;
+ uint32_t flags;
+ uint32_t events;
};
struct fuse_poll_out {
- __u32 revents;
- __u32 padding;
+ uint32_t revents;
+ uint32_t padding;
};
struct fuse_notify_poll_wakeup_out {
- __u64 kh;
+ uint64_t kh;
};
struct fuse_fallocate_in {
- __u64 fh;
- __u64 offset;
- __u64 length;
- __u32 mode;
- __u32 padding;
+ uint64_t fh;
+ uint64_t offset;
+ uint64_t length;
+ uint32_t mode;
+ uint32_t padding;
};
struct fuse_in_header {
- __u32 len;
- __u32 opcode;
- __u64 unique;
- __u64 nodeid;
- __u32 uid;
- __u32 gid;
- __u32 pid;
- __u32 padding;
+ uint32_t len;
+ uint32_t opcode;
+ uint64_t unique;
+ uint64_t nodeid;
+ uint32_t uid;
+ uint32_t gid;
+ uint32_t pid;
+ uint32_t padding;
};
struct fuse_out_header {
- __u32 len;
- __s32 error;
- __u64 unique;
+ uint32_t len;
+ int32_t error;
+ uint64_t unique;
};
struct fuse_dirent {
- __u64 ino;
- __u64 off;
- __u32 namelen;
- __u32 type;
+ uint64_t ino;
+ uint64_t off;
+ uint32_t namelen;
+ uint32_t type;
char name[];
};
#define FUSE_NAME_OFFSET offsetof(struct fuse_dirent, name)
-#define FUSE_DIRENT_ALIGN(x) (((x) + sizeof(__u64) - 1) & ~(sizeof(__u64) - 1))
+#define FUSE_DIRENT_ALIGN(x) \
+ (((x) + sizeof(uint64_t) - 1) & ~(sizeof(uint64_t) - 1))
#define FUSE_DIRENT_SIZE(d) \
FUSE_DIRENT_ALIGN(FUSE_NAME_OFFSET + (d)->namelen)
FUSE_DIRENT_ALIGN(FUSE_NAME_OFFSET_DIRENTPLUS + (d)->dirent.namelen)
struct fuse_notify_inval_inode_out {
- __u64 ino;
- __s64 off;
- __s64 len;
+ uint64_t ino;
+ int64_t off;
+ int64_t len;
};
struct fuse_notify_inval_entry_out {
- __u64 parent;
- __u32 namelen;
- __u32 padding;
+ uint64_t parent;
+ uint32_t namelen;
+ uint32_t padding;
};
struct fuse_notify_delete_out {
- __u64 parent;
- __u64 child;
- __u32 namelen;
- __u32 padding;
+ uint64_t parent;
+ uint64_t child;
+ uint32_t namelen;
+ uint32_t padding;
};
struct fuse_notify_store_out {
- __u64 nodeid;
- __u64 offset;
- __u32 size;
- __u32 padding;
+ uint64_t nodeid;
+ uint64_t offset;
+ uint32_t size;
+ uint32_t padding;
};
struct fuse_notify_retrieve_out {
- __u64 notify_unique;
- __u64 nodeid;
- __u64 offset;
- __u32 size;
- __u32 padding;
+ uint64_t notify_unique;
+ uint64_t nodeid;
+ uint64_t offset;
+ uint32_t size;
+ uint32_t padding;
};
/* Matches the size of fuse_write_in */
struct fuse_notify_retrieve_in {
- __u64 dummy1;
- __u64 offset;
- __u32 size;
- __u32 dummy2;
- __u64 dummy3;
- __u64 dummy4;
+ uint64_t dummy1;
+ uint64_t offset;
+ uint32_t size;
+ uint32_t dummy2;
+ uint64_t dummy3;
+ uint64_t dummy4;
};
#endif /* _LINUX_FUSE_H */
PACKET_DIAG_TX_RING,
PACKET_DIAG_FANOUT,
- PACKET_DIAG_MAX,
+ __PACKET_DIAG_MAX,
};
+#define PACKET_DIAG_MAX (__PACKET_DIAG_MAX - 1)
+
struct packet_diag_info {
__u32 pdi_index;
__u32 pdi_version;
UNIX_DIAG_MEMINFO,
UNIX_DIAG_SHUTDOWN,
- UNIX_DIAG_MAX,
+ __UNIX_DIAG_MAX,
};
+#define UNIX_DIAG_MAX (__UNIX_DIAG_MAX - 1)
+
struct unix_diag_vfs {
__u32 udiag_vfs_ino;
__u32 udiag_vfs_dev;
uint8_t _pad3;
} __attribute__((__packed__));
+struct blkif_request_other {
+ uint8_t _pad1;
+ blkif_vdev_t _pad2; /* only for read/write requests */
+#ifdef CONFIG_X86_64
+ uint32_t _pad3; /* offsetof(blkif_req..,u.other.id)==8*/
+#endif
+ uint64_t id; /* private guest value, echoed in resp */
+} __attribute__((__packed__));
+
struct blkif_request {
uint8_t operation; /* BLKIF_OP_??? */
union {
struct blkif_request_rw rw;
struct blkif_request_discard discard;
+ struct blkif_request_other other;
} u;
} __attribute__((__packed__));
#define PHYSDEVOP_pci_device_remove 26
#define PHYSDEVOP_restore_msi_ext 27
+/*
+ * Dom0 should use these two to announce that the MMIO resources assigned to
+ * MSI-X capable devices won't (prepare) or may (release) change.
+ */
+#define PHYSDEVOP_prepare_msix 30
+#define PHYSDEVOP_release_msix 31
struct physdev_pci_device {
/* IN */
uint16_t seg;
int flags, const char *dev_name,
void *data)
{
- if (!(flags & MS_KERNMOUNT))
- data = current->nsproxy->ipc_ns;
+ if (!(flags & MS_KERNMOUNT)) {
+ struct ipc_namespace *ns = current->nsproxy->ipc_ns;
+ /* Don't allow mounting unless the caller has CAP_SYS_ADMIN
+ * over the ipc namespace.
+ */
+ if (!ns_capable(ns->user_ns, CAP_SYS_ADMIN))
+ return ERR_PTR(-EPERM);
+
+ data = ns;
+ }
return mount_ns(fs_type, flags, data, mqueue_fill_super);
}
fd = error;
}
mutex_unlock(&root->d_inode->i_mutex);
- mnt_drop_write(mnt);
+ if (!ro)
+ mnt_drop_write(mnt);
out_putname:
putname(name);
return fd;
goto out_unlock;
break;
}
+ msg = ERR_PTR(-EAGAIN);
} else
break;
msg_counter++;
}
EXPORT_SYMBOL(ns_capable);
+/**
+ * file_ns_capable - Determine if the file's opener had a capability in effect
+ * @file: The file we want to check
+ * @ns: The user namespace we want the capability in
+ * @cap: The capability to be tested for
+ *
+ * Return true if the task that opened the file had the capability in
+ * effect when the file was opened.
+ *
+ * This does not set PF_SUPERPRIV because the caller may not
+ * actually be privileged.
+ */
+bool file_ns_capable(const struct file *file, struct user_namespace *ns, int cap)
+{
+ if (WARN_ON_ONCE(!cap_valid(cap)))
+ return false;
+
+ if (security_capable(file->f_cred, ns, cap) == 0)
+ return true;
+
+ return false;
+}
+EXPORT_SYMBOL(file_ns_capable);
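
A hedged usage sketch: gate a privileged operation on the credentials the
file was opened with, rather than on whoever happens to be calling now
(the helper name is illustrative, not from the patch):

    /* assumes <linux/capability.h> and <linux/fs.h> */
    static int check_priv_op(struct file *file)
    {
            if (!file_ns_capable(file, &init_user_ns, CAP_NET_ADMIN))
                    return -EPERM;
            return 0;
    }
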
+
/**
* capable - Determine if the current task has a superior capability in effect
* @cap: The capability to be tested for
} else {
if (arch_vma_name(mmap_event->vma)) {
name = strncpy(tmp, arch_vma_name(mmap_event->vma),
- sizeof(tmp));
+ sizeof(tmp) - 1);
+ tmp[sizeof(tmp) - 1] = '\0';
goto got_name;
}
static int perf_swevent_init(struct perf_event *event)
{
- int event_id = event->attr.config;
+ u64 event_id = event->attr.config;
if (event->attr.type != PERF_TYPE_SOFTWARE)
return -ENOENT;
if (pmu->pmu_cpu_context)
goto got_cpu_context;
+ ret = -ENOMEM;
pmu->pmu_cpu_context = alloc_percpu(struct perf_cpu_context);
if (!pmu->pmu_cpu_context)
goto free_dev;
int page_order; /* allocation order */
#endif
int nr_pages; /* nr of data pages */
- int writable; /* are we writable */
+ int overwrite; /* can overwrite itself */
atomic_t poll; /* POLL_ for wakeups */
static bool perf_output_space(struct ring_buffer *rb, unsigned long tail,
unsigned long offset, unsigned long head)
{
- unsigned long mask;
+ unsigned long sz = perf_data_size(rb);
+ unsigned long mask = sz - 1;
- if (!rb->writable)
+ /*
+ * check if user-writable
+ * overwrite : over-write its own tail
+ * !overwrite: buffer possibly drops events.
+ */
+ if (rb->overwrite)
return true;
- mask = perf_data_size(rb) - 1;
+ /*
+ * verify that the payload is not bigger than the buffer,
+ * otherwise the masking logic may fail to detect
+ * the "not enough space" condition
+ */
+ if ((head - offset) > sz)
+ return false;
offset = (offset - tail) & mask;
head = (head - tail) & mask;
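
The new guard matters because the buffer size is a power of two and all distances are reduced modulo that size: a payload larger than the whole buffer would alias to a small masked value and slip past the check. A simplified userspace sketch of the same arithmetic (fixed 4 KiB size and a reduced signature are assumptions of this sketch):

    #include <stdbool.h>
    #include <stdio.h>

    #define SZ   4096UL                /* must be a power of two */
    #define MASK (SZ - 1)

    static bool have_space(unsigned long tail, unsigned long offset,
                           unsigned long head)
    {
            if ((head - offset) > SZ)  /* payload larger than the buffer */
                    return false;      /* would alias after masking */

            offset = (offset - tail) & MASK;
            head   = (head   - tail) & MASK;

            return (long)(head - offset) >= 0;
    }

    int main(void)
    {
            /* a whole-buffer-sized payload is rejected only by the guard */
            printf("%d\n", have_space(0, 100, 100 + SZ + 8));  /* 0 */
            printf("%d\n", have_space(0, 100, 612));           /* 1 */
            return 0;
    }
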
rb->watermark = max_size / 2;
if (flags & RING_BUFFER_WRITABLE)
- rb->writable = 1;
+ rb->overwrite = 0;
+ else
+ rb->overwrite = 1;
atomic_set(&rb->refcount, 1);
/*
* Make sure we are holding no locks:
*/
- debug_check_no_locks_held();
+ debug_check_no_locks_held(tsk);
/*
* We can do this unlocked here. The futex code uses this flag
* just to verify whether the pi state cleanup has been done
DEFINE_PER_CPU(struct hrtimer_cpu_base, hrtimer_bases) =
{
+ .lock = __RAW_SPIN_LOCK_UNLOCKED(hrtimer_bases.lock),
.clock_base =
{
{
struct hrtimer_cpu_base *cpu_base = &per_cpu(hrtimer_bases, cpu);
int i;
- raw_spin_lock_init(&cpu_base->lock);
-
for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) {
cpu_base->clock_base[i].cpu_base = cpu_base;
timerqueue_init_head(&cpu_base->clock_base[i].active);
.flags = IORESOURCE_BUSY | IORESOURCE_MEM
};
struct resource crashk_low_res = {
- .name = "Crash kernel low",
+ .name = "Crash kernel",
.start = 0,
.end = 0,
.flags = IORESOURCE_BUSY | IORESOURCE_MEM
return 0;
}
+#define SUFFIX_HIGH 0
+#define SUFFIX_LOW 1
+#define SUFFIX_NULL 2
+static __initdata char *suffix_tbl[] = {
+ [SUFFIX_HIGH] = ",high",
+ [SUFFIX_LOW] = ",low",
+ [SUFFIX_NULL] = NULL,
+};
+
/*
- * That function is the entry point for command line parsing and should be
- * called from the arch-specific code.
+ * This function parses suffixed crashkernel command lines like
+ *
+ * crashkernel=size,[high|low]
+ *
+ * It returns 0 on success and -EINVAL on failure.
*/
+static int __init parse_crashkernel_suffix(char *cmdline,
+ unsigned long long *crash_size,
+ unsigned long long *crash_base,
+ const char *suffix)
+{
+ char *cur = cmdline;
+
+ *crash_size = memparse(cmdline, &cur);
+ if (cmdline == cur) {
+ pr_warn("crashkernel: memory value expected\n");
+ return -EINVAL;
+ }
+
+ /* check with suffix */
+ if (strncmp(cur, suffix, strlen(suffix))) {
+ pr_warn("crashkernel: unrecognized char\n");
+ return -EINVAL;
+ }
+ cur += strlen(suffix);
+ if (*cur != ' ' && *cur != '\0') {
+ pr_warn("crashkernel: unrecognized char\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
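
For clarity, here is a self-contained userspace analogue of the suffix parsing added above; memparse() is approximated with strtoull() plus the usual K/M/G multipliers (uppercase only, an assumption of this sketch):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static unsigned long long memparse(const char *p, char **end)
    {
            unsigned long long v = strtoull(p, end, 0);

            switch (**end) {
            case 'G': v <<= 10; /* fall through */
            case 'M': v <<= 10; /* fall through */
            case 'K': v <<= 10; (*end)++;
            }
            return v;
    }

    static int parse_suffix(const char *cmdline, const char *suffix,
                            unsigned long long *size)
    {
            char *cur;

            *size = memparse(cmdline, &cur);
            if (cur == cmdline)
                    return -1;                      /* no number at all */
            if (strncmp(cur, suffix, strlen(suffix)))
                    return -1;                      /* wrong/missing suffix */
            cur += strlen(suffix);
            if (*cur != ' ' && *cur != '\0')
                    return -1;                      /* trailing junk */
            return 0;
    }

    int main(void)
    {
            unsigned long long size;

            if (!parse_suffix("512M,high", ",high", &size))
                    printf("high region: %llu bytes\n", size); /* 536870912 */
            return 0;
    }
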
+static __init char *get_last_crashkernel(char *cmdline,
+ const char *name,
+ const char *suffix)
+{
+ char *p = cmdline, *ck_cmdline = NULL;
+
+ /* find crashkernel and use the last one if there are more */
+ p = strstr(p, name);
+ while (p) {
+ char *end_p = strchr(p, ' ');
+ char *q;
+
+ if (!end_p)
+ end_p = p + strlen(p);
+
+ if (!suffix) {
+ int i;
+
+ /* skip the one with any known suffix */
+ for (i = 0; suffix_tbl[i]; i++) {
+ q = end_p - strlen(suffix_tbl[i]);
+ if (!strncmp(q, suffix_tbl[i],
+ strlen(suffix_tbl[i])))
+ goto next;
+ }
+ ck_cmdline = p;
+ } else {
+ q = end_p - strlen(suffix);
+ if (!strncmp(q, suffix, strlen(suffix)))
+ ck_cmdline = p;
+ }
+next:
+ p = strstr(p+1, name);
+ }
+
+ if (!ck_cmdline)
+ return NULL;
+
+ return ck_cmdline;
+}
+
static int __init __parse_crashkernel(char *cmdline,
unsigned long long system_ram,
unsigned long long *crash_size,
unsigned long long *crash_base,
- const char *name)
+ const char *name,
+ const char *suffix)
{
- char *p = cmdline, *ck_cmdline = NULL;
char *first_colon, *first_space;
+ char *ck_cmdline;
BUG_ON(!crash_size || !crash_base);
*crash_size = 0;
*crash_base = 0;
- /* find crashkernel and use the last one if there are more */
- p = strstr(p, name);
- while (p) {
- ck_cmdline = p;
- p = strstr(p+1, name);
- }
+ ck_cmdline = get_last_crashkernel(cmdline, name, suffix);
if (!ck_cmdline)
return -EINVAL;
ck_cmdline += strlen(name);
+ if (suffix)
+ return parse_crashkernel_suffix(ck_cmdline, crash_size,
+ crash_base, suffix);
/*
* if the commandline contains a ':', then that's the extended
* syntax -- if not, it must be the classic syntax
return 0;
}
+/*
+ * This function is the entry point for command line parsing and should be
+ * called from the arch-specific code.
+ */
int __init parse_crashkernel(char *cmdline,
unsigned long long system_ram,
unsigned long long *crash_size,
unsigned long long *crash_base)
{
return __parse_crashkernel(cmdline, system_ram, crash_size, crash_base,
- "crashkernel=");
+ "crashkernel=", NULL);
+}
+
+int __init parse_crashkernel_high(char *cmdline,
+ unsigned long long system_ram,
+ unsigned long long *crash_size,
+ unsigned long long *crash_base)
+{
+ return __parse_crashkernel(cmdline, system_ram, crash_size, crash_base,
+ "crashkernel=", suffix_tbl[SUFFIX_HIGH]);
}
int __init parse_crashkernel_low(char *cmdline,
unsigned long long *crash_base)
{
return __parse_crashkernel(cmdline, system_ram, crash_size, crash_base,
- "crashkernel_low=");
+ "crashkernel=", suffix_tbl[SUFFIX_LOW]);
}
static void update_vmcoreinfo_note(void)
}
#ifdef CONFIG_SYSCTL
-/* This should be called with kprobe_mutex locked */
static void __kprobes optimize_all_kprobes(void)
{
struct hlist_head *head;
struct kprobe *p;
unsigned int i;
+ mutex_lock(&kprobe_mutex);
/* If optimization is already allowed, just return */
if (kprobes_allow_optimization)
- return;
+ goto out;
kprobes_allow_optimization = true;
for (i = 0; i < KPROBE_TABLE_SIZE; i++) {
optimize_kprobe(p);
}
printk(KERN_INFO "Kprobes globally optimized\n");
+out:
+ mutex_unlock(&kprobe_mutex);
}
-/* This should be called with kprobe_mutex locked */
static void __kprobes unoptimize_all_kprobes(void)
{
struct hlist_head *head;
struct kprobe *p;
unsigned int i;
+ mutex_lock(&kprobe_mutex);
/* If optimization is already prohibited, just return */
- if (!kprobes_allow_optimization)
+ if (!kprobes_allow_optimization) {
+ mutex_unlock(&kprobe_mutex);
return;
+ }
kprobes_allow_optimization = false;
for (i = 0; i < KPROBE_TABLE_SIZE; i++) {
unoptimize_kprobe(p, false);
}
}
+ mutex_unlock(&kprobe_mutex);
+
/* Wait for unoptimizing completion */
wait_for_kprobe_optimizer();
printk(KERN_INFO "Kprobes globally unoptimized\n");
}
+static DEFINE_MUTEX(kprobe_sysctl_mutex);
int sysctl_kprobes_optimization;
int proc_kprobes_optimization_handler(struct ctl_table *table, int write,
void __user *buffer, size_t *length,
{
int ret;
- mutex_lock(&kprobe_mutex);
+ mutex_lock(&kprobe_sysctl_mutex);
sysctl_kprobes_optimization = kprobes_allow_optimization ? 1 : 0;
ret = proc_dointvec_minmax(table, write, buffer, length, ppos);
optimize_all_kprobes();
else
unoptimize_all_kprobes();
- mutex_unlock(&kprobe_mutex);
+ mutex_unlock(&kprobe_sysctl_mutex);
return ret;
}
static void __kthread_parkme(struct kthread *self)
{
- __set_current_state(TASK_INTERRUPTIBLE);
+ __set_current_state(TASK_PARKED);
while (test_bit(KTHREAD_SHOULD_PARK, &self->flags)) {
if (!test_and_set_bit(KTHREAD_IS_PARKED, &self->flags))
complete(&self->parked);
schedule();
- __set_current_state(TASK_INTERRUPTIBLE);
+ __set_current_state(TASK_PARKED);
}
clear_bit(KTHREAD_IS_PARKED, &self->flags);
__set_current_state(TASK_RUNNING);
}
EXPORT_SYMBOL(kthread_create_on_node);
-static void __kthread_bind(struct task_struct *p, unsigned int cpu)
+static void __kthread_bind(struct task_struct *p, unsigned int cpu, long state)
{
+ /* Must have done schedule() in kthread() before we set_task_cpu */
+ if (!wait_task_inactive(p, state)) {
+ WARN_ON(1);
+ return;
+ }
/* It's safe because the task is inactive. */
do_set_cpus_allowed(p, cpumask_of(cpu));
p->flags |= PF_THREAD_BOUND;
*/
void kthread_bind(struct task_struct *p, unsigned int cpu)
{
- /* Must have done schedule() in kthread() before we set_task_cpu */
- if (!wait_task_inactive(p, TASK_UNINTERRUPTIBLE)) {
- WARN_ON(1);
- return;
- }
- __kthread_bind(p, cpu);
+ __kthread_bind(p, cpu, TASK_UNINTERRUPTIBLE);
}
EXPORT_SYMBOL(kthread_bind);
return NULL;
}
+static void __kthread_unpark(struct task_struct *k, struct kthread *kthread)
+{
+ clear_bit(KTHREAD_SHOULD_PARK, &kthread->flags);
+ /*
+ * We clear the IS_PARKED bit here as we don't wait
+ * until the task has left the park code. So if we'd
+ * park before that happens we'd see the IS_PARKED bit
+ * which might be about to be cleared.
+ */
+ if (test_and_clear_bit(KTHREAD_IS_PARKED, &kthread->flags)) {
+ if (test_bit(KTHREAD_IS_PER_CPU, &kthread->flags))
+ __kthread_bind(k, kthread->cpu, TASK_PARKED);
+ wake_up_state(k, TASK_PARKED);
+ }
+}
+
/**
* kthread_unpark - unpark a thread created by kthread_create().
* @k: thread created by kthread_create().
{
struct kthread *kthread = task_get_live_kthread(k);
- if (kthread) {
- clear_bit(KTHREAD_SHOULD_PARK, &kthread->flags);
- /*
- * We clear the IS_PARKED bit here as we don't wait
- * until the task has left the park code. So if we'd
- * park before that happens we'd see the IS_PARKED bit
- * which might be about to be cleared.
- */
- if (test_and_clear_bit(KTHREAD_IS_PARKED, &kthread->flags)) {
- if (test_bit(KTHREAD_IS_PER_CPU, &kthread->flags))
- __kthread_bind(k, kthread->cpu);
- wake_up_process(k);
- }
- }
+ if (kthread)
+ __kthread_unpark(k, kthread);
put_task_struct(k);
}
trace_sched_kthread_stop(k);
if (kthread) {
set_bit(KTHREAD_SHOULD_STOP, &kthread->flags);
- clear_bit(KTHREAD_SHOULD_PARK, &kthread->flags);
+ __kthread_unpark(k, kthread);
wake_up_process(k);
wait_for_completion(&kthread->exited);
}
}
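
The park/unpark handshake above has no direct userspace equivalent, but the idea, a thread sleeping in a dedicated parked state that only an explicit unpark may end, can be sketched with a mutex and condition variable. This is an analogue only; TASK_PARKED additionally shields the kernel thread from spurious wakeups at the scheduler level:

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static bool should_park = true;
    static bool stopping;

    static void *kthread(void *arg)
    {
            (void)arg;
            pthread_mutex_lock(&lock);
            while (!stopping) {
                    while (should_park && !stopping)  /* "TASK_PARKED" */
                            pthread_cond_wait(&cond, &lock);
                    if (!stopping)
                            puts("unparked: doing one unit of work");
                    should_park = true;               /* park again */
            }
            pthread_mutex_unlock(&lock);
            return NULL;
    }

    static void unpark(void)
    {
            pthread_mutex_lock(&lock);
            should_park = false;
            pthread_cond_signal(&cond);
            pthread_mutex_unlock(&lock);
    }

    int main(void)
    {
            pthread_t t;

            pthread_create(&t, NULL, kthread, NULL);
            unpark();
            pthread_mutex_lock(&lock);       /* stop, like kthread_stop() */
            stopping = true;
            pthread_cond_signal(&cond);
            pthread_mutex_unlock(&lock);
            pthread_join(t, NULL);
            return 0;
    }
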
EXPORT_SYMBOL_GPL(debug_check_no_locks_freed);
-static void print_held_locks_bug(void)
+static void print_held_locks_bug(struct task_struct *curr)
{
if (!debug_locks_off())
return;
printk("\n");
printk("=====================================\n");
- printk("[ BUG: %s/%d still has locks held! ]\n",
- current->comm, task_pid_nr(current));
+ printk("[ BUG: lock held at task exit time! ]\n");
print_kernel_ident();
printk("-------------------------------------\n");
- lockdep_print_held_locks(current);
+ printk("%s/%d is exiting with locks still held!\n",
+ curr->comm, task_pid_nr(curr));
+ lockdep_print_held_locks(curr);
+
printk("\nstack backtrace:\n");
dump_stack();
}
-void debug_check_no_locks_held(void)
+void debug_check_no_locks_held(struct task_struct *task)
{
- if (unlikely(current->lockdep_depth > 0))
- print_held_locks_bug();
+ if (unlikely(task->lockdep_depth > 0))
+ print_held_locks_bug(task);
}
-EXPORT_SYMBOL_GPL(debug_check_no_locks_held);
void debug_show_all_locks(void)
{
int nr;
int rc;
struct task_struct *task, *me = current;
+ int init_pids = thread_group_leader(me) ? 1 : 2;
/* Don't allow any more processes into the pid namespace */
disable_pid_allocation(pid_ns);
*/
for (;;) {
set_current_state(TASK_UNINTERRUPTIBLE);
- if (pid_ns->nr_hashed == 1)
+ if (pid_ns->nr_hashed == init_pids)
break;
schedule();
}
#define MINIMUM_CONSOLE_LOGLEVEL 1 /* Minimum loglevel we let people use */
#define DEFAULT_CONSOLE_LOGLEVEL 7 /* anything MORE serious than KERN_DEBUG */
-DECLARE_WAIT_QUEUE_HEAD(log_wait);
-
int console_printk[4] = {
DEFAULT_CONSOLE_LOGLEVEL, /* console_loglevel */
DEFAULT_MESSAGE_LOGLEVEL, /* default_message_loglevel */
static DEFINE_RAW_SPINLOCK(logbuf_lock);
#ifdef CONFIG_PRINTK
+DECLARE_WAIT_QUEUE_HEAD(log_wait);
/* the next printk record to read by syslog(READ) or /proc/kmsg */
static u64 syslog_seq;
static u32 syslog_idx;
return console_locked;
}
-/*
- * Delayed printk version, for scheduler-internal messages:
- */
-#define PRINTK_BUF_SIZE 512
-
-#define PRINTK_PENDING_WAKEUP 0x01
-#define PRINTK_PENDING_SCHED 0x02
-
-static DEFINE_PER_CPU(int, printk_pending);
-static DEFINE_PER_CPU(char [PRINTK_BUF_SIZE], printk_sched_buf);
-
-static void wake_up_klogd_work_func(struct irq_work *irq_work)
-{
- int pending = __this_cpu_xchg(printk_pending, 0);
-
- if (pending & PRINTK_PENDING_SCHED) {
- char *buf = __get_cpu_var(printk_sched_buf);
- printk(KERN_WARNING "[sched_delayed] %s", buf);
- }
-
- if (pending & PRINTK_PENDING_WAKEUP)
- wake_up_interruptible(&log_wait);
-}
-
-static DEFINE_PER_CPU(struct irq_work, wake_up_klogd_work) = {
- .func = wake_up_klogd_work_func,
- .flags = IRQ_WORK_LAZY,
-};
-
-void wake_up_klogd(void)
-{
- preempt_disable();
- if (waitqueue_active(&log_wait)) {
- this_cpu_or(printk_pending, PRINTK_PENDING_WAKEUP);
- irq_work_queue(&__get_cpu_var(wake_up_klogd_work));
- }
- preempt_enable();
-}
-
static void console_cont_flush(char *text, size_t size)
{
unsigned long flags;
late_initcall(printk_late_init);
#if defined CONFIG_PRINTK
+/*
+ * Delayed printk version, for scheduler-internal messages:
+ */
+#define PRINTK_BUF_SIZE 512
+
+#define PRINTK_PENDING_WAKEUP 0x01
+#define PRINTK_PENDING_SCHED 0x02
+
+static DEFINE_PER_CPU(int, printk_pending);
+static DEFINE_PER_CPU(char [PRINTK_BUF_SIZE], printk_sched_buf);
+
+static void wake_up_klogd_work_func(struct irq_work *irq_work)
+{
+ int pending = __this_cpu_xchg(printk_pending, 0);
+
+ if (pending & PRINTK_PENDING_SCHED) {
+ char *buf = __get_cpu_var(printk_sched_buf);
+ printk(KERN_WARNING "[sched_delayed] %s", buf);
+ }
+
+ if (pending & PRINTK_PENDING_WAKEUP)
+ wake_up_interruptible(&log_wait);
+}
+
+static DEFINE_PER_CPU(struct irq_work, wake_up_klogd_work) = {
+ .func = wake_up_klogd_work_func,
+ .flags = IRQ_WORK_LAZY,
+};
+
+void wake_up_klogd(void)
+{
+ preempt_disable();
+ if (waitqueue_active(&log_wait)) {
+ this_cpu_or(printk_pending, PRINTK_PENDING_WAKEUP);
+ irq_work_queue(&__get_cpu_var(wake_up_klogd_work));
+ }
+ preempt_enable();
+}
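
The relocated code is an instance of a common deferred-wakeup pattern: producers OR bits into a pending word from restricted context, and a worker later claims every pending bit at once with an atomic exchange, so nothing is lost and the hot path takes no lock. A userspace sketch using C11 atomics and a plain thread in place of irq_work (all names here are illustrative):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <unistd.h>

    #define PENDING_WAKEUP 0x01
    #define PENDING_SCHED  0x02

    static atomic_int pending;

    static void *worker(void *arg)
    {
            (void)arg;
            for (;;) {
                    int bits = atomic_exchange(&pending, 0); /* claim all */

                    if (bits & PENDING_SCHED)
                            puts("flush deferred sched message");
                    if (bits & PENDING_WAKEUP)
                            puts("wake up log readers");
                    if (bits)
                            return NULL;     /* demo: stop after one batch */
                    usleep(1000);
            }
    }

    int main(void)
    {
            pthread_t t;

            pthread_create(&t, NULL, worker, NULL);
            atomic_fetch_or(&pending, PENDING_SCHED | PENDING_WAKEUP);
            pthread_join(t, NULL);
            return 0;
    }
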
int printk_sched(const char *fmt, ...)
{
u64 this_clock, remote_clock;
u64 *ptr, old_val, val;
+#if BITS_PER_LONG != 64
+again:
+ /*
+ * Careful here: the local and the remote clock values need to
+ * be read out atomically as we need to compare the values and
+ * then update either the local or the remote side. So the
+ * cmpxchg64 below only protects one readout.
+ *
+ * We must reread via sched_clock_local() in the retry case on
+ * 32bit as an NMI could use sched_clock_local() via the
+ * tracer and hit between the readout of the low 32bit and
+ * the high 32bit portion.
+ */
+ this_clock = sched_clock_local(my_scd);
+ /*
+ * We must enforce atomic readout on 32bit, otherwise the
+ * update on the remote cpu can hit in between the readout of
+ * the low 32bit and the high 32bit portion.
+ */
+ remote_clock = cmpxchg64(&scd->clock, 0, 0);
+#else
+ /*
+ * On 64bit the read of [my]scd->clock is atomic versus the
+ * update, so we can avoid the above 32bit dance.
+ */
sched_clock_local(my_scd);
again:
this_clock = my_scd->clock;
remote_clock = scd->clock;
+#endif
/*
* Use the opportunity that we have both locks
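
The cmpxchg64(&scd->clock, 0, 0) idiom reads a 64-bit value atomically on 32-bit: a compare-and-exchange with expected and desired both zero normally fails and hands back the current contents, and if the value really was zero, storing zero is a no-op. A C11 sketch of the same trick (on 32-bit hosts this may need -latomic):

    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    static _Atomic uint64_t clock_val = 0x100000002ULL;

    static uint64_t read64_atomic(_Atomic uint64_t *p)
    {
            uint64_t expected = 0;

            /* On failure the current value is stored into 'expected'. */
            atomic_compare_exchange_strong(p, &expected, 0);
            return expected;
    }

    int main(void)
    {
            printf("%#llx\n", (unsigned long long)read64_atomic(&clock_val));
            return 0;
    }
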
{
struct rq *rq = task_rq(p);
- BUG_ON(rq != this_rq());
- BUG_ON(p == current);
+ if (WARN_ON_ONCE(rq != this_rq()) ||
+ WARN_ON_ONCE(p == current))
+ return;
+
lockdep_assert_held(&rq->lock);
if (!raw_spin_trylock(&p->pi_lock)) {
}
static int min_load_idx = 0;
-static int max_load_idx = CPU_LOAD_IDX_MAX;
+static int max_load_idx = CPU_LOAD_IDX_MAX-1;
static void
set_table_entry(struct ctl_table *entry,
t = tsk;
do {
- task_cputime(tsk, &utime, &stime);
+ task_cputime(t, &utime, &stime);
times->utime += utime;
times->stime += stime;
times->sum_exec_runtime += task_sched_runtime(t);
static int do_tkill(pid_t tgid, pid_t pid, int sig)
{
- struct siginfo info;
+ struct siginfo info = {};
info.si_signo = sig;
info.si_errno = 0;
}
get_task_struct(tsk);
*per_cpu_ptr(ht->store, cpu) = tsk;
- if (ht->create)
- ht->create(cpu);
+ if (ht->create) {
+ /*
+ * Make sure that the task has actually scheduled out
+ * into the park position before calling the create
+ * callback. At least the migration thread callback
+ * requires that the task is off the runqueue.
+ */
+ if (!wait_task_inactive(tsk, TASK_PARKED))
+ WARN_ON(1);
+ else
+ ht->create(cpu);
+ }
return 0;
}
system_state = SYSTEM_RESTART;
usermodehelper_disable();
device_shutdown();
- syscore_shutdown();
}
/**
{
kernel_restart_prepare(cmd);
disable_nonboot_cpus();
+ syscore_shutdown();
if (!cmd)
printk(KERN_EMERG "Restarting system.\n");
else
void kernel_halt(void)
{
kernel_shutdown_prepare(SYSTEM_HALT);
+ disable_nonboot_cpus();
syscore_shutdown();
printk(KERN_EMERG "System halted.\n");
kmsg_dump(KMSG_DUMP_HALT);
char poweroff_cmd[POWEROFF_CMD_PATH_LEN] = "/sbin/poweroff";
-static int __orderly_poweroff(void)
+static int __orderly_poweroff(bool force)
{
- int argc;
char **argv;
static char *envp[] = {
"HOME=/",
};
int ret;
- argv = argv_split(GFP_ATOMIC, poweroff_cmd, &argc);
- if (argv == NULL) {
+ argv = argv_split(GFP_KERNEL, poweroff_cmd, NULL);
+ if (argv) {
+ ret = call_usermodehelper(argv[0], argv, envp, UMH_WAIT_EXEC);
+ argv_free(argv);
+ } else {
printk(KERN_WARNING "%s failed to allocate memory for \"%s\"\n",
- __func__, poweroff_cmd);
- return -ENOMEM;
+ __func__, poweroff_cmd);
+ ret = -ENOMEM;
}
- ret = call_usermodehelper_fns(argv[0], argv, envp, UMH_WAIT_EXEC,
- NULL, NULL, NULL);
- argv_free(argv);
+ if (ret && force) {
+ printk(KERN_WARNING "Failed to start orderly shutdown: "
+ "forcing the issue\n");
+ /*
+ * I guess this should try to kick off some daemon to sync and
+ * poweroff asap. Or not even bother syncing if we're doing an
+ * emergency shutdown?
+ */
+ emergency_sync();
+ kernel_power_off();
+ }
return ret;
}
+static bool poweroff_force;
+
+static void poweroff_work_func(struct work_struct *work)
+{
+ __orderly_poweroff(poweroff_force);
+}
+
+static DECLARE_WORK(poweroff_work, poweroff_work_func);
+
/**
* orderly_poweroff - Trigger an orderly system poweroff
* @force: force poweroff if command execution fails
*/
int orderly_poweroff(bool force)
{
- int ret = __orderly_poweroff();
-
- if (ret && force) {
- printk(KERN_WARNING "Failed to start orderly shutdown: "
- "forcing the issue\n");
-
- /*
- * I guess this should try to kick off some daemon to sync and
- * poweroff asap. Or not even bother syncing if we're doing an
- * emergency shutdown?
- */
- emergency_sync();
- kernel_power_off();
- }
-
- return ret;
+ if (force) /* do not override the pending "true" */
+ poweroff_force = true;
+ schedule_work(&poweroff_work);
+ return 0;
}
EXPORT_SYMBOL_GPL(orderly_poweroff);
*/
int tick_check_broadcast_device(struct clock_event_device *dev)
{
- if ((tick_broadcast_device.evtdev &&
+ if ((dev->features & CLOCK_EVT_FEAT_DUMMY) ||
+ (tick_broadcast_device.evtdev &&
tick_broadcast_device.evtdev->rating >= dev->rating) ||
(dev->features & CLOCK_EVT_FEAT_C3STOP))
return 0;
struct request_queue *q,
struct request *rq)
{
- struct blk_trace *bt = q->blk_trace;
-
- /* if control ever passes through here, it's a request based driver */
- if (unlikely(bt && !bt->rq_based))
- bt->rq_based = true;
-
blk_add_trace_rq(q, rq, BLK_TA_COMPLETE);
}
blk_add_trace_bio(q, bio, BLK_TA_BOUNCE, 0);
}
-static void blk_add_trace_bio_complete(void *ignore, struct bio *bio, int error)
+static void blk_add_trace_bio_complete(void *ignore,
+ struct request_queue *q, struct bio *bio,
+ int error)
{
- struct request_queue *q;
- struct blk_trace *bt;
-
- if (!bio->bi_bdev)
- return;
-
- q = bdev_get_queue(bio->bi_bdev);
- bt = q->blk_trace;
-
- /*
- * Request based drivers will generate both rq and bio completions.
- * Ignore bio ones.
- */
- if (likely(!bt) || bt->rq_based)
- return;
-
blk_add_trace_bio(q, bio, BLK_TA_COMPLETE, error);
}
static struct ftrace_ops ftrace_list_end __read_mostly = {
.func = ftrace_stub,
- .flags = FTRACE_OPS_FL_RECURSION_SAFE,
+ .flags = FTRACE_OPS_FL_RECURSION_SAFE | FTRACE_OPS_FL_STUB,
};
/* ftrace_enabled is a method to turn ftrace on or off */
free_page(tmp);
}
- free_page((unsigned long)stat->pages);
stat->pages = NULL;
stat->start = NULL;
static struct pid * const ftrace_swapper_pid = &init_struct_pid;
+loff_t
+ftrace_filter_lseek(struct file *file, loff_t offset, int whence)
+{
+ loff_t ret;
+
+ if (file->f_mode & FMODE_READ)
+ ret = seq_lseek(file, offset, whence);
+ else
+ file->f_pos = ret = 1;
+
+ return ret;
+}
+
#ifdef CONFIG_DYNAMIC_FTRACE
#ifndef CONFIG_FTRACE_MCOUNT_RECORD
* routine, you can use ftrace_filter_write() for the write
* routine if @flag has FTRACE_ITER_FILTER set, or
* ftrace_notrace_write() if @flag has FTRACE_ITER_NOTRACE set.
- * ftrace_regex_lseek() should be used as the lseek routine, and
+ * ftrace_filter_lseek() should be used as the lseek routine, and
* release must call ftrace_regex_release().
*/
int
inode, file);
}
-loff_t
-ftrace_regex_lseek(struct file *file, loff_t offset, int whence)
-{
- loff_t ret;
-
- if (file->f_mode & FMODE_READ)
- ret = seq_lseek(file, offset, whence);
- else
- file->f_pos = ret = 1;
-
- return ret;
-}
-
static int ftrace_match(char *str, char *regex, int len, int type)
{
int matched = 0;
static int __init set_ftrace_notrace(char *str)
{
- strncpy(ftrace_notrace_buf, str, FTRACE_FILTER_SIZE);
+ strlcpy(ftrace_notrace_buf, str, FTRACE_FILTER_SIZE);
return 1;
}
__setup("ftrace_notrace=", set_ftrace_notrace);
static int __init set_ftrace_filter(char *str)
{
- strncpy(ftrace_filter_buf, str, FTRACE_FILTER_SIZE);
+ strlcpy(ftrace_filter_buf, str, FTRACE_FILTER_SIZE);
return 1;
}
__setup("ftrace_filter=", set_ftrace_filter);
.open = ftrace_filter_open,
.read = seq_read,
.write = ftrace_filter_write,
- .llseek = ftrace_regex_lseek,
+ .llseek = ftrace_filter_lseek,
.release = ftrace_regex_release,
};
.open = ftrace_notrace_open,
.read = seq_read,
.write = ftrace_notrace_write,
- .llseek = ftrace_regex_lseek,
+ .llseek = ftrace_filter_lseek,
.release = ftrace_regex_release,
};
.open = ftrace_graph_open,
.read = seq_read,
.write = ftrace_graph_write,
+ .llseek = ftrace_filter_lseek,
.release = ftrace_graph_release,
- .llseek = seq_lseek,
};
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
preempt_disable_notrace();
trace_recursion_set(TRACE_CONTROL_BIT);
do_for_each_ftrace_op(op, ftrace_control_list) {
- if (!ftrace_function_local_disabled(op) &&
+ if (!(op->flags & FTRACE_OPS_FL_STUB) &&
+ !ftrace_function_local_disabled(op) &&
ftrace_ops_test(op, ip))
op->func(ip, parent_ip, op, regs);
} while_for_each_ftrace_op(op);
.open = ftrace_pid_open,
.write = ftrace_pid_write,
.read = seq_read,
- .llseek = seq_lseek,
+ .llseek = ftrace_filter_lseek,
.release = ftrace_pid_release,
};
ftrace_startup_sysctl();
/* we are starting ftrace again */
- if (ftrace_ops_list != &ftrace_list_end) {
- if (ftrace_ops_list->next == &ftrace_list_end)
- ftrace_trace_function = ftrace_ops_list->func;
- else
- ftrace_trace_function = ftrace_ops_list_func;
- }
+ if (ftrace_ops_list != &ftrace_list_end)
+ update_ftrace_function();
} else {
/* stopping ftrace calls (just send to ftrace_stub) */
static int __init set_cmdline_ftrace(char *str)
{
- strncpy(bootup_tracer_buf, str, MAX_TRACER_SIZE);
+ strlcpy(bootup_tracer_buf, str, MAX_TRACER_SIZE);
default_bootup_tracer = bootup_tracer_buf;
/* We are using ftrace early, expand it */
ring_buffer_expanded = 1;
static int __init set_trace_boot_options(char *str)
{
- strncpy(trace_boot_options_buf, str, MAX_TRACER_SIZE);
+ strlcpy(trace_boot_options_buf, str, MAX_TRACER_SIZE);
trace_boot_options = trace_boot_options_buf;
return 0;
}
return;
WARN_ON_ONCE(!irqs_disabled());
- if (WARN_ON_ONCE(!current_trace->allocated_snapshot))
+ if (!current_trace->allocated_snapshot) {
+ /* Only the nop tracer should hit this when disabling */
+ WARN_ON_ONCE(current_trace != &nop_trace);
return;
+ }
arch_spin_lock(&ftrace_max_lock);
.open = stack_trace_filter_open,
.read = seq_read,
.write = ftrace_filter_write,
- .llseek = ftrace_regex_lseek,
+ .llseek = ftrace_filter_lseek,
.release = ftrace_regex_release,
};
.owner = GLOBAL_ROOT_UID,
.group = GLOBAL_ROOT_GID,
.proc_inum = PROC_USER_INIT_INO,
+ .may_mount_sysfs = true,
+ .may_mount_proc = true,
};
EXPORT_SYMBOL_GPL(init_user_ns);
static struct kmem_cache *user_ns_cachep __read_mostly;
-static bool new_idmap_permitted(struct user_namespace *ns, int cap_setid,
+static bool new_idmap_permitted(const struct file *file,
+ struct user_namespace *ns, int cap_setid,
struct uid_gid_map *map);
static void set_cred_user_ns(struct cred *cred, struct user_namespace *user_ns)
kgid_t group = new->egid;
int ret;
+ /*
+ * Verify that we can not violate the policy of which files
+ * may be accessed that is specified by the root directory,
+ * by verifying that the root directory is at the root of the
+ * mount namespace, which allows all files to be accessed.
+ */
+ if (current_chrooted())
+ return -EPERM;
+
/* The creator needs a mapping in the parent user namespace
* or else we won't be able to reasonably tell userspace who
* created a user_namespace.
set_cred_user_ns(new, ns);
+ update_mnt_policy(ns);
+
return 0;
}
if (map->nr_extents != 0)
goto out;
- /* Require the appropriate privilege CAP_SETUID or CAP_SETGID
- * over the user namespace in order to set the id mapping.
+ /*
+ * Adjusting namespace settings requires capabilities on the target.
*/
- if (cap_valid(cap_setid) && !ns_capable(ns, cap_setid))
+ if (cap_valid(cap_setid) && !file_ns_capable(file, ns, CAP_SYS_ADMIN))
goto out;
/* Get a buffer */
ret = -EPERM;
/* Validate the user is allowed to use user id's mapped to. */
- if (!new_idmap_permitted(ns, cap_setid, &new_map))
+ if (!new_idmap_permitted(file, ns, cap_setid, &new_map))
goto out;
/* Map the lower ids from the parent user namespace to the
&ns->projid_map, &ns->parent->projid_map);
}
-static bool new_idmap_permitted(struct user_namespace *ns, int cap_setid,
+static bool new_idmap_permitted(const struct file *file,
+ struct user_namespace *ns, int cap_setid,
struct uid_gid_map *new_map)
{
/* Allow mapping to your own filesystem ids */
u32 id = new_map->extent[0].lower_first;
if (cap_setid == CAP_SETUID) {
kuid_t uid = make_kuid(ns->parent, id);
- if (uid_eq(uid, current_fsuid()))
+ if (uid_eq(uid, file->f_cred->fsuid))
return true;
}
else if (cap_setid == CAP_SETGID) {
kgid_t gid = make_kgid(ns->parent, id);
- if (gid_eq(gid, current_fsgid()))
+ if (gid_eq(gid, file->f_cred->fsgid))
return true;
}
}
/* Allow the specified ids if we have the appropriate capability
* (CAP_SETUID or CAP_SETGID) over the parent user namespace.
+ * And the opener of the id file also had the appropriate capability.
*/
- if (ns_capable(ns->parent, cap_setid))
+ if (ns_capable(ns->parent, cap_setid) &&
+ file_ns_capable(file, ns->parent, cap_setid))
return true;
return false;
help
Enable fast lookup object identifier registry.
+config UCS2_STRING
+ tristate
+
endmenu
cmd_build_OID_registry = perl $(srctree)/$(src)/build_OID_registry $< $@
clean-files += oid_registry_data.c
+
+obj-$(CONFIG_UCS2_STRING) += ucs2_string.o
*/
#include <linux/kernel.h>
+#include <linux/printk.h>
#include <linux/spinlock.h>
#include <linux/tty.h>
#include <linux/wait.h>
wake_up_klogd();
}
}
-
-
entry = bucket_find_exact(bucket, ref);
if (!entry) {
+ /* must drop lock before calling dma_mapping_error */
+ put_hash_bucket(bucket, &flags);
+
if (dma_mapping_error(ref->dev, ref->dev_addr)) {
err_printk(ref->dev, NULL,
- "DMA-API: device driver tries "
- "to free an invalid DMA memory address\n");
- return;
+ "DMA-API: device driver tries to free an "
+ "invalid DMA memory address\n");
+ } else {
+ err_printk(ref->dev, NULL,
+ "DMA-API: device driver tries to free DMA "
+ "memory it has not allocated [device "
+ "address=0x%016llx] [size=%llu bytes]\n",
+ ref->dev_addr, ref->size);
}
- err_printk(ref->dev, NULL, "DMA-API: device driver tries "
- "to free DMA memory it has not allocated "
- "[device address=0x%016llx] [size=%llu bytes]\n",
- ref->dev_addr, ref->size);
- goto out;
+ return;
}
if (ref->size != entry->size) {
hash_bucket_del(entry);
dma_entry_free(entry);
-out:
put_hash_bucket(bucket, &flags);
}
ref.dev = dev;
ref.dev_addr = dma_addr;
bucket = get_hash_bucket(&ref, &flags);
- entry = bucket_find_exact(bucket, &ref);
- if (!entry)
- goto out;
+ list_for_each_entry(entry, &bucket->list, list) {
+ if (!exact_match(&ref, entry))
+ continue;
+
+ /*
+ * The same physical address can be mapped multiple
+ * times. Without a hardware IOMMU this results in the
+ * same device addresses being put into the dma-debug
+ * hash multiple times too. This can result in false
+ * positives being reported. Therefore we implement a
+ * best-fit algorithm here that updates the first entry
+ * from the hash that fits the reference value and is
+ * not currently listed as being checked.
+ */
+ if (entry->map_err_type == MAP_ERR_NOT_CHECKED) {
+ entry->map_err_type = MAP_ERR_CHECKED;
+ break;
+ }
+ }
- entry->map_err_type = MAP_ERR_CHECKED;
-out:
put_hash_bucket(bucket, &flags);
}
EXPORT_SYMBOL(debug_dma_mapping_error);
return kobj;
}
+static struct kobject *kobject_get_unless_zero(struct kobject *kobj)
+{
+ if (!kref_get_unless_zero(&kobj->kref))
+ kobj = NULL;
+ return kobj;
+}
+
/*
* kobject_cleanup - free kobject resources.
* @kobj: object to cleanup
list_for_each_entry(k, &kset->list, entry) {
if (kobject_name(k) && !strcmp(kobject_name(k), name)) {
- ret = kobject_get(k);
+ ret = kobject_get_unless_zero(k);
break;
}
}
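
kobject_get_unless_zero() exists so a lookup cannot resurrect an object whose last reference has already been dropped. The underlying kref_get_unless_zero() is essentially a CAS loop that refuses to increment from zero; a userspace sketch of that pattern with C11 atomics:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    static bool ref_get_unless_zero(atomic_int *refs)
    {
            int old = atomic_load(refs);

            while (old != 0)
                    if (atomic_compare_exchange_weak(refs, &old, old + 1))
                            return true;    /* took a reference */
            return false;                   /* object is being destroyed */
    }

    int main(void)
    {
            atomic_int live = 1, dying = 0;

            printf("live:  %d\n", ref_get_unless_zero(&live));   /* 1 */
            printf("dying: %d\n", ref_get_unless_zero(&dying));  /* 0 */
            return 0;
    }
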
if (!strcmp(str, "force"))
swiotlb_force = 1;
- return 1;
+ return 0;
}
-__setup("swiotlb=", setup_io_tlb_npages);
+early_param("swiotlb", setup_io_tlb_npages);
/* make io_tlb_overflow tunable too? */
unsigned long swiotlb_nr_tbl(void)
return io_tlb_nslabs;
}
EXPORT_SYMBOL_GPL(swiotlb_nr_tbl);
+
+/* default to 64MB */
+#define IO_TLB_DEFAULT_SIZE (64UL<<20)
+unsigned long swiotlb_size_or_default(void)
+{
+ unsigned long size;
+
+ size = io_tlb_nslabs << IO_TLB_SHIFT;
+
+ return size ? size : (IO_TLB_DEFAULT_SIZE);
+}
+
/* Note that this doesn't work with highmem page */
static dma_addr_t swiotlb_virt_to_bus(struct device *hwdev,
volatile void *address)
void __init
swiotlb_init(int verbose)
{
- /* default to 64MB */
- size_t default_size = 64UL<<20;
+ size_t default_size = IO_TLB_DEFAULT_SIZE;
unsigned char *vstart;
unsigned long bytes;
--- /dev/null
+#include <linux/ucs2_string.h>
+#include <linux/module.h>
+
+/* Return the number of unicode characters in data */
+unsigned long
+ucs2_strnlen(const ucs2_char_t *s, size_t maxlength)
+{
+ unsigned long length = 0;
+
+ while (*s++ != 0 && length < maxlength)
+ length++;
+ return length;
+}
+EXPORT_SYMBOL(ucs2_strnlen);
+
+unsigned long
+ucs2_strlen(const ucs2_char_t *s)
+{
+ return ucs2_strnlen(s, ~0UL);
+}
+EXPORT_SYMBOL(ucs2_strlen);
+
+/*
+ * Return the length of this string, in bytes.
+ * Note: this is NOT the same as the number of unicode characters.
+ */
+unsigned long
+ucs2_strsize(const ucs2_char_t *data, unsigned long maxlength)
+{
+ return ucs2_strnlen(data, maxlength/sizeof(ucs2_char_t)) * sizeof(ucs2_char_t);
+}
+EXPORT_SYMBOL(ucs2_strsize);
+
+int
+ucs2_strncmp(const ucs2_char_t *a, const ucs2_char_t *b, size_t len)
+{
+ while (1) {
+ if (len == 0)
+ return 0;
+ if (*a < *b)
+ return -1;
+ if (*a > *b)
+ return 1;
+ if (*a == 0) /* implies *b == 0 */
+ return 0;
+ a++;
+ b++;
+ len--;
+ }
+}
+EXPORT_SYMBOL(ucs2_strncmp);
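
A quick userspace check of the semantics these helpers implement, lengths counted in 16-bit characters versus sizes counted in bytes (the sample string is arbitrary):

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef uint16_t ucs2_char_t;

    static unsigned long ucs2_strnlen(const ucs2_char_t *s, size_t maxlength)
    {
            unsigned long length = 0;

            while (*s++ != 0 && length < maxlength)
                    length++;
            return length;
    }

    int main(void)
    {
            const ucs2_char_t boot[] = { 'B', 'o', 'o', 't', 0 };

            printf("chars: %lu\n", ucs2_strnlen(boot, ~0UL));      /* 4 */
            printf("bytes: %lu\n", ucs2_strnlen(boot, ~0UL) *
                   (unsigned long)sizeof(ucs2_char_t));            /* 8 */
            return 0;
    }
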
unsigned long addr;
struct file *file = get_file(vma->vm_file);
- vm_flags = vma->vm_flags;
- if (!(flags & MAP_NONBLOCK))
- vm_flags |= VM_POPULATE;
- addr = mmap_region(file, start, size, vm_flags, pgoff);
+ addr = mmap_region(file, start, size,
+ vma->vm_flags, pgoff);
fput(file);
if (IS_ERR_VALUE(addr)) {
err = addr;
mutex_unlock(&mapping->i_mmap_mutex);
}
- if (!(flags & MAP_NONBLOCK) && !(vma->vm_flags & VM_POPULATE)) {
- if (!has_write_lock)
- goto get_write_lock;
- vma->vm_flags |= VM_POPULATE;
- }
-
if (vma->vm_flags & VM_LOCKED) {
/*
* drop PG_Mlocked flag for over-mapped range
/* Return the number pages of memory we physically have, in PAGE_SIZE units. */
unsigned long hugetlb_total_pages(void)
{
- struct hstate *h = &default_hstate;
- return h->nr_huge_pages * pages_per_huge_page(h);
+ struct hstate *h;
+ unsigned long nr_total_pages = 0;
+
+ for_each_hstate(h)
+ nr_total_pages += h->nr_huge_pages * pages_per_huge_page(h);
+ return nr_total_pages;
}
static int hugetlb_acct_memory(struct hstate *h, long delta)
break;
}
- if (absent ||
+ /*
+ * We need to call hugetlb_fault for both hugepages under migration
+ * (in which case hugetlb_fault waits for the migration) and
+ * hwpoisoned hugepages (in which case we need to prevent the
+ * caller from accessing them). In order to do this, we use
+ * is_swap_pte here instead of is_hugetlb_entry_migration and
+ * is_hugetlb_entry_hwpoisoned, because it simply covers
+ * both cases, and because we can't follow correct pages
+ * directly from any kind of swap entry.
+ */
+ if (absent || is_swap_pte(huge_ptep_get(pte)) ||
((flags & FOLL_WRITE) && !pte_write(huge_ptep_get(pte)))) {
int ret;
tlb->mm = mm;
tlb->fullmm = fullmm;
+ tlb->need_flush_all = 0;
tlb->start = -1UL;
tlb->end = 0;
tlb->need_flush = 0;
}
EXPORT_SYMBOL(remap_pfn_range);
+/**
+ * vm_iomap_memory - remap memory to userspace
+ * @vma: user vma to map to
+ * @start: start of area
+ * @len: size of area
+ *
+ * This is a simplified io_remap_pfn_range() for common driver use. The
+ * driver just needs to give us the physical memory range to be mapped;
+ * we'll figure out the rest from the vma information.
+ *
+ * NOTE! Some drivers might want to tweak vma->vm_page_prot first to set
+ * up write-combining or similar behaviour.
+ */
+int vm_iomap_memory(struct vm_area_struct *vma, phys_addr_t start, unsigned long len)
+{
+ unsigned long vm_len, pfn, pages;
+
+ /* Check that the physical memory area passed in looks valid */
+ if (start + len < start)
+ return -EINVAL;
+ /*
+ * You *really* shouldn't map things that aren't page-aligned,
+ * but we've historically allowed it because IO memory might
+ * just have smaller alignment.
+ */
+ len += start & ~PAGE_MASK;
+ pfn = start >> PAGE_SHIFT;
+ pages = (len + ~PAGE_MASK) >> PAGE_SHIFT;
+ if (pfn + pages < pfn)
+ return -EINVAL;
+
+ /* We start the mapping 'vm_pgoff' pages into the area */
+ if (vma->vm_pgoff > pages)
+ return -EINVAL;
+ pfn += vma->vm_pgoff;
+ pages -= vma->vm_pgoff;
+
+ /* Can we fit all of the mapping? */
+ vm_len = vma->vm_end - vma->vm_start;
+ if (vm_len >> PAGE_SHIFT > pages)
+ return -EINVAL;
+
+ /* Ok, let it rip */
+ return io_remap_pfn_range(vma, vma->vm_start, pfn, vm_len, vma->vm_page_prot);
+}
+EXPORT_SYMBOL(vm_iomap_memory);
+
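
The heavy lifting in vm_iomap_memory() is bounds arithmetic: widen an unaligned start/length to whole pages and test every addition for wraparound. The same arithmetic, extracted into a standalone program with an assumed 4 KiB page:

    #include <stdio.h>

    #define PAGE_SHIFT 12
    #define PAGE_SIZE  (1UL << PAGE_SHIFT)
    #define PAGE_MASK  (~(PAGE_SIZE - 1))

    static int iomap_pages(unsigned long start, unsigned long len,
                           unsigned long *pfn, unsigned long *pages)
    {
            if (start + len < start)
                    return -1;                   /* length wraps */

            len += start & ~PAGE_MASK;           /* widen to page start */
            *pfn = start >> PAGE_SHIFT;
            *pages = (len + ~PAGE_MASK) >> PAGE_SHIFT;  /* round up */
            if (*pfn + *pages < *pfn)
                    return -1;                   /* page count wraps */
            return 0;
    }

    int main(void)
    {
            unsigned long pfn, pages;

            /* 0x2000 bytes starting mid-page cover three pages */
            if (!iomap_pages(0x10000800UL, 0x2000UL, &pfn, &pages))
                    printf("pfn=%#lx pages=%lu\n", pfn, pages);
            return 0;
    }
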
static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
unsigned long addr, unsigned long end,
pte_fn_t fn, void *data)
for (i = 0; i < MAX_NR_ZONES; i++) {
struct zone *zone = pgdat->node_zones + i;
- if (zone->wait_table)
+ /*
+ * wait_table may be allocated from boot memory;
+ * only free it here if it was allocated by vmalloc.
+ */
+ if (is_vmalloc_addr(zone->wait_table))
vfree(zone->wait_table);
}
newflags = vma->vm_flags & ~VM_LOCKED;
if (on)
- newflags |= VM_LOCKED | VM_POPULATE;
+ newflags |= VM_LOCKED;
tmp = vma->vm_end;
if (tmp > end)
* range with the first VMA. Also, skip undesirable VMA types.
*/
nend = min(end, vma->vm_end);
- if ((vma->vm_flags & (VM_IO | VM_PFNMAP | VM_POPULATE)) !=
- VM_POPULATE)
+ if (vma->vm_flags & (VM_IO | VM_PFNMAP))
continue;
if (nstart < vma->vm_start)
nstart = vma->vm_start;
struct vm_area_struct * vma, * prev = NULL;
if (flags & MCL_FUTURE)
- current->mm->def_flags |= VM_LOCKED | VM_POPULATE;
+ current->mm->def_flags |= VM_LOCKED;
else
- current->mm->def_flags &= ~(VM_LOCKED | VM_POPULATE);
+ current->mm->def_flags &= ~VM_LOCKED;
if (flags == MCL_FUTURE)
goto out;
newflags = vma->vm_flags & ~VM_LOCKED;
if (flags & MCL_CURRENT)
- newflags |= VM_LOCKED | VM_POPULATE;
+ newflags |= VM_LOCKED;
/* Ignore errors */
mlock_fixup(vma, &prev, vma->vm_start, vma->vm_end, newflags);
}
addr = mmap_region(file, addr, len, vm_flags, pgoff);
- if (!IS_ERR_VALUE(addr) && (vm_flags & VM_POPULATE))
+ if (!IS_ERR_VALUE(addr) &&
+ ((vm_flags & VM_LOCKED) ||
+ (flags & (MAP_POPULATE | MAP_NONBLOCK)) == MAP_POPULATE))
*populate = len;
return addr;
}
/* Check the cache first. */
/* (Cache hit rate is typically around 35%.) */
- vma = mm->mmap_cache;
+ vma = ACCESS_ONCE(mm->mmap_cache);
if (!(vma && vma->vm_end > addr && vma->vm_start <= addr)) {
struct rb_node *rb_node;
update_hiwater_rss(mm);
unmap_vmas(&tlb, vma, start, end);
free_pgtables(&tlb, vma, prev ? prev->vm_end : FIRST_USER_ADDRESS,
- next ? next->vm_start : 0);
+ next ? next->vm_start : USER_PGTABLES_CEILING);
tlb_finish_mmu(&tlb, start, end);
}
/* Use -1 here to ensure all VMAs in the mm are unmapped */
unmap_vmas(&tlb, vma, 0, -1);
- free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, 0);
+ free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
tlb_finish_mmu(&tlb, 0, -1);
/*
struct vm_area_struct *vma;
/* check the cache first */
- vma = mm->mmap_cache;
+ vma = ACCESS_ONCE(mm->mmap_cache);
if (vma && vma->vm_start <= addr && vma->vm_end > addr)
return vma;
if (IS_ERR(pgdat->kswapd)) {
/* failure at boot is fatal */
BUG_ON(system_state == SYSTEM_BOOTING);
- pgdat->kswapd = NULL;
pr_err("Failed to start kswapd on node %d\n", nid);
ret = PTR_ERR(pgdat->kswapd);
+ pgdat->kswapd = NULL;
}
return ret;
}
* all pending messages before the applicant is gone.
*/
del_timer_sync(&app->join_timer);
+
+ spin_lock(&app->lock);
mrp_mad_event(app, MRP_EVENT_TX);
mrp_pdu_queue(app);
+ spin_unlock(&app->lock);
+
mrp_queue_xmit(app);
dev_mc_del(dev, appl->group_address);
grp = &vlan_info->grp;
- /* Take it out of our own structures, but be sure to interlock with
- * HW accelerating devices or SW vlan input packet processing if
- * VLAN is not 0 (leave it there for 802.1p).
- */
- if (vlan_id)
- vlan_vid_del(real_dev, vlan_id);
-
grp->nr_vlan_devs--;
if (vlan->flags & VLAN_FLAG_MVRP)
vlan_gvrp_uninit_applicant(real_dev);
}
+ /* Take it out of our own structures, but be sure to interlock with
+ * HW accelerating devices or SW vlan input packet processing if
+ * VLAN is not 0 (leave it there for 802.1p).
+ */
+ if (vlan_id)
+ vlan_vid_del(real_dev, vlan_id);
+
/* Get rid of the vlan's reference to real_dev */
dev_put(real_dev);
}
struct sk_buff *skb;
int copied, error = -EINVAL;
+ msg->msg_namelen = 0;
+
if (sock->state != SS_CONNECTED)
return -ENOTCONN;
ax25_address src;
const unsigned char *mac = skb_mac_header(skb);
+ memset(sax, 0, sizeof(struct full_sockaddr_ax25));
ax25_addr_parse(mac + 1, skb->data - mac - 1, &src, NULL,
&digi, NULL, NULL);
sax->sax25_family = AF_AX25;
atomic_set(&bat_priv->mesh_state, BATADV_MESH_INACTIVE);
}
-int batadv_is_my_mac(const uint8_t *addr)
+int batadv_is_my_mac(struct batadv_priv *bat_priv, const uint8_t *addr)
{
const struct batadv_hard_iface *hard_iface;
if (hard_iface->if_status != BATADV_IF_ACTIVE)
continue;
+ if (hard_iface->soft_iface != bat_priv->soft_iface)
+ continue;
+
if (batadv_compare_eth(hard_iface->net_dev->dev_addr, addr)) {
rcu_read_unlock();
return 1;
int batadv_mesh_init(struct net_device *soft_iface);
void batadv_mesh_free(struct net_device *soft_iface);
-int batadv_is_my_mac(const uint8_t *addr);
+int batadv_is_my_mac(struct batadv_priv *bat_priv, const uint8_t *addr);
struct batadv_hard_iface *
batadv_seq_print_text_primary_if_get(struct seq_file *seq);
int batadv_batman_skb_recv(struct sk_buff *skb, struct net_device *dev,
goto out;
/* not for me */
- if (!batadv_is_my_mac(ethhdr->h_dest))
+ if (!batadv_is_my_mac(bat_priv, ethhdr->h_dest))
goto out;
icmp_packet = (struct batadv_icmp_packet_rr *)skb->data;
}
/* packet for me */
- if (batadv_is_my_mac(icmp_packet->dst))
+ if (batadv_is_my_mac(bat_priv, icmp_packet->dst))
return batadv_recv_my_icmp_packet(bat_priv, skb, hdr_size);
/* TTL exceeded */
return router;
}
-static int batadv_check_unicast_packet(struct sk_buff *skb, int hdr_size)
+static int batadv_check_unicast_packet(struct batadv_priv *bat_priv,
+ struct sk_buff *skb, int hdr_size)
{
struct ethhdr *ethhdr;
return -1;
/* not for me */
- if (!batadv_is_my_mac(ethhdr->h_dest))
+ if (!batadv_is_my_mac(bat_priv, ethhdr->h_dest))
return -1;
return 0;
char tt_flag;
size_t packet_size;
- if (batadv_check_unicast_packet(skb, hdr_size) < 0)
+ if (batadv_check_unicast_packet(bat_priv, skb, hdr_size) < 0)
return NET_RX_DROP;
/* I could need to modify it */
case BATADV_TT_RESPONSE:
batadv_inc_counter(bat_priv, BATADV_CNT_TT_RESPONSE_RX);
- if (batadv_is_my_mac(tt_query->dst)) {
+ if (batadv_is_my_mac(bat_priv, tt_query->dst)) {
/* packet needs to be linearized to access the TT
* changes
*/
struct batadv_roam_adv_packet *roam_adv_packet;
struct batadv_orig_node *orig_node;
- if (batadv_check_unicast_packet(skb, sizeof(*roam_adv_packet)) < 0)
+ if (batadv_check_unicast_packet(bat_priv, skb,
+ sizeof(*roam_adv_packet)) < 0)
goto out;
batadv_inc_counter(bat_priv, BATADV_CNT_TT_ROAM_ADV_RX);
roam_adv_packet = (struct batadv_roam_adv_packet *)skb->data;
- if (!batadv_is_my_mac(roam_adv_packet->dst))
+ if (!batadv_is_my_mac(bat_priv, roam_adv_packet->dst))
return batadv_route_unicast_packet(skb, recv_if);
/* check if it is a backbone gateway. we don't accept
* last time) the packet had an updated information or not
*/
curr_ttvn = (uint8_t)atomic_read(&bat_priv->tt.vn);
- if (!batadv_is_my_mac(unicast_packet->dest)) {
+ if (!batadv_is_my_mac(bat_priv, unicast_packet->dest)) {
orig_node = batadv_orig_hash_find(bat_priv,
unicast_packet->dest);
/* if it is not possible to find the orig_node representing the
if (is4addr)
hdr_size = sizeof(*unicast_4addr_packet);
- if (batadv_check_unicast_packet(skb, hdr_size) < 0)
+ if (batadv_check_unicast_packet(bat_priv, skb, hdr_size) < 0)
return NET_RX_DROP;
if (!batadv_check_unicast_ttvn(bat_priv, skb))
return NET_RX_DROP;
/* packet for me */
- if (batadv_is_my_mac(unicast_packet->dest)) {
+ if (batadv_is_my_mac(bat_priv, unicast_packet->dest)) {
if (is4addr) {
batadv_dat_inc_counter(bat_priv,
unicast_4addr_packet->subtype);
struct sk_buff *new_skb = NULL;
int ret;
- if (batadv_check_unicast_packet(skb, hdr_size) < 0)
+ if (batadv_check_unicast_packet(bat_priv, skb, hdr_size) < 0)
return NET_RX_DROP;
if (!batadv_check_unicast_ttvn(bat_priv, skb))
unicast_packet = (struct batadv_unicast_frag_packet *)skb->data;
/* packet for me */
- if (batadv_is_my_mac(unicast_packet->dest)) {
+ if (batadv_is_my_mac(bat_priv, unicast_packet->dest)) {
ret = batadv_frag_reassemble_skb(skb, bat_priv, &new_skb);
if (ret == NET_RX_DROP)
goto out;
/* ignore broadcasts sent by myself */
- if (batadv_is_my_mac(ethhdr->h_source))
+ if (batadv_is_my_mac(bat_priv, ethhdr->h_source))
goto out;
bcast_packet = (struct batadv_bcast_packet *)skb->data;
/* ignore broadcasts originated by myself */
- if (batadv_is_my_mac(bcast_packet->orig))
+ if (batadv_is_my_mac(bat_priv, bcast_packet->orig))
goto out;
if (bcast_packet->header.ttl < 2)
ethhdr = (struct ethhdr *)skb_mac_header(skb);
/* not for me */
- if (!batadv_is_my_mac(ethhdr->h_dest))
+ if (!batadv_is_my_mac(bat_priv, ethhdr->h_dest))
return NET_RX_DROP;
/* ignore own packets */
- if (batadv_is_my_mac(vis_packet->vis_orig))
+ if (batadv_is_my_mac(bat_priv, vis_packet->vis_orig))
return NET_RX_DROP;
- if (batadv_is_my_mac(vis_packet->sender_orig))
+ if (batadv_is_my_mac(bat_priv, vis_packet->sender_orig))
return NET_RX_DROP;
switch (vis_packet->vis_type) {
bool batadv_send_tt_response(struct batadv_priv *bat_priv,
struct batadv_tt_query_packet *tt_request)
{
- if (batadv_is_my_mac(tt_request->dst)) {
+ if (batadv_is_my_mac(bat_priv, tt_request->dst)) {
/* don't answer backbone gws! */
if (batadv_bla_is_backbone_gw_orig(bat_priv, tt_request->src))
return true;
/* Are we the target for this VIS packet? */
if (vis_server == BATADV_VIS_TYPE_SERVER_SYNC &&
- batadv_is_my_mac(vis_packet->target_orig))
+ batadv_is_my_mac(bat_priv, vis_packet->target_orig))
are_target = 1;
spin_lock_bh(&bat_priv->vis.hash_lock);
batadv_send_list_add(bat_priv, info);
/* ... we're not the recipient (and thus need to forward). */
- } else if (!batadv_is_my_mac(packet->target_orig)) {
+ } else if (!batadv_is_my_mac(bat_priv, packet->target_orig)) {
batadv_send_list_add(bat_priv, info);
}
if (flags & (MSG_OOB))
return -EOPNOTSUPP;
+ msg->msg_namelen = 0;
+
skb = skb_recv_datagram(sk, flags, noblock, &err);
if (!skb) {
if (sk->sk_shutdown & RCV_SHUTDOWN)
return err;
}
- msg->msg_namelen = 0;
-
copied = skb->len;
if (len < copied) {
msg->msg_flags |= MSG_TRUNC;
if (test_and_clear_bit(RFCOMM_DEFER_SETUP, &d->flags)) {
rfcomm_dlc_accept(d);
+ msg->msg_namelen = 0;
return 0;
}
sco_chan_del(sk, ECONNRESET);
break;
+ case BT_CONNECT2:
case BT_CONNECT:
case BT_DISCONN:
sco_chan_del(sk, ECONNRESET);
test_bit(BT_SK_DEFER_SETUP, &bt_sk(sk)->flags)) {
hci_conn_accept(pi->conn->hcon, 0);
sk->sk_state = BT_CONFIG;
+ msg->msg_namelen = 0;
release_sock(sk);
return 0;
return 0;
br_warn(br, "adding interface %s with same address "
"as a received packet\n",
- source->dev->name);
+ source ? source->dev->name : br->dev->name);
fdb_delete(br, fdb);
}
struct net_device *dev = p->dev;
struct net_bridge *br = p->br;
- if (netif_running(dev) && netif_oper_up(dev))
+ if (!(p->flags & BR_ADMIN_COST) &&
+ netif_running(dev) && netif_oper_up(dev))
p->path_cost = port_cost(dev);
if (!netif_running(br->dev))
#define BR_BPDU_GUARD 0x00000002
#define BR_ROOT_BLOCK 0x00000004
#define BR_MULTICAST_FAST_LEAVE 0x00000008
+#define BR_ADMIN_COST 0x00000010
#ifdef CONFIG_BRIDGE_IGMP_SNOOPING
u32 multicast_startup_queries_sent;
path_cost > BR_MAX_PATH_COST)
return -ERANGE;
+ p->flags |= BR_ADMIN_COST;
p->path_cost = path_cost;
br_configuration_update(p->br);
br_port_state_selection(p->br);
if (m->msg_flags&MSG_OOB)
goto read_error;
+ m->msg_namelen = 0;
+
skb = skb_recv_datagram(sk, flags, 0 , &ret);
if (!skb)
goto read_error;
if (gwj->src.dev == dev || gwj->dst.dev == dev) {
hlist_del(&gwj->list);
cgw_unregister_filter(gwj);
- kfree(gwj);
+ kmem_cache_free(cgw_cache, gwj);
}
}
}
hlist_for_each_entry_safe(gwj, nx, &cgw_list, list) {
hlist_del(&gwj->list);
cgw_unregister_filter(gwj);
- kfree(gwj);
+ kmem_cache_free(cgw_cache, gwj);
}
}
hlist_del(&gwj->list);
cgw_unregister_filter(gwj);
- kfree(gwj);
+ kmem_cache_free(cgw_cache, gwj);
err = 0;
break;
}
return;
}
#endif
- WARN_ON(in_interrupt());
static_key_slow_inc(&netstamp_needed);
}
EXPORT_SYMBOL(net_enable_timestamp);
}
skb_orphan(skb);
- nf_reset(skb);
if (unlikely(!is_skb_forwardable(dev, skb))) {
atomic_long_inc(&dev->rx_dropped);
skb->mark = 0;
secpath_reset(skb);
nf_reset(skb);
+ nf_reset_trace(skb);
return netif_rx(skb);
}
EXPORT_SYMBOL_GPL(dev_forward_skb);
struct net_device *dev = skb->dev;
const char *driver = "";
+ if (!net_ratelimit())
+ return;
+
if (dev && dev->dev.parent)
driver = dev_driver_string(dev->dev.parent);
if (dev->rx_handler)
return -EBUSY;
+ /* Note: rx_handler_data must be set before rx_handler */
rcu_assign_pointer(dev->rx_handler_data, rx_handler_data);
rcu_assign_pointer(dev->rx_handler, rx_handler);
ASSERT_RTNL();
RCU_INIT_POINTER(dev->rx_handler, NULL);
+ /* a reader seeing a non-NULL rx_handler in an rcu_read_lock()
+ * section is guaranteed to see a non-NULL rx_handler_data
+ * as well.
+ */
+ synchronize_net();
RCU_INIT_POINTER(dev->rx_handler_data, NULL);
}
EXPORT_SYMBOL_GPL(netdev_rx_handler_unregister);
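
The comment and synchronize_net() above encode a publish-subscribe ordering rule: data must be visible before the pointer that advertises it. Outside the kernel the same guarantee is spelled release/acquire; a minimal C11 sketch, with an acquire load standing in for rcu_dereference and a release store for rcu_assign_pointer:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static int handler_data;
    static _Atomic(int *) handler;    /* NULL until published */

    static void *reader(void *arg)
    {
            (void)arg;
            for (;;) {
                    int *h = atomic_load_explicit(&handler,
                                                  memory_order_acquire);
                    if (h) {
                            printf("data = %d\n", *h);  /* guaranteed 42 */
                            return NULL;
                    }
            }
    }

    int main(void)
    {
            pthread_t t;

            pthread_create(&t, NULL, reader, NULL);
            handler_data = 42;        /* set data first ... */
            atomic_store_explicit(&handler, &handler_data,
                                  memory_order_release); /* ... publish */
            pthread_join(t, NULL);
            return 0;
    }
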
ha->type = addr_type;
ha->refcount = 1;
ha->global_use = global;
- ha->synced = false;
+ ha->synced = 0;
list_add_tail_rcu(&ha->list, &list->list);
list->count++;
addr_len, ha->type);
if (err)
break;
- ha->synced = true;
+ ha->synced++;
ha->refcount++;
} else if (ha->refcount == 1) {
__hw_addr_del(to_list, ha->addr, addr_len, ha->type);
if (ha->synced) {
__hw_addr_del(to_list, ha->addr,
addr_len, ha->type);
- ha->synced = false;
+ ha->synced--;
__hw_addr_del(from_list, ha->addr,
addr_len, ha->type);
}
struct flow_flush_info *info = data;
struct tasklet_struct *tasklet;
- tasklet = this_cpu_ptr(&info->cache->percpu->flush_tasklet);
+ tasklet = &this_cpu_ptr(info->cache->percpu)->flush_tasklet;
tasklet->data = (unsigned long)info;
tasklet_schedule(tasklet);
}
flow->ports = *ports;
}
+ flow->thoff = (u16) nhoff;
+
return true;
}
EXPORT_SYMBOL(skb_flow_dissect);
}
if (ops->fill_info) {
data = nla_nest_start(skb, IFLA_INFO_DATA);
- if (data == NULL)
+ if (data == NULL) {
+ err = -EMSGSIZE;
goto err_cancel_link;
+ }
err = ops->fill_info(skb, dev);
if (err < 0)
goto err_cancel_data;
rcu_read_lock();
cb->seq = net->dev_base_seq;
- if (nlmsg_parse(cb->nlh, sizeof(struct rtgenmsg), tb, IFLA_MAX,
+ if (nlmsg_parse(cb->nlh, sizeof(struct ifinfomsg), tb, IFLA_MAX,
ifla_policy) >= 0) {
if (tb[IFLA_EXT_MASK])
u32 ext_filter_mask = 0;
u16 min_ifinfo_dump_size = 0;
- if (nlmsg_parse(nlh, sizeof(struct rtgenmsg), tb, IFLA_MAX,
+ if (nlmsg_parse(nlh, sizeof(struct ifinfomsg), tb, IFLA_MAX,
ifla_policy) >= 0) {
if (tb[IFLA_EXT_MASK])
ext_filter_mask = nla_get_u32(tb[IFLA_EXT_MASK]);
#include <linux/interrupt.h>
#include <linux/netdevice.h>
#include <linux/security.h>
+#include <linux/pid_namespace.h>
#include <linux/pid.h>
#include <linux/nsproxy.h>
#include <linux/slab.h>
if (!uid_valid(uid) || !gid_valid(gid))
return -EINVAL;
- if ((creds->pid == task_tgid_vnr(current) || nsown_capable(CAP_SYS_ADMIN)) &&
+ if ((creds->pid == task_tgid_vnr(current) ||
+ ns_capable(current->nsproxy->pid_ns->user_ns, CAP_SYS_ADMIN)) &&
((uid_eq(uid, cred->uid) || uid_eq(uid, cred->euid) ||
uid_eq(uid, cred->suid)) || nsown_capable(CAP_SETUID)) &&
((gid_eq(gid, cred->gid) || gid_eq(gid, cred->egid) ||
iph->frag_off |= htons(IP_MF);
offset += (skb->len - skb->mac_len - iph->ihl * 4);
} else {
- if (!(iph->frag_off & htons(IP_DF)))
- iph->id = htons(id++);
+ iph->id = htons(id++);
}
iph->tot_len = htons(skb->len - skb->mac_len);
iph->check = 0;
{
unsigned long now, next, next_sec, next_sched;
struct in_ifaddr *ifa;
+ struct hlist_node *n;
int i;
now = jiffies;
next = round_jiffies_up(now + ADDR_CHECK_FREQUENCY);
- rcu_read_lock();
for (i = 0; i < IN4_ADDR_HSIZE; i++) {
+ bool change_needed = false;
+
+ rcu_read_lock();
hlist_for_each_entry_rcu(ifa, &inet_addr_lst[i], hash) {
unsigned long age;
if (ifa->ifa_valid_lft != INFINITY_LIFE_TIME &&
age >= ifa->ifa_valid_lft) {
- struct in_ifaddr **ifap ;
-
- rtnl_lock();
- for (ifap = &ifa->ifa_dev->ifa_list;
- *ifap != NULL; ifap = &ifa->ifa_next) {
- if (*ifap == ifa)
- inet_del_ifa(ifa->ifa_dev,
- ifap, 1);
- }
- rtnl_unlock();
+ change_needed = true;
} else if (ifa->ifa_preferred_lft ==
INFINITY_LIFE_TIME) {
continue;
next = ifa->ifa_tstamp +
ifa->ifa_valid_lft * HZ;
- if (!(ifa->ifa_flags & IFA_F_DEPRECATED)) {
- ifa->ifa_flags |= IFA_F_DEPRECATED;
- rtmsg_ifa(RTM_NEWADDR, ifa, NULL, 0);
- }
+ if (!(ifa->ifa_flags & IFA_F_DEPRECATED))
+ change_needed = true;
} else if (time_before(ifa->ifa_tstamp +
ifa->ifa_preferred_lft * HZ,
next)) {
ifa->ifa_preferred_lft * HZ;
}
}
+ rcu_read_unlock();
+ if (!change_needed)
+ continue;
+ rtnl_lock();
+ hlist_for_each_entry_safe(ifa, n, &inet_addr_lst[i], hash) {
+ unsigned long age;
+
+ if (ifa->ifa_flags & IFA_F_PERMANENT)
+ continue;
+
+ /* We try to batch several events at once. */
+ age = (now - ifa->ifa_tstamp +
+ ADDRCONF_TIMER_FUZZ_MINUS) / HZ;
+
+ if (ifa->ifa_valid_lft != INFINITY_LIFE_TIME &&
+ age >= ifa->ifa_valid_lft) {
+ struct in_ifaddr **ifap;
+
+ for (ifap = &ifa->ifa_dev->ifa_list;
+ *ifap != NULL; ifap = &(*ifap)->ifa_next) {
+ if (*ifap == ifa) {
+ inet_del_ifa(ifa->ifa_dev,
+ ifap, 1);
+ break;
+ }
+ }
+ } else if (ifa->ifa_preferred_lft !=
+ INFINITY_LIFE_TIME &&
+ age >= ifa->ifa_preferred_lft &&
+ !(ifa->ifa_flags & IFA_F_DEPRECATED)) {
+ ifa->ifa_flags |= IFA_F_DEPRECATED;
+ rtmsg_ifa(RTM_NEWADDR, ifa, NULL, 0);
+ }
+ }
+ rtnl_unlock();
}
- rcu_read_unlock();
next_sec = round_jiffies_up(next);
next_sched = next;
if (nlh->nlmsg_flags & NLM_F_EXCL ||
!(nlh->nlmsg_flags & NLM_F_REPLACE))
return -EEXIST;
-
- set_ifa_lifetime(ifa_existing, valid_lft, prefered_lft);
+ ifa = ifa_existing;
+ set_ifa_lifetime(ifa, valid_lft, prefered_lft);
+ cancel_delayed_work(&check_lifetime_work);
+ schedule_delayed_work(&check_lifetime_work, 0);
+ rtmsg_ifa(RTM_NEWADDR, ifa, nlh, NETLINK_CB(skb).portid);
+ blocking_notifier_call_chain(&inetaddr_chain, NETDEV_UP, ifa);
}
return 0;
}
/* skb is pure payload to encrypt */
- err = -ENOMEM;
-
esp = x->data;
aead = esp->aead;
alen = crypto_aead_authsize(aead);
}
tmp = esp_alloc_tmp(aead, nfrags + sglists, seqhilen);
- if (!tmp)
+ if (!tmp) {
+ err = -ENOMEM;
goto error;
+ }
seqhi = esp_tmp_seqhi(tmp);
iv = esp_tmp_iv(aead, tmp, seqhilen);
if (!head->dev)
goto out_rcu_unlock;
- /* skb dst is stale, drop it, and perform route lookup again */
- skb_dst_drop(head);
+ /* skb has no dst, perform route lookup again */
iph = ip_hdr(head);
err = ip_route_input_noref(head, iph->daddr, iph->saddr,
iph->tos, head->dev);
qp->q.max_size = skb->len + ihl;
if (qp->q.last_in == (INET_FRAG_FIRST_IN | INET_FRAG_LAST_IN) &&
- qp->q.meat == qp->q.len)
- return ip_frag_reasm(qp, prev, dev);
+ qp->q.meat == qp->q.len) {
+ unsigned long orefdst = skb->_skb_refdst;
+ skb->_skb_refdst = 0UL;
+ err = ip_frag_reasm(qp, prev, dev);
+ skb->_skb_refdst = orefdst;
+ return err;
+ }
+
+ skb_dst_drop(skb);
inet_frag_lru_move(&qp->q);
return -EINPROGRESS;
}
for (i++; i < CONF_NAMESERVERS_MAX; i++)
if (ic_nameservers[i] != NONE)
- pr_cont(", nameserver%u=%pI4\n", i, &ic_nameservers[i]);
+ pr_cont(", nameserver%u=%pI4", i, &ic_nameservers[i]);
+ pr_cont("\n");
#endif /* !SILENT */
return 0;
If unsure, say Y.
-config IP_NF_QUEUE
- tristate "IP Userspace queueing via NETLINK (OBSOLETE)"
- depends on NETFILTER_ADVANCED
- help
- Netfilter has the ability to queue packets to user space: the
- netlink device can be used to access them using this driver.
-
- This option enables the old IPv4-only "ip_queue" implementation
- which has been obsoleted by the new "nfnetlink_queue" code (see
- CONFIG_NETFILTER_NETLINK_QUEUE).
-
- To compile it as a module, choose M here. If unsure, say N.
-
config IP_NF_IPTABLES
tristate "IP tables support (required for filtering/masq/NAT)"
default m if NETFILTER_ADVANCED=n
return dev_match;
}
+static bool rpfilter_is_local(const struct sk_buff *skb)
+{
+ const struct rtable *rt = skb_rtable(skb);
+ return rt && (rt->rt_flags & RTCF_LOCAL);
+}
+
static bool rpfilter_mt(const struct sk_buff *skb, struct xt_action_param *par)
{
const struct xt_rpfilter_info *info;
info = par->matchinfo;
invert = info->flags & XT_RPFILTER_INVERT;
- if (par->in->flags & IFF_LOOPBACK)
+ if (rpfilter_is_local(skb))
return true ^ invert;
iph = ip_hdr(skb);
* hasn't changed since we received the original syn, but I see
* no easy way to do this.
*/
- flowi4_init_output(&fl4, 0, sk->sk_mark, RT_CONN_FLAGS(sk),
- RT_SCOPE_UNIVERSE, IPPROTO_TCP,
+ flowi4_init_output(&fl4, sk->sk_bound_dev_if, sk->sk_mark,
+ RT_CONN_FLAGS(sk), RT_SCOPE_UNIVERSE, IPPROTO_TCP,
inet_sk_flowi_flags(sk),
(opt && opt->srr) ? opt->faddr : ireq->rmt_addr,
ireq->loc_addr, th->source, th->dest);
#define FLAG_DSACKING_ACK 0x800 /* SACK blocks contained D-SACK info */
#define FLAG_NONHEAD_RETRANS_ACKED 0x1000 /* Non-head rexmitted data was ACKed */
#define FLAG_SACK_RENEGING 0x2000 /* snd_una advanced to a sacked seq */
+#define FLAG_UPDATE_TS_RECENT 0x4000 /* tcp_replace_ts_recent() */
#define FLAG_ACKED (FLAG_DATA_ACKED|FLAG_SYN_ACKED)
#define FLAG_NOT_DUP (FLAG_DATA|FLAG_WIN_UPDATE|FLAG_ACKED)
if (tcp_is_reno(tp))
tcp_reset_reno_sack(tp);
- if (!how) {
- /* Push undo marker, if it was plain RTO and nothing
- * was retransmitted. */
- tp->undo_marker = tp->snd_una;
- } else {
+ tp->undo_marker = tp->snd_una;
+ if (how) {
tp->sacked_out = 0;
tp->fackets_out = 0;
}
}
}
+static void tcp_store_ts_recent(struct tcp_sock *tp)
+{
+ tp->rx_opt.ts_recent = tp->rx_opt.rcv_tsval;
+ tp->rx_opt.ts_recent_stamp = get_seconds();
+}
+
+static void tcp_replace_ts_recent(struct tcp_sock *tp, u32 seq)
+{
+ if (tp->rx_opt.saw_tstamp && !after(seq, tp->rcv_wup)) {
+ /* PAWS bug workaround wrt. ACK frames, the PAWS discard
+ * extra check below makes sure this can only happen
+ * for pure ACK frames. -DaveM
+ *
+ * Not only that, it also occurs for expired timestamps.
+ */
+
+ if (tcp_paws_check(&tp->rx_opt, 0))
+ tcp_store_ts_recent(tp);
+ }
+}
+
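
tcp_replace_ts_recent() leans on after(), which is serial-number arithmetic: a signed subtraction keeps comparisons correct across 32-bit sequence and timestamp wraparound. A tiny demonstration:

    #include <stdint.h>
    #include <stdio.h>

    /* after(a, b): is a "later" than b, modulo 2^32? */
    static int after(uint32_t a, uint32_t b)
    {
            return (int32_t)(b - a) < 0;
    }

    int main(void)
    {
            /* a plain '>' gets wraparound wrong; after() does not */
            printf("%d\n", after(5, 0xFFFFFFF0u));  /* 1: 5 is later */
            printf("%d\n", after(0xFFFFFFF0u, 5));  /* 0 */
            return 0;
    }
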
/* This routine deals with incoming acks, but not outgoing ones. */
static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)
{
prior_fackets = tp->fackets_out;
prior_in_flight = tcp_packets_in_flight(tp);
+ /* ts_recent update must be made after we are sure that the packet
+ * is in window.
+ */
+ if (flag & FLAG_UPDATE_TS_RECENT)
+ tcp_replace_ts_recent(tp, TCP_SKB_CB(skb)->seq);
+
if (!(flag & FLAG_SLOWPATH) && after(ack, prior_snd_una)) {
/* Window is constant, pure forward advance.
* No more checks are required.
EXPORT_SYMBOL(tcp_parse_md5sig_option);
#endif
-static inline void tcp_store_ts_recent(struct tcp_sock *tp)
-{
- tp->rx_opt.ts_recent = tp->rx_opt.rcv_tsval;
- tp->rx_opt.ts_recent_stamp = get_seconds();
-}
-
-static inline void tcp_replace_ts_recent(struct tcp_sock *tp, u32 seq)
-{
- if (tp->rx_opt.saw_tstamp && !after(seq, tp->rcv_wup)) {
- /* PAWS bug workaround wrt. ACK frames, the PAWS discard
- * extra check below makes sure this can only happen
- * for pure ACK frames. -DaveM
- *
- * Not only, also it occurs for expired timestamps.
- */
-
- if (tcp_paws_check(&tp->rx_opt, 0))
- tcp_store_ts_recent(tp);
- }
-}
-
/* Sorry, PAWS as specified is broken wrt. pure-ACKs -DaveM
*
* It is not fatal. If this ACK does _not_ change critical state (seqs, window)
return 0;
step5:
- if (tcp_ack(sk, skb, FLAG_SLOWPATH) < 0)
+ if (tcp_ack(sk, skb, FLAG_SLOWPATH | FLAG_UPDATE_TS_RECENT) < 0)
goto discard;
- /* ts_recent update must be made after we are sure that the packet
- * is in window.
- */
- tcp_replace_ts_recent(tp, TCP_SKB_CB(skb)->seq);
-
tcp_rcv_rtt_measure_ts(sk, skb);
/* Process urgent data. */
/* step 5: check the ACK field */
if (true) {
- int acceptable = tcp_ack(sk, skb, FLAG_SLOWPATH) > 0;
+ int acceptable = tcp_ack(sk, skb, FLAG_SLOWPATH |
+ FLAG_UPDATE_TS_RECENT) > 0;
switch (sk->sk_state) {
case TCP_SYN_RECV:
}
}
- /* ts_recent update must be made after we are sure that the packet
- * is in window.
- */
- tcp_replace_ts_recent(tp, TCP_SKB_CB(skb)->seq);
-
/* step 6: check the URG bit */
tcp_urg(sk, skb, th);
goto send_now;
}
- /* Ok, it looks like it is advisable to defer. */
- tp->tso_deferred = 1 | (jiffies << 1);
+ /* Ok, it looks like it is advisable to defer.
+ * Do not rearm the timer if it is already set, to avoid breaking TCP ACK clocking.
+ */
+ if (!tp->tso_deferred)
+ tp->tso_deferred = 1 | (jiffies << 1);
return true;
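
tp->tso_deferred packs two facts into a single word: bit 0 says "a deferral is pending" and the upper bits carry jiffies << 1, so the stored value can never be zero even when jiffies itself is. By only stamping it on the first deferral, the fix keeps the recorded start of the whole deferral period intact. A small sketch of the encoding (tso_defer_mark()/tso_defer_when() are hypothetical helper names):

    #include <stdio.h>

    /* Bit 0 marks "a TSO deferral is pending"; the remaining bits hold
     * the jiffies value shifted left by one.  OR-ing in 1 guarantees the
     * stored word is nonzero even when jiffies == 0, so "== 0" can still
     * mean "no deferral pending".
     */
    static unsigned long tso_defer_mark(unsigned long jiffies)
    {
        return 1UL | (jiffies << 1);
    }

    static unsigned long tso_defer_when(unsigned long marker)
    {
        return marker >> 1;    /* recover the stored jiffies */
    }

    int main(void)
    {
        unsigned long m = tso_defer_mark(0);       /* still nonzero */
        printf("marker=%lu stamped-at=%lu\n", m, tso_defer_when(m));

        /* The fix: a second deferral keeps the first timestamp. */
        if (!m)
            m = tso_defer_mark(12345);
        printf("marker=%lu stamped-at=%lu\n", m, tso_defer_when(m));
        return 0;
    }
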
*/
TCP_SKB_CB(skb)->when = tcp_time_stamp;
- /* make sure skb->data is aligned on arches that require it */
- if (unlikely(NET_IP_ALIGN && ((unsigned long)skb->data & 3))) {
+ /* make sure skb->data is aligned on arches that require it
+ * and check if ack-trimming & collapsing extended the headroom
+ * beyond what csum_start can cover.
+ */
+ if (unlikely((NET_IP_ALIGN && ((unsigned long)skb->data & 3)) ||
+ skb_headroom(skb) >= 0xFFFF)) {
struct sk_buff *nskb = __pskb_copy(skb, MAX_TCP_HEADER,
GFP_ATOMIC);
return nskb ? tcp_transmit_skb(sk, nskb, 0, GFP_ATOMIC) :
skb_reserve(skb, MAX_TCP_HEADER);
skb_dst_set(skb, dst);
+ security_skb_owned_by(skb, sk);
mss = dst_metric_advmss(dst);
if (tp->rx_opt.user_mss && tp->rx_opt.user_mss < mss)
void udp_destroy_sock(struct sock *sk)
{
+ struct udp_sock *up = udp_sk(sk);
bool slow = lock_sock_fast(sk);
udp_flush_pending_frames(sk);
unlock_sock_fast(sk, slow);
+ if (static_key_false(&udp_encap_needed) && up->encap_type) {
+ void (*encap_destroy)(struct sock *sk);
+ encap_destroy = ACCESS_ONCE(up->encap_destroy);
+ if (encap_destroy)
+ encap_destroy(sk);
+ }
}
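
udp_destroy_sock() now calls an optional encap_destroy hook so encapsulation users such as L2TP can tear down tunnel state before the socket disappears; the function pointer is sampled once with ACCESS_ONCE so a writer clearing it concurrently cannot split the check from the call. The callback-slot pattern in miniature (userspace sketch; types and names are stand-ins, not the kernel's):

    #include <stdio.h>

    struct usock {
        int encap_type;
        void (*encap_destroy)(struct usock *sk);
    };

    static void l2tp_teardown(struct usock *sk)
    {
        printf("tearing down tunnel state for %p\n", (void *)sk);
    }

    static void destroy_sock(struct usock *sk)
    {
        if (sk->encap_type) {
            /* one sampled load, like ACCESS_ONCE(up->encap_destroy) */
            void (*destroy)(struct usock *) = sk->encap_destroy;

            if (destroy)
                destroy(sk);
        }
    }

    int main(void)
    {
        struct usock sk = { .encap_type = 1, .encap_destroy = l2tp_teardown };

        destroy_sock(&sk);
        return 0;
    }
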
/*
static bool ipv6_chk_same_addr(struct net *net, const struct in6_addr *addr,
struct net_device *dev);
-static ATOMIC_NOTIFIER_HEAD(inet6addr_chain);
-
static struct ipv6_devconf ipv6_devconf __read_mostly = {
.forwarding = 0,
.hop_limit = IPV6_DEFAULT_HOPLIMIT,
rcu_read_unlock_bh();
if (likely(err == 0))
- atomic_notifier_call_chain(&inet6addr_chain, NETDEV_UP, ifa);
+ inet6addr_notifier_call_chain(NETDEV_UP, ifa);
else {
kfree(ifa);
ifa = ERR_PTR(err);
ipv6_ifa_notify(RTM_DELADDR, ifp);
- atomic_notifier_call_chain(&inet6addr_chain, NETDEV_DOWN, ifp);
+ inet6addr_notifier_call_chain(NETDEV_DOWN, ifp);
/*
* Purge or update corresponding prefix
static void init_loopback(struct net_device *dev)
{
struct inet6_dev *idev;
+ struct net_device *sp_dev;
+ struct inet6_ifaddr *sp_ifa;
+ struct rt6_info *sp_rt;
/* ::1 */
}
add_addr(idev, &in6addr_loopback, 128, IFA_HOST);
+
+ /* Add routes to other interfaces' IPv6 addresses */
+ for_each_netdev(dev_net(dev), sp_dev) {
+ if (!strcmp(sp_dev->name, dev->name))
+ continue;
+
+ idev = __in6_dev_get(sp_dev);
+ if (!idev)
+ continue;
+
+ read_lock_bh(&idev->lock);
+ list_for_each_entry(sp_ifa, &idev->addr_list, if_list) {
+
+ if (sp_ifa->flags & (IFA_F_DADFAILED | IFA_F_TENTATIVE))
+ continue;
+
+ sp_rt = addrconf_dst_alloc(idev, &sp_ifa->addr, 0);
+
+ /* Failure cases are ignored */
+ if (!IS_ERR(sp_rt))
+ ip6_ins_rt(sp_rt);
+ }
+ read_unlock_bh(&idev->lock);
+ }
}
static void addrconf_add_linklocal(struct inet6_dev *idev, const struct in6_addr *addr)
if (state != INET6_IFADDR_STATE_DEAD) {
__ipv6_ifa_notify(RTM_DELADDR, ifa);
- atomic_notifier_call_chain(&inet6addr_chain, NETDEV_DOWN, ifa);
+ inet6addr_notifier_call_chain(NETDEV_DOWN, ifa);
}
in6_ifa_put(ifa);
static int __net_init addrconf_init_net(struct net *net)
{
- int err;
+ int err = -ENOMEM;
struct ipv6_devconf *all, *dflt;
- err = -ENOMEM;
- all = &ipv6_devconf;
- dflt = &ipv6_devconf_dflt;
+ all = kmemdup(&ipv6_devconf, sizeof(ipv6_devconf), GFP_KERNEL);
+ if (all == NULL)
+ goto err_alloc_all;
- if (!net_eq(net, &init_net)) {
- all = kmemdup(all, sizeof(ipv6_devconf), GFP_KERNEL);
- if (all == NULL)
- goto err_alloc_all;
+ dflt = kmemdup(&ipv6_devconf_dflt, sizeof(ipv6_devconf_dflt), GFP_KERNEL);
+ if (dflt == NULL)
+ goto err_alloc_dflt;
- dflt = kmemdup(dflt, sizeof(ipv6_devconf_dflt), GFP_KERNEL);
- if (dflt == NULL)
- goto err_alloc_dflt;
- } else {
- /* these will be inherited by all namespaces */
- dflt->autoconf = ipv6_defaults.autoconf;
- dflt->disable_ipv6 = ipv6_defaults.disable_ipv6;
- }
+ /* these will be inherited by all namespaces */
+ dflt->autoconf = ipv6_defaults.autoconf;
+ dflt->disable_ipv6 = ipv6_defaults.disable_ipv6;
net->ipv6.devconf_all = all;
net->ipv6.devconf_dflt = dflt;
.exit = addrconf_exit_net,
};
-/*
- * Device notifier
- */
-
-int register_inet6addr_notifier(struct notifier_block *nb)
-{
- return atomic_notifier_chain_register(&inet6addr_chain, nb);
-}
-EXPORT_SYMBOL(register_inet6addr_notifier);
-
-int unregister_inet6addr_notifier(struct notifier_block *nb)
-{
- return atomic_notifier_chain_unregister(&inet6addr_chain, nb);
-}
-EXPORT_SYMBOL(unregister_inet6addr_notifier);
-
static struct rtnl_af_ops inet6_ops = {
.family = AF_INET6,
.fill_link_af = inet6_fill_link_af,
}
EXPORT_SYMBOL(__ipv6_addr_type);
+static ATOMIC_NOTIFIER_HEAD(inet6addr_chain);
+
+int register_inet6addr_notifier(struct notifier_block *nb)
+{
+ return atomic_notifier_chain_register(&inet6addr_chain, nb);
+}
+EXPORT_SYMBOL(register_inet6addr_notifier);
+
+int unregister_inet6addr_notifier(struct notifier_block *nb)
+{
+ return atomic_notifier_chain_unregister(&inet6addr_chain, nb);
+}
+EXPORT_SYMBOL(unregister_inet6addr_notifier);
+
+int inet6addr_notifier_call_chain(unsigned long val, void *v)
+{
+ return atomic_notifier_call_chain(&inet6addr_chain, val, v);
+}
+EXPORT_SYMBOL(inet6addr_notifier_call_chain);
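
Moving the inet6addr chain behind register/unregister/call wrappers (and exporting inet6addr_notifier_call_chain()) lets code outside addrconf raise address events without touching the chain head directly. The underlying notifier-chain idea, modeled in userspace (the kernel's version is an atomic, RCU-protected chain; this single-threaded sketch only shows the shape):

    #include <stdio.h>

    /* A minimal model of the notifier-chain pattern: a linked list of
     * callbacks invoked with an event code and a payload.
     */
    struct notifier_block {
        int (*notifier_call)(struct notifier_block *nb,
                             unsigned long event, void *data);
        struct notifier_block *next;
    };

    static struct notifier_block *chain;

    static void chain_register(struct notifier_block *nb)
    {
        nb->next = chain;
        chain = nb;
    }

    static int chain_call(unsigned long event, void *data)
    {
        int ret = 0;

        for (struct notifier_block *nb = chain; nb; nb = nb->next)
            ret = nb->notifier_call(nb, event, data);
        return ret;
    }

    static int on_addr_event(struct notifier_block *nb,
                             unsigned long event, void *data)
    {
        printf("address event %lu for %s\n", event, (const char *)data);
        return 0;
    }

    int main(void)
    {
        struct notifier_block nb = { .notifier_call = on_addr_event };

        chain_register(&nb);
        chain_call(1 /* NETDEV_UP */, "2001:db8::1");
        return 0;
    }
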
ipv6_addr_loopback(&hdr->daddr))
goto err;
+ /* RFC4291 Errata ID: 3480
+ * Interface-Local scope spans only a single interface on a
+ * node and is useful only for loopback transmission of
+ * multicast. Packets with interface-local scope received
+ * from another node must be discarded.
+ */
+ if (!(skb->pkt_type == PACKET_LOOPBACK ||
+ dev->flags & IFF_LOOPBACK) &&
+ ipv6_addr_is_multicast(&hdr->daddr) &&
+ IPV6_ADDR_MC_SCOPE(&hdr->daddr) == 1)
+ goto err;
+
/* RFC4291 2.7
* Nodes must not originate a packet to a multicast address whose scope
* field contains the reserved value 0; if such a packet is received, it
if (pfx_len - i >= 32)
mask = 0;
else
- mask = htonl(~((1 << (pfx_len - i)) - 1));
+ mask = htonl((1 << (i - pfx_len + 32)) - 1);
idx = i / 32;
addr->s6_addr32[idx] &= mask;
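
The corrected mask is the subtle part of this fix. For a word only partially covered by the prefix, p = pfx_len - i prefix bits remain, and (1 << (32 - p)) - 1 sets the low 32 - p bits in host order; after htonl() that keeps exactly the trailing host bits of the 32-bit group while the leading p prefix bits are cleared and then replaced from the new prefix. The old expression built the mask from the wrong end for any prefix length that is not a multiple of 32. A worked userspace check of one word (checksum adjustment omitted):

    #include <stdio.h>
    #include <stdint.h>
    #include <arpa/inet.h>

    /* Reproduce the fixed per-word mask from ip6t_npt_map_pfx() and
     * apply it to one 32-bit group of an IPv6 address (network order).
     */
    static uint32_t npt_word_mask(unsigned int pfx_len, unsigned int i)
    {
        if (pfx_len - i >= 32)
            return 0;                    /* whole word is prefix */
        /* keep the 32 - (pfx_len - i) suffix (host) bits */
        return htonl((1u << (i - pfx_len + 32)) - 1);
    }

    int main(void)
    {
        /* Rewrite a /48 prefix: the word at i=32 keeps its low 16 host
         * bits (0xbbbb) while the 16 prefix bits are replaced.
         */
        uint32_t word    = htonl(0xaaaabbbbu);
        uint32_t mask    = npt_word_mask(48, 32);
        uint32_t new_pfx = htonl(0xcccc0000u);

        word = (word & mask) | (new_pfx & ~mask);
        printf("result=%08x\n", ntohl(word));    /* prints ccccbbbb */
        return 0;
    }
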
static struct xt_target ip6t_npt_target_reg[] __read_mostly = {
{
.name = "SNPT",
+ .table = "mangle",
.target = ip6t_snpt_tg,
.targetsize = sizeof(struct ip6t_npt_tginfo),
.checkentry = ip6t_npt_checkentry,
},
{
.name = "DNPT",
+ .table = "mangle",
.target = ip6t_dnpt_tg,
.targetsize = sizeof(struct ip6t_npt_tginfo),
.checkentry = ip6t_npt_checkentry,
return ret;
}
+static bool rpfilter_is_local(const struct sk_buff *skb)
+{
+ const struct rt6_info *rt = (const void *) skb_dst(skb);
+ return rt && (rt->rt6i_flags & RTF_LOCAL);
+}
+
static bool rpfilter_mt(const struct sk_buff *skb, struct xt_action_param *par)
{
const struct xt_rpfilter_info *info = par->matchinfo;
struct ipv6hdr *iph;
bool invert = info->flags & XT_RPFILTER_INVERT;
- if (par->in->flags & IFF_LOOPBACK)
+ if (rpfilter_is_local(skb))
return true ^ invert;
iph = ipv6_hdr(skb);
}
if (fq->q.last_in == (INET_FRAG_FIRST_IN | INET_FRAG_LAST_IN) &&
- fq->q.meat == fq->q.len)
- return ip6_frag_reasm(fq, prev, dev);
+ fq->q.meat == fq->q.len) {
+ int res;
+ unsigned long orefdst = skb->_skb_refdst;
+
+ skb->_skb_refdst = 0UL;
+ res = ip6_frag_reasm(fq, prev, dev);
+ skb->_skb_refdst = orefdst;
+ return res;
+ }
+ skb_dst_drop(skb);
inet_frag_lru_move(&fq->q);
return -1;
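
Since ip6_frag_queue() now drops the dst of every queued fragment (the skb_dst_drop() above), the fragment that completes the set needs special care: its dst is still owned by the caller, so it is stashed in orefdst, hidden from ip6_frag_reasm() by zeroing _skb_refdst, and restored afterwards. The save/hide/restore pattern in schematic form (userspace sketch with a stand-in struct):

    #include <stdio.h>

    struct pkt {
        unsigned long refdst;   /* stand-in for skb->_skb_refdst */
    };

    /* Would steal or drop p->refdst if it were visible here. */
    static int reassemble(struct pkt *p)
    {
        return p->refdst == 0 ? 0 : -1;
    }

    int main(void)
    {
        struct pkt p = { .refdst = 0xdead0000UL };
        unsigned long saved = p.refdst;
        int res;

        p.refdst = 0UL;          /* hide the caller's dst reference */
        res = reassemble(&p);
        p.refdst = saved;        /* restore it for the caller's use */

        printf("res=%d refdst=%#lx\n", res, p.refdst);
        return 0;
    }
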
if (dst)
dst->ops->redirect(dst, sk, skb);
+ goto out;
}
if (type == ICMPV6_PKT_TOOBIG) {
void udpv6_destroy_sock(struct sock *sk)
{
+ struct udp_sock *up = udp_sk(sk);
lock_sock(sk);
udp_v6_flush_pending_frames(sk);
release_sock(sk);
+ if (static_key_false(&udpv6_encap_needed) && up->encap_type) {
+ void (*encap_destroy)(struct sock *sk);
+ encap_destroy = ACCESS_ONCE(up->encap_destroy);
+ if (encap_destroy)
+ encap_destroy(sk);
+ }
+
inet6_destroy_sock(sk);
}
IRDA_DEBUG(4, "%s()\n", __func__);
+ msg->msg_namelen = 0;
+
skb = skb_recv_datagram(sk, flags & ~MSG_DONTWAIT,
flags & MSG_DONTWAIT, &err);
if (!skb)
NULL, NULL, NULL);
/* Check if we got some results */
- if (!self->cachedaddr)
- return -EAGAIN; /* Didn't find any devices */
+ if (!self->cachedaddr) {
+ err = -EAGAIN; /* Didn't find any devices */
+ goto out;
+ }
daddr = self->cachedaddr;
/* Cleanup */
self->cachedaddr = 0;
{
struct iriap_cb *self;
- IRDA_DEBUG(4, "%s(), reason=%s\n", __func__, irlmp_reasons[reason]);
+ IRDA_DEBUG(4, "%s(), reason=%s [%d]\n", __func__,
+ irlmp_reason_str(reason), reason);
self = instance;
"LM_LAP_RESET",
"LM_INIT_DISCONNECT",
"ERROR, NOT USED",
+ "UNKNOWN",
};
+const char *irlmp_reason_str(LM_REASON reason)
+{
+ reason = min_t(size_t, reason, ARRAY_SIZE(irlmp_reasons) - 1);
+ return irlmp_reasons[reason];
+}
+
/*
* Function irlmp_init (void)
*
{
struct lsap_cb *lsap;
- IRDA_DEBUG(1, "%s(), reason=%s\n", __func__, irlmp_reasons[reason]);
+ IRDA_DEBUG(1, "%s(), reason=%s [%d]\n", __func__,
+ irlmp_reason_str(reason), reason);
IRDA_ASSERT(self != NULL, return;);
IRDA_ASSERT(self->magic == LMP_LSAP_MAGIC, return;);
#define TRGCLS_SIZE (sizeof(((struct iucv_message *)0)->class))
-/* macros to set/get socket control buffer at correct offset */
-#define CB_TAG(skb) ((skb)->cb) /* iucv message tag */
-#define CB_TAG_LEN (sizeof(((struct iucv_message *) 0)->tag))
-#define CB_TRGCLS(skb) ((skb)->cb + CB_TAG_LEN) /* iucv msg target class */
-#define CB_TRGCLS_LEN (TRGCLS_SIZE)
-
#define __iucv_sock_wait(sk, condition, timeo, ret) \
do { \
DEFINE_WAIT(__wait); \
/* increment and save iucv message tag for msg_completion cbk */
txmsg.tag = iucv->send_tag++;
- memcpy(CB_TAG(skb), &txmsg.tag, CB_TAG_LEN);
+ IUCV_SKB_CB(skb)->tag = txmsg.tag;
if (iucv->transport == AF_IUCV_TRANS_HIPER) {
atomic_inc(&iucv->msg_sent);
return -ENOMEM;
/* copy target class to control buffer of new skb */
- memcpy(CB_TRGCLS(nskb), CB_TRGCLS(skb), CB_TRGCLS_LEN);
+ IUCV_SKB_CB(nskb)->class = IUCV_SKB_CB(skb)->class;
/* copy data fragment */
memcpy(nskb->data, skb->data + copied, size);
/* store msg target class in the second 4 bytes of skb ctrl buffer */
/* Note: the first 4 bytes are reserved for msg tag */
- memcpy(CB_TRGCLS(skb), &msg->class, CB_TRGCLS_LEN);
+ IUCV_SKB_CB(skb)->class = msg->class;
/* check for special IPRM messages (e.g. iucv_sock_shutdown) */
if ((msg->flags & IUCV_IPRMDATA) && len > 7) {
}
}
+ IUCV_SKB_CB(skb)->offset = 0;
if (sock_queue_rcv_skb(sk, skb))
skb_queue_head(&iucv_sk(sk)->backlog_skb_q, skb);
}
unsigned int copied, rlen;
struct sk_buff *skb, *rskb, *cskb;
int err = 0;
+ u32 offset;
+
+ msg->msg_namelen = 0;
if ((sk->sk_state == IUCV_DISCONN) &&
skb_queue_empty(&iucv->backlog_skb_q) &&
return err;
}
- rlen = skb->len; /* real length of skb */
+ offset = IUCV_SKB_CB(skb)->offset;
+ rlen = skb->len - offset; /* real length of skb */
copied = min_t(unsigned int, rlen, len);
if (!rlen)
sk->sk_shutdown = sk->sk_shutdown | RCV_SHUTDOWN;
cskb = skb;
- if (skb_copy_datagram_iovec(cskb, 0, msg->msg_iov, copied)) {
+ if (skb_copy_datagram_iovec(cskb, offset, msg->msg_iov, copied)) {
if (!(flags & MSG_PEEK))
skb_queue_head(&sk->sk_receive_queue, skb);
return -EFAULT;
* get the trgcls from the control buffer of the skb due to
* fragmentation of original iucv message. */
err = put_cmsg(msg, SOL_IUCV, SCM_IUCV_TRGCLS,
- CB_TRGCLS_LEN, CB_TRGCLS(skb));
+ sizeof(IUCV_SKB_CB(skb)->class),
+ (void *)&IUCV_SKB_CB(skb)->class);
if (err) {
if (!(flags & MSG_PEEK))
skb_queue_head(&sk->sk_receive_queue, skb);
/* SOCK_STREAM: re-queue skb if it contains unreceived data */
if (sk->sk_type == SOCK_STREAM) {
- skb_pull(skb, copied);
- if (skb->len) {
- skb_queue_head(&sk->sk_receive_queue, skb);
+ if (copied < rlen) {
+ IUCV_SKB_CB(skb)->offset = offset + copied;
goto done;
}
}
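
The recvmsg rework replaces skb_pull() with a per-skb offset kept in the control buffer: a partially consumed skb stays on the receive queue unmodified, so its length and tag fields remain stable for MSG_PEEK and for the tag matching in the transmit-done path. Offset-based partial reads, sketched in userspace (hypothetical buffer type):

    #include <stdio.h>
    #include <string.h>

    /* The buffer itself is never trimmed; only the per-buffer offset
     * advances, as IUCV_SKB_CB(skb)->offset does in the patch.
     */
    struct rbuf {
        const char *data;
        size_t len;
        size_t offset;
    };

    static size_t rbuf_read(struct rbuf *b, char *out, size_t want)
    {
        size_t avail = b->len - b->offset;
        size_t n = want < avail ? want : avail;

        memcpy(out, b->data + b->offset, n);
        b->offset += n;          /* requeue point for the next read */
        return n;
    }

    int main(void)
    {
        struct rbuf b = { .data = "hello world", .len = 11 };
        char out[6] = {0};

        rbuf_read(&b, out, 5);
        printf("first=%s rest-at=%zu\n", out, b.offset);
        rbuf_read(&b, out, 5);   /* continues at " worl" */
        return 0;
    }
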
spin_lock_bh(&iucv->message_q.lock);
rskb = skb_dequeue(&iucv->backlog_skb_q);
while (rskb) {
+ IUCV_SKB_CB(rskb)->offset = 0;
if (sock_queue_rcv_skb(sk, rskb)) {
skb_queue_head(&iucv->backlog_skb_q,
rskb);
spin_lock_irqsave(&list->lock, flags);
while (list_skb != (struct sk_buff *)list) {
- if (!memcmp(&msg->tag, CB_TAG(list_skb), CB_TAG_LEN)) {
+ if (msg->tag == IUCV_SKB_CB(list_skb)->tag) {
this = list_skb;
break;
}
skb_pull(skb, sizeof(struct af_iucv_trans_hdr));
skb_reset_transport_header(skb);
skb_reset_network_header(skb);
+ IUCV_SKB_CB(skb)->offset = 0;
spin_lock(&iucv->message_q.lock);
if (skb_queue_empty(&iucv->backlog_skb_q)) {
if (sock_queue_rcv_skb(sk, skb)) {
/* fall through and receive zero length data */
case 0:
/* plain data frame */
- memcpy(CB_TRGCLS(skb), &trans_hdr->iucv_hdr.class,
- CB_TRGCLS_LEN);
+ IUCV_SKB_CB(skb)->class = trans_hdr->iucv_hdr.class;
err = afiucv_hs_callback_rx(sk, skb);
break;
default:
hdr->sadb_msg_pid = c->portid;
hdr->sadb_msg_version = PF_KEY_V2;
hdr->sadb_msg_errno = (uint8_t) 0;
+ hdr->sadb_msg_satype = SADB_SATYPE_UNSPEC;
hdr->sadb_msg_len = (sizeof(struct sadb_msg) / sizeof(uint64_t));
pfkey_broadcast(skb_out, GFP_ATOMIC, BROADCAST_ALL, NULL, c->net);
return 0;
static void l2tp_session_set_header_len(struct l2tp_session *session, int version);
static void l2tp_tunnel_free(struct l2tp_tunnel *tunnel);
-static void l2tp_tunnel_closeall(struct l2tp_tunnel *tunnel);
static inline struct l2tp_net *l2tp_pernet(struct net *net)
{
} else {
/* Socket is owned by kernelspace */
sk = tunnel->sock;
+ sock_hold(sk);
}
out:
}
sock_put(sk);
}
+ sock_put(sk);
}
EXPORT_SYMBOL_GPL(l2tp_tunnel_sock_put);
struct sk_buff *skbp;
struct sk_buff *tmp;
u32 ns = L2TP_SKB_CB(skb)->ns;
- struct l2tp_stats *sstats;
spin_lock_bh(&session->reorder_q.lock);
- sstats = &session->stats;
skb_queue_walk_safe(&session->reorder_q, skbp, tmp) {
if (L2TP_SKB_CB(skbp)->ns > ns) {
__skb_queue_before(&session->reorder_q, skbp, skb);
"%s: pkt %hu, inserted before %hu, reorder_q len=%d\n",
session->name, ns, L2TP_SKB_CB(skbp)->ns,
skb_queue_len(&session->reorder_q));
- u64_stats_update_begin(&sstats->syncp);
- sstats->rx_oos_packets++;
- u64_stats_update_end(&sstats->syncp);
+ atomic_long_inc(&session->stats.rx_oos_packets);
goto out;
}
}
{
struct l2tp_tunnel *tunnel = session->tunnel;
int length = L2TP_SKB_CB(skb)->length;
- struct l2tp_stats *tstats, *sstats;
/* We're about to requeue the skb, so return resources
* to its current owner (a socket receive buffer).
*/
skb_orphan(skb);
- tstats = &tunnel->stats;
- u64_stats_update_begin(&tstats->syncp);
- sstats = &session->stats;
- u64_stats_update_begin(&sstats->syncp);
- tstats->rx_packets++;
- tstats->rx_bytes += length;
- sstats->rx_packets++;
- sstats->rx_bytes += length;
- u64_stats_update_end(&tstats->syncp);
- u64_stats_update_end(&sstats->syncp);
+ atomic_long_inc(&tunnel->stats.rx_packets);
+ atomic_long_add(length, &tunnel->stats.rx_bytes);
+ atomic_long_inc(&session->stats.rx_packets);
+ atomic_long_add(length, &session->stats.rx_bytes);
if (L2TP_SKB_CB(skb)->has_seq) {
/* Bump our Nr */
{
struct sk_buff *skb;
struct sk_buff *tmp;
- struct l2tp_stats *sstats;
/* If the pkt at the head of the queue has the nr that we
* expect to send up next, dequeue it and any other
*/
start:
spin_lock_bh(&session->reorder_q.lock);
- sstats = &session->stats;
skb_queue_walk_safe(&session->reorder_q, skb, tmp) {
if (time_after(jiffies, L2TP_SKB_CB(skb)->expires)) {
- u64_stats_update_begin(&sstats->syncp);
- sstats->rx_seq_discards++;
- sstats->rx_errors++;
- u64_stats_update_end(&sstats->syncp);
+ atomic_long_inc(&session->stats.rx_seq_discards);
+ atomic_long_inc(&session->stats.rx_errors);
l2tp_dbg(session, L2TP_MSG_SEQ,
"%s: oos pkt %u len %d discarded (too old), waiting for %u, reorder_q_len=%d\n",
session->name, L2TP_SKB_CB(skb)->ns,
struct l2tp_tunnel *tunnel = session->tunnel;
int offset;
u32 ns, nr;
- struct l2tp_stats *sstats = &session->stats;
/* The ref count is increased since we now hold a pointer to
* the session. Take care to decrement the refcnt when exiting
"%s: cookie mismatch (%u/%u). Discarding.\n",
tunnel->name, tunnel->tunnel_id,
session->session_id);
- u64_stats_update_begin(&sstats->syncp);
- sstats->rx_cookie_discards++;
- u64_stats_update_end(&sstats->syncp);
+ atomic_long_inc(&session->stats.rx_cookie_discards);
goto discard;
}
ptr += session->peer_cookie_len;
l2tp_warn(session, L2TP_MSG_SEQ,
"%s: recv data has no seq numbers when required. Discarding.\n",
session->name);
- u64_stats_update_begin(&sstats->syncp);
- sstats->rx_seq_discards++;
- u64_stats_update_end(&sstats->syncp);
+ atomic_long_inc(&session->stats.rx_seq_discards);
goto discard;
}
l2tp_warn(session, L2TP_MSG_SEQ,
"%s: recv data has no seq numbers when required. Discarding.\n",
session->name);
- u64_stats_update_begin(&sstats->syncp);
- sstats->rx_seq_discards++;
- u64_stats_update_end(&sstats->syncp);
+ atomic_long_inc(&session->stats.rx_seq_discards);
goto discard;
}
}
* packets
*/
if (L2TP_SKB_CB(skb)->ns != session->nr) {
- u64_stats_update_begin(&sstats->syncp);
- sstats->rx_seq_discards++;
- u64_stats_update_end(&sstats->syncp);
+ atomic_long_inc(&session->stats.rx_seq_discards);
l2tp_dbg(session, L2TP_MSG_SEQ,
"%s: oos pkt %u len %d discarded, waiting for %u, reorder_q_len=%d\n",
session->name, L2TP_SKB_CB(skb)->ns,
return;
discard:
- u64_stats_update_begin(&sstats->syncp);
- sstats->rx_errors++;
- u64_stats_update_end(&sstats->syncp);
+ atomic_long_inc(&session->stats.rx_errors);
kfree_skb(skb);
if (session->deref)
}
EXPORT_SYMBOL(l2tp_recv_common);
+/* Drop skbs from the session's reorder_q
+ */
+int l2tp_session_queue_purge(struct l2tp_session *session)
+{
+ struct sk_buff *skb = NULL;
+ BUG_ON(!session);
+ BUG_ON(session->magic != L2TP_SESSION_MAGIC);
+ while ((skb = skb_dequeue(&session->reorder_q))) {
+ atomic_long_inc(&session->stats.rx_errors);
+ kfree_skb(skb);
+ if (session->deref)
+ (*session->deref)(session);
+ }
+ return 0;
+}
+EXPORT_SYMBOL_GPL(l2tp_session_queue_purge);
+
/* Internal UDP receive frame. Do the real work of receiving an L2TP data frame
* here. The skb is not on a list when we get here.
* Returns 0 if the packet was a data packet and was successfully passed on.
u32 tunnel_id, session_id;
u16 version;
int length;
- struct l2tp_stats *tstats;
if (tunnel->sock && l2tp_verify_udp_checksum(tunnel->sock, skb))
goto discard_bad_csum;
discard_bad_csum:
LIMIT_NETDEBUG("%s: UDP: bad checksum\n", tunnel->name);
UDP_INC_STATS_USER(tunnel->l2tp_net, UDP_MIB_INERRORS, 0);
- tstats = &tunnel->stats;
- u64_stats_update_begin(&tstats->syncp);
- tstats->rx_errors++;
- u64_stats_update_end(&tstats->syncp);
+ atomic_long_inc(&tunnel->stats.rx_errors);
kfree_skb(skb);
return 0;
struct l2tp_tunnel *tunnel = session->tunnel;
unsigned int len = skb->len;
int error;
- struct l2tp_stats *tstats, *sstats;
/* Debug */
if (session->send_seq)
error = ip_queue_xmit(skb, fl);
/* Update stats */
- tstats = &tunnel->stats;
- u64_stats_update_begin(&tstats->syncp);
- sstats = &session->stats;
- u64_stats_update_begin(&sstats->syncp);
if (error >= 0) {
- tstats->tx_packets++;
- tstats->tx_bytes += len;
- sstats->tx_packets++;
- sstats->tx_bytes += len;
+ atomic_long_inc(&tunnel->stats.tx_packets);
+ atomic_long_add(len, &tunnel->stats.tx_bytes);
+ atomic_long_inc(&session->stats.tx_packets);
+ atomic_long_add(len, &session->stats.tx_bytes);
} else {
- tstats->tx_errors++;
- sstats->tx_errors++;
+ atomic_long_inc(&tunnel->stats.tx_errors);
+ atomic_long_inc(&session->stats.tx_errors);
}
- u64_stats_update_end(&tstats->syncp);
- u64_stats_update_end(&sstats->syncp);
return 0;
}
/* No longer an encapsulation socket. See net/ipv4/udp.c */
(udp_sk(sk))->encap_type = 0;
(udp_sk(sk))->encap_rcv = NULL;
+ (udp_sk(sk))->encap_destroy = NULL;
break;
case L2TP_ENCAPTYPE_IP:
break;
/* When the tunnel is closed, all the attached sessions need to go too.
*/
-static void l2tp_tunnel_closeall(struct l2tp_tunnel *tunnel)
+void l2tp_tunnel_closeall(struct l2tp_tunnel *tunnel)
{
int hash;
struct hlist_node *walk;
hlist_del_init(&session->hlist);
- /* Since we should hold the sock lock while
- * doing any unbinding, we need to release the
- * lock we're holding before taking that lock.
- * Hold a reference to the sock so it doesn't
- * disappear as we're jumping between locks.
- */
if (session->ref != NULL)
(*session->ref)(session);
write_unlock_bh(&tunnel->hlist_lock);
- if (tunnel->version != L2TP_HDR_VER_2) {
- struct l2tp_net *pn = l2tp_pernet(tunnel->l2tp_net);
-
- spin_lock_bh(&pn->l2tp_session_hlist_lock);
- hlist_del_init_rcu(&session->global_hlist);
- spin_unlock_bh(&pn->l2tp_session_hlist_lock);
- synchronize_rcu();
- }
+ __l2tp_session_unhash(session);
+ l2tp_session_queue_purge(session);
if (session->session_close != NULL)
(*session->session_close)(session);
if (session->deref != NULL)
(*session->deref)(session);
+ l2tp_session_dec_refcount(session);
+
write_lock_bh(&tunnel->hlist_lock);
/* Now restart from the beginning of this hash
}
write_unlock_bh(&tunnel->hlist_lock);
}
+EXPORT_SYMBOL_GPL(l2tp_tunnel_closeall);
+
+/* Tunnel socket destroy hook for UDP encapsulation */
+static void l2tp_udp_encap_destroy(struct sock *sk)
+{
+ struct l2tp_tunnel *tunnel = l2tp_sock_to_tunnel(sk);
+ if (tunnel) {
+ l2tp_tunnel_closeall(tunnel);
+ sock_put(sk);
+ }
+}
/* Really kill the tunnel.
* Come here only when all sessions have been cleared from the tunnel.
return;
sock = sk->sk_socket;
- BUG_ON(!sock);
- /* If the tunnel socket was created directly by the kernel, use the
- * sk_* API to release the socket now. Otherwise go through the
- * inet_* layer to shut the socket down, and let userspace close it.
+ /* If the tunnel socket was created by userspace, then go through the
+ * inet layer to shut the socket down, and let userspace close it.
+ * Otherwise, if we created the socket directly within the kernel, use
+ * the sk API to release it here.
* In either case the tunnel resources are freed in the socket
* destructor when the tunnel socket goes away.
*/
- if (sock->file == NULL) {
- kernel_sock_shutdown(sock, SHUT_RDWR);
- sk_release_kernel(sk);
+ if (tunnel->fd >= 0) {
+ if (sock)
+ inet_shutdown(sock, 2);
} else {
- inet_shutdown(sock, 2);
+ if (sock)
+ kernel_sock_shutdown(sock, SHUT_RDWR);
+ sk_release_kernel(sk);
}
l2tp_tunnel_sock_put(sk);
/* Mark socket as an encapsulation socket. See net/ipv4/udp.c */
udp_sk(sk)->encap_type = UDP_ENCAP_L2TPINUDP;
udp_sk(sk)->encap_rcv = l2tp_udp_encap_recv;
+ udp_sk(sk)->encap_destroy = l2tp_udp_encap_destroy;
#if IS_ENABLED(CONFIG_IPV6)
if (sk->sk_family == PF_INET6)
udpv6_encap_enable();
*/
int l2tp_tunnel_delete(struct l2tp_tunnel *tunnel)
{
+ l2tp_tunnel_closeall(tunnel);
return (false == queue_work(l2tp_wq, &tunnel->del_work));
}
EXPORT_SYMBOL_GPL(l2tp_tunnel_delete);
*/
void l2tp_session_free(struct l2tp_session *session)
{
- struct l2tp_tunnel *tunnel;
+ struct l2tp_tunnel *tunnel = session->tunnel;
BUG_ON(atomic_read(&session->ref_count) != 0);
- tunnel = session->tunnel;
- if (tunnel != NULL) {
+ if (tunnel) {
BUG_ON(tunnel->magic != L2TP_TUNNEL_MAGIC);
+ if (session->session_id != 0)
+ atomic_dec(&l2tp_session_count);
+ sock_put(tunnel->sock);
+ session->tunnel = NULL;
+ l2tp_tunnel_dec_refcount(tunnel);
+ }
+
+ kfree(session);
- /* Delete the session from the hash */
+ return;
+}
+EXPORT_SYMBOL_GPL(l2tp_session_free);
+
+/* Remove an l2tp session from l2tp_core's hash lists.
+ * Provides a tidyup interface for pseudowire code which can't just route all
+ * shutdown via l2tp_session_delete and a pseudowire-specific session_close
+ * callback.
+ */
+void __l2tp_session_unhash(struct l2tp_session *session)
+{
+ struct l2tp_tunnel *tunnel = session->tunnel;
+
+ /* Remove the session from core hashes */
+ if (tunnel) {
+ /* Remove from the per-tunnel hash */
write_lock_bh(&tunnel->hlist_lock);
hlist_del_init(&session->hlist);
write_unlock_bh(&tunnel->hlist_lock);
- /* Unlink from the global hash if not L2TPv2 */
+ /* For L2TPv3 we have a per-net hash: remove from there, too */
if (tunnel->version != L2TP_HDR_VER_2) {
struct l2tp_net *pn = l2tp_pernet(tunnel->l2tp_net);
-
spin_lock_bh(&pn->l2tp_session_hlist_lock);
hlist_del_init_rcu(&session->global_hlist);
spin_unlock_bh(&pn->l2tp_session_hlist_lock);
synchronize_rcu();
}
-
- if (session->session_id != 0)
- atomic_dec(&l2tp_session_count);
-
- sock_put(tunnel->sock);
-
- /* This will delete the tunnel context if this
- * is the last session on the tunnel.
- */
- session->tunnel = NULL;
- l2tp_tunnel_dec_refcount(tunnel);
}
-
- kfree(session);
-
- return;
}
-EXPORT_SYMBOL_GPL(l2tp_session_free);
+EXPORT_SYMBOL_GPL(__l2tp_session_unhash);
/* This function is used by the netlink SESSION_DELETE command and by
pseudowire modules.
*/
int l2tp_session_delete(struct l2tp_session *session)
{
+ if (session->ref)
+ (*session->ref)(session);
+ __l2tp_session_unhash(session);
+ l2tp_session_queue_purge(session);
if (session->session_close != NULL)
(*session->session_close)(session);
-
+ if (session->deref)
+ (*session->deref)(session);
l2tp_session_dec_refcount(session);
-
return 0;
}
EXPORT_SYMBOL_GPL(l2tp_session_delete);
-
/* We come here whenever a session's send_seq, cookie_len or
* l2specific_len parameters are set.
*/
struct sk_buff;
struct l2tp_stats {
- u64 tx_packets;
- u64 tx_bytes;
- u64 tx_errors;
- u64 rx_packets;
- u64 rx_bytes;
- u64 rx_seq_discards;
- u64 rx_oos_packets;
- u64 rx_errors;
- u64 rx_cookie_discards;
- struct u64_stats_sync syncp;
+ atomic_long_t tx_packets;
+ atomic_long_t tx_bytes;
+ atomic_long_t tx_errors;
+ atomic_long_t rx_packets;
+ atomic_long_t rx_bytes;
+ atomic_long_t rx_seq_discards;
+ atomic_long_t rx_oos_packets;
+ atomic_long_t rx_errors;
+ atomic_long_t rx_cookie_discards;
};
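
Converting l2tp_stats from u64 counters guarded by u64_stats_sync to atomic_long_t gives up 64-bit width on 32-bit machines but makes every update lock-free from any context, which is what removes the deadlock this series addresses. The resulting update/read idiom is plain atomic arithmetic; a C11 analogue of the same shape:

    #include <stdio.h>
    #include <stdatomic.h>

    /* Writers just add, readers just load: no seqcount, no writer-side
     * exclusion needed, at the cost of "long"-width counters.
     */
    struct stats {
        atomic_long tx_packets;
        atomic_long tx_bytes;
    };

    static void count_tx(struct stats *s, long len)
    {
        atomic_fetch_add_explicit(&s->tx_packets, 1, memory_order_relaxed);
        atomic_fetch_add_explicit(&s->tx_bytes, len, memory_order_relaxed);
    }

    int main(void)
    {
        struct stats s = {0};

        count_tx(&s, 1500);
        count_tx(&s, 60);
        printf("pkts=%ld bytes=%ld\n",
               atomic_load_explicit(&s.tx_packets, memory_order_relaxed),
               atomic_load_explicit(&s.tx_bytes, memory_order_relaxed));
        return 0;
    }
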
struct l2tp_tunnel;
extern struct l2tp_tunnel *l2tp_tunnel_find_nth(struct net *net, int nth);
extern int l2tp_tunnel_create(struct net *net, int fd, int version, u32 tunnel_id, u32 peer_tunnel_id, struct l2tp_tunnel_cfg *cfg, struct l2tp_tunnel **tunnelp);
+extern void l2tp_tunnel_closeall(struct l2tp_tunnel *tunnel);
extern int l2tp_tunnel_delete(struct l2tp_tunnel *tunnel);
extern struct l2tp_session *l2tp_session_create(int priv_size, struct l2tp_tunnel *tunnel, u32 session_id, u32 peer_session_id, struct l2tp_session_cfg *cfg);
+extern void __l2tp_session_unhash(struct l2tp_session *session);
extern int l2tp_session_delete(struct l2tp_session *session);
extern void l2tp_session_free(struct l2tp_session *session);
extern void l2tp_recv_common(struct l2tp_session *session, struct sk_buff *skb, unsigned char *ptr, unsigned char *optr, u16 hdrflags, int length, int (*payload_hook)(struct sk_buff *skb));
+extern int l2tp_session_queue_purge(struct l2tp_session *session);
extern int l2tp_udp_encap_recv(struct sock *sk, struct sk_buff *skb);
extern int l2tp_xmit_skb(struct l2tp_session *session, struct sk_buff *skb, int hdr_len);
tunnel->sock ? atomic_read(&tunnel->sock->sk_refcnt) : 0,
atomic_read(&tunnel->ref_count));
- seq_printf(m, " %08x rx %llu/%llu/%llu rx %llu/%llu/%llu\n",
+ seq_printf(m, " %08x rx %ld/%ld/%ld rx %ld/%ld/%ld\n",
tunnel->debug,
- (unsigned long long)tunnel->stats.tx_packets,
- (unsigned long long)tunnel->stats.tx_bytes,
- (unsigned long long)tunnel->stats.tx_errors,
- (unsigned long long)tunnel->stats.rx_packets,
- (unsigned long long)tunnel->stats.rx_bytes,
- (unsigned long long)tunnel->stats.rx_errors);
+ atomic_long_read(&tunnel->stats.tx_packets),
+ atomic_long_read(&tunnel->stats.tx_bytes),
+ atomic_long_read(&tunnel->stats.tx_errors),
+ atomic_long_read(&tunnel->stats.rx_packets),
+ atomic_long_read(&tunnel->stats.rx_bytes),
+ atomic_long_read(&tunnel->stats.rx_errors));
if (tunnel->show != NULL)
tunnel->show(m, tunnel);
seq_printf(m, "\n");
}
- seq_printf(m, " %hu/%hu tx %llu/%llu/%llu rx %llu/%llu/%llu\n",
+ seq_printf(m, " %hu/%hu tx %ld/%ld/%ld rx %ld/%ld/%ld\n",
session->nr, session->ns,
- (unsigned long long)session->stats.tx_packets,
- (unsigned long long)session->stats.tx_bytes,
- (unsigned long long)session->stats.tx_errors,
- (unsigned long long)session->stats.rx_packets,
- (unsigned long long)session->stats.rx_bytes,
- (unsigned long long)session->stats.rx_errors);
+ atomic_long_read(&session->stats.tx_packets),
+ atomic_long_read(&session->stats.tx_bytes),
+ atomic_long_read(&session->stats.tx_errors),
+ atomic_long_read(&session->stats.rx_packets),
+ atomic_long_read(&session->stats.rx_bytes),
+ atomic_long_read(&session->stats.rx_errors));
if (session->show != NULL)
session->show(m, session);
static void l2tp_ip_destroy_sock(struct sock *sk)
{
struct sk_buff *skb;
+ struct l2tp_tunnel *tunnel = l2tp_sock_to_tunnel(sk);
while ((skb = __skb_dequeue_tail(&sk->sk_write_queue)) != NULL)
kfree_skb(skb);
+ if (tunnel) {
+ l2tp_tunnel_closeall(tunnel);
+ sock_put(sk);
+ }
+
sk_refcnt_debug_dec(sk);
}
static void l2tp_ip6_destroy_sock(struct sock *sk)
{
+ struct l2tp_tunnel *tunnel = l2tp_sock_to_tunnel(sk);
+
lock_sock(sk);
ip6_flush_pending_frames(sk);
release_sock(sk);
+ if (tunnel) {
+ l2tp_tunnel_closeall(tunnel);
+ sock_put(sk);
+ }
+
inet6_destroy_sock(sk);
}
lsa->l2tp_addr = ipv6_hdr(skb)->saddr;
lsa->l2tp_flowinfo = 0;
lsa->l2tp_scope_id = 0;
+ lsa->l2tp_conn_id = 0;
if (ipv6_addr_type(&lsa->l2tp_addr) & IPV6_ADDR_LINKLOCAL)
lsa->l2tp_scope_id = IP6CB(skb)->iif;
}
#if IS_ENABLED(CONFIG_IPV6)
struct ipv6_pinfo *np = NULL;
#endif
- struct l2tp_stats stats;
- unsigned int start;
hdr = genlmsg_put(skb, portid, seq, &l2tp_nl_family, flags,
L2TP_CMD_TUNNEL_GET);
if (nest == NULL)
goto nla_put_failure;
- do {
- start = u64_stats_fetch_begin(&tunnel->stats.syncp);
- stats.tx_packets = tunnel->stats.tx_packets;
- stats.tx_bytes = tunnel->stats.tx_bytes;
- stats.tx_errors = tunnel->stats.tx_errors;
- stats.rx_packets = tunnel->stats.rx_packets;
- stats.rx_bytes = tunnel->stats.rx_bytes;
- stats.rx_errors = tunnel->stats.rx_errors;
- stats.rx_seq_discards = tunnel->stats.rx_seq_discards;
- stats.rx_oos_packets = tunnel->stats.rx_oos_packets;
- } while (u64_stats_fetch_retry(&tunnel->stats.syncp, start));
-
- if (nla_put_u64(skb, L2TP_ATTR_TX_PACKETS, stats.tx_packets) ||
- nla_put_u64(skb, L2TP_ATTR_TX_BYTES, stats.tx_bytes) ||
- nla_put_u64(skb, L2TP_ATTR_TX_ERRORS, stats.tx_errors) ||
- nla_put_u64(skb, L2TP_ATTR_RX_PACKETS, stats.rx_packets) ||
- nla_put_u64(skb, L2TP_ATTR_RX_BYTES, stats.rx_bytes) ||
+ if (nla_put_u64(skb, L2TP_ATTR_TX_PACKETS,
+ atomic_long_read(&tunnel->stats.tx_packets)) ||
+ nla_put_u64(skb, L2TP_ATTR_TX_BYTES,
+ atomic_long_read(&tunnel->stats.tx_bytes)) ||
+ nla_put_u64(skb, L2TP_ATTR_TX_ERRORS,
+ atomic_long_read(&tunnel->stats.tx_errors)) ||
+ nla_put_u64(skb, L2TP_ATTR_RX_PACKETS,
+ atomic_long_read(&tunnel->stats.rx_packets)) ||
+ nla_put_u64(skb, L2TP_ATTR_RX_BYTES,
+ atomic_long_read(&tunnel->stats.rx_bytes)) ||
nla_put_u64(skb, L2TP_ATTR_RX_SEQ_DISCARDS,
- stats.rx_seq_discards) ||
+ atomic_long_read(&tunnel->stats.rx_seq_discards)) ||
nla_put_u64(skb, L2TP_ATTR_RX_OOS_PACKETS,
- stats.rx_oos_packets) ||
- nla_put_u64(skb, L2TP_ATTR_RX_ERRORS, stats.rx_errors))
+ atomic_long_read(&tunnel->stats.rx_oos_packets)) ||
+ nla_put_u64(skb, L2TP_ATTR_RX_ERRORS,
+ atomic_long_read(&tunnel->stats.rx_errors)))
goto nla_put_failure;
nla_nest_end(skb, nest);
struct nlattr *nest;
struct l2tp_tunnel *tunnel = session->tunnel;
struct sock *sk = NULL;
- struct l2tp_stats stats;
- unsigned int start;
sk = tunnel->sock;
if (nest == NULL)
goto nla_put_failure;
- do {
- start = u64_stats_fetch_begin(&session->stats.syncp);
- stats.tx_packets = session->stats.tx_packets;
- stats.tx_bytes = session->stats.tx_bytes;
- stats.tx_errors = session->stats.tx_errors;
- stats.rx_packets = session->stats.rx_packets;
- stats.rx_bytes = session->stats.rx_bytes;
- stats.rx_errors = session->stats.rx_errors;
- stats.rx_seq_discards = session->stats.rx_seq_discards;
- stats.rx_oos_packets = session->stats.rx_oos_packets;
- } while (u64_stats_fetch_retry(&session->stats.syncp, start));
-
- if (nla_put_u64(skb, L2TP_ATTR_TX_PACKETS, stats.tx_packets) ||
- nla_put_u64(skb, L2TP_ATTR_TX_BYTES, stats.tx_bytes) ||
- nla_put_u64(skb, L2TP_ATTR_TX_ERRORS, stats.tx_errors) ||
- nla_put_u64(skb, L2TP_ATTR_RX_PACKETS, stats.rx_packets) ||
- nla_put_u64(skb, L2TP_ATTR_RX_BYTES, stats.rx_bytes) ||
+ if (nla_put_u64(skb, L2TP_ATTR_TX_PACKETS,
+ atomic_long_read(&session->stats.tx_packets)) ||
+ nla_put_u64(skb, L2TP_ATTR_TX_BYTES,
+ atomic_long_read(&session->stats.tx_bytes)) ||
+ nla_put_u64(skb, L2TP_ATTR_TX_ERRORS,
+ atomic_long_read(&session->stats.tx_errors)) ||
+ nla_put_u64(skb, L2TP_ATTR_RX_PACKETS,
+ atomic_long_read(&session->stats.rx_packets)) ||
+ nla_put_u64(skb, L2TP_ATTR_RX_BYTES,
+ atomic_long_read(&session->stats.rx_bytes)) ||
nla_put_u64(skb, L2TP_ATTR_RX_SEQ_DISCARDS,
- stats.rx_seq_discards) ||
+ atomic_long_read(&session->stats.rx_seq_discards)) ||
nla_put_u64(skb, L2TP_ATTR_RX_OOS_PACKETS,
- stats.rx_oos_packets) ||
- nla_put_u64(skb, L2TP_ATTR_RX_ERRORS, stats.rx_errors))
+ atomic_long_read(&session->stats.rx_oos_packets)) ||
+ nla_put_u64(skb, L2TP_ATTR_RX_ERRORS,
+ atomic_long_read(&session->stats.rx_errors)))
goto nla_put_failure;
nla_nest_end(skb, nest);
#include <net/ip.h>
#include <net/udp.h>
#include <net/xfrm.h>
+#include <net/inet_common.h>
#include <asm/byteorder.h>
#include <linux/atomic.h>
session->name);
/* Not bound. Nothing we can do, so discard. */
- session->stats.rx_errors++;
+ atomic_long_inc(&session->stats.rx_errors);
kfree_skb(skb);
}
{
struct pppol2tp_session *ps = l2tp_session_priv(session);
struct sock *sk = ps->sock;
- struct sk_buff *skb;
+ struct socket *sock = sk->sk_socket;
BUG_ON(session->magic != L2TP_SESSION_MAGIC);
- if (session->session_id == 0)
- goto out;
-
- if (sk != NULL) {
- lock_sock(sk);
-
- if (sk->sk_state & (PPPOX_CONNECTED | PPPOX_BOUND)) {
- pppox_unbind_sock(sk);
- sk->sk_state = PPPOX_DEAD;
- sk->sk_state_change(sk);
- }
-
- /* Purge any queued data */
- skb_queue_purge(&sk->sk_receive_queue);
- skb_queue_purge(&sk->sk_write_queue);
- while ((skb = skb_dequeue(&session->reorder_q))) {
- kfree_skb(skb);
- sock_put(sk);
- }
- release_sock(sk);
+ if (sock) {
+ inet_shutdown(sock, 2);
+ /* Don't let the session go away before our socket does */
+ l2tp_session_inc_refcount(session);
}
-
-out:
return;
}
*/
static void pppol2tp_session_destruct(struct sock *sk)
{
- struct l2tp_session *session;
-
- if (sk->sk_user_data != NULL) {
- session = sk->sk_user_data;
- if (session == NULL)
- goto out;
-
+ struct l2tp_session *session = sk->sk_user_data;
+ if (session) {
sk->sk_user_data = NULL;
BUG_ON(session->magic != L2TP_SESSION_MAGIC);
l2tp_session_dec_refcount(session);
}
-
-out:
return;
}
session = pppol2tp_sock_to_session(sk);
/* Purge any queued data */
- skb_queue_purge(&sk->sk_receive_queue);
- skb_queue_purge(&sk->sk_write_queue);
if (session != NULL) {
- struct sk_buff *skb;
- while ((skb = skb_dequeue(&session->reorder_q))) {
- kfree_skb(skb);
- sock_put(sk);
- }
+ __l2tp_session_unhash(session);
+ l2tp_session_queue_purge(session);
sock_put(sk);
}
+ skb_queue_purge(&sk->sk_receive_queue);
+ skb_queue_purge(&sk->sk_write_queue);
release_sock(sk);
return error;
}
-/* Called when deleting sessions via the netlink interface.
- */
-static int pppol2tp_session_delete(struct l2tp_session *session)
-{
- struct pppol2tp_session *ps = l2tp_session_priv(session);
-
- if (ps->sock == NULL)
- l2tp_session_dec_refcount(session);
-
- return 0;
-}
-
#endif /* CONFIG_L2TP_V3 */
/* getname() support.
static void pppol2tp_copy_stats(struct pppol2tp_ioc_stats *dest,
struct l2tp_stats *stats)
{
- dest->tx_packets = stats->tx_packets;
- dest->tx_bytes = stats->tx_bytes;
- dest->tx_errors = stats->tx_errors;
- dest->rx_packets = stats->rx_packets;
- dest->rx_bytes = stats->rx_bytes;
- dest->rx_seq_discards = stats->rx_seq_discards;
- dest->rx_oos_packets = stats->rx_oos_packets;
- dest->rx_errors = stats->rx_errors;
+ dest->tx_packets = atomic_long_read(&stats->tx_packets);
+ dest->tx_bytes = atomic_long_read(&stats->tx_bytes);
+ dest->tx_errors = atomic_long_read(&stats->tx_errors);
+ dest->rx_packets = atomic_long_read(&stats->rx_packets);
+ dest->rx_bytes = atomic_long_read(&stats->rx_bytes);
+ dest->rx_seq_discards = atomic_long_read(&stats->rx_seq_discards);
+ dest->rx_oos_packets = atomic_long_read(&stats->rx_oos_packets);
+ dest->rx_errors = atomic_long_read(&stats->rx_errors);
}
/* Session ioctl helper.
tunnel->name,
(tunnel == tunnel->sock->sk_user_data) ? 'Y' : 'N',
atomic_read(&tunnel->ref_count) - 1);
- seq_printf(m, " %08x %llu/%llu/%llu %llu/%llu/%llu\n",
+ seq_printf(m, " %08x %ld/%ld/%ld %ld/%ld/%ld\n",
tunnel->debug,
- (unsigned long long)tunnel->stats.tx_packets,
- (unsigned long long)tunnel->stats.tx_bytes,
- (unsigned long long)tunnel->stats.tx_errors,
- (unsigned long long)tunnel->stats.rx_packets,
- (unsigned long long)tunnel->stats.rx_bytes,
- (unsigned long long)tunnel->stats.rx_errors);
+ atomic_long_read(&tunnel->stats.tx_packets),
+ atomic_long_read(&tunnel->stats.tx_bytes),
+ atomic_long_read(&tunnel->stats.tx_errors),
+ atomic_long_read(&tunnel->stats.rx_packets),
+ atomic_long_read(&tunnel->stats.rx_bytes),
+ atomic_long_read(&tunnel->stats.rx_errors));
}
static void pppol2tp_seq_session_show(struct seq_file *m, void *v)
session->lns_mode ? "LNS" : "LAC",
session->debug,
jiffies_to_msecs(session->reorder_timeout));
- seq_printf(m, " %hu/%hu %llu/%llu/%llu %llu/%llu/%llu\n",
+ seq_printf(m, " %hu/%hu %ld/%ld/%ld %ld/%ld/%ld\n",
session->nr, session->ns,
- (unsigned long long)session->stats.tx_packets,
- (unsigned long long)session->stats.tx_bytes,
- (unsigned long long)session->stats.tx_errors,
- (unsigned long long)session->stats.rx_packets,
- (unsigned long long)session->stats.rx_bytes,
- (unsigned long long)session->stats.rx_errors);
+ atomic_long_read(&session->stats.tx_packets),
+ atomic_long_read(&session->stats.tx_bytes),
+ atomic_long_read(&session->stats.tx_errors),
+ atomic_long_read(&session->stats.rx_packets),
+ atomic_long_read(&session->stats.rx_bytes),
+ atomic_long_read(&session->stats.rx_errors));
if (po)
seq_printf(m, " interface %s\n", ppp_dev_name(&po->chan));
static const struct l2tp_nl_cmd_ops pppol2tp_nl_cmd_ops = {
.session_create = pppol2tp_session_create,
- .session_delete = pppol2tp_session_delete,
+ .session_delete = l2tp_session_delete,
};
#endif /* CONFIG_L2TP_V3 */
int target; /* Read at least this many bytes */
long timeo;
+ msg->msg_namelen = 0;
+
lock_sock(sk);
copied = -ENOTCONN;
if (unlikely(sk->sk_type == SOCK_STREAM && sk->sk_state == TCP_LISTEN))
list_del(&dep->list);
mutex_unlock(&local->mtx);
- ieee80211_roc_notify_destroy(dep);
+ ieee80211_roc_notify_destroy(dep, true);
return 0;
}
ieee80211_start_next_roc(local);
mutex_unlock(&local->mtx);
- ieee80211_roc_notify_destroy(found);
+ ieee80211_roc_notify_destroy(found, true);
} else {
/* work may be pending so use it all the time */
found->abort = true;
/* work will clean up etc */
flush_delayed_work(&found->work);
+ WARN_ON(!found->to_be_freed);
+ kfree(found);
}
return 0;
enum ieee80211_chanctx_mode mode)
{
struct ieee80211_chanctx *ctx;
+ u32 changed;
int err;
lockdep_assert_held(&local->chanctx_mtx);
ctx->conf.rx_chains_dynamic = 1;
ctx->mode = mode;
+ /* acquire mutex to prevent idle from changing */
+ mutex_lock(&local->mtx);
+ /* turn idle off *before* setting channel -- some drivers need that */
+ changed = ieee80211_idle_off(local);
+ if (changed)
+ ieee80211_hw_config(local, changed);
+
if (!local->use_chanctx) {
local->_oper_channel_type =
cfg80211_get_chandef_type(chandef);
err = drv_add_chanctx(local, ctx);
if (err) {
kfree(ctx);
- return ERR_PTR(err);
+ ctx = ERR_PTR(err);
+
+ ieee80211_recalc_idle(local);
+ goto out;
}
}
+ /* and keep the mutex held until the new chanctx is on the list */
list_add_rcu(&ctx->list, &local->chanctx_list);
- mutex_lock(&local->mtx);
- ieee80211_recalc_idle(local);
+ out:
mutex_unlock(&local->mtx);
return ctx;
struct ieee80211_channel *chan;
bool started, abort, hw_begun, notified;
+ bool to_be_freed;
unsigned long hw_start_time;
void ieee80211_roc_setup(struct ieee80211_local *local);
void ieee80211_start_next_roc(struct ieee80211_local *local);
void ieee80211_roc_purge(struct ieee80211_sub_if_data *sdata);
-void ieee80211_roc_notify_destroy(struct ieee80211_roc_work *roc);
+void ieee80211_roc_notify_destroy(struct ieee80211_roc_work *roc, bool free);
void ieee80211_sw_roc_work(struct work_struct *work);
void ieee80211_handle_roc_started(struct ieee80211_roc_work *roc);
enum nl80211_iftype type);
void ieee80211_if_remove(struct ieee80211_sub_if_data *sdata);
void ieee80211_remove_interfaces(struct ieee80211_local *local);
+u32 ieee80211_idle_off(struct ieee80211_local *local);
void ieee80211_recalc_idle(struct ieee80211_local *local);
void ieee80211_adjust_monitor_flags(struct ieee80211_sub_if_data *sdata,
const int offset);
ieee80211_bss_info_change_notify(sdata, BSS_CHANGED_TXPOWER);
}
-static u32 ieee80211_idle_off(struct ieee80211_local *local)
+static u32 __ieee80211_idle_off(struct ieee80211_local *local)
{
if (!(local->hw.conf.flags & IEEE80211_CONF_IDLE))
return 0;
return IEEE80211_CONF_CHANGE_IDLE;
}
-static u32 ieee80211_idle_on(struct ieee80211_local *local)
+static u32 __ieee80211_idle_on(struct ieee80211_local *local)
{
if (local->hw.conf.flags & IEEE80211_CONF_IDLE)
return 0;
return IEEE80211_CONF_CHANGE_IDLE;
}
-void ieee80211_recalc_idle(struct ieee80211_local *local)
+static u32 __ieee80211_recalc_idle(struct ieee80211_local *local,
+ bool force_active)
{
bool working = false, scanning, active;
unsigned int led_trig_start = 0, led_trig_stop = 0;
struct ieee80211_roc_work *roc;
- u32 change;
lockdep_assert_held(&local->mtx);
- active = !list_empty(&local->chanctx_list) || local->monitors;
+ active = force_active ||
+ !list_empty(&local->chanctx_list) ||
+ local->monitors;
if (!local->ops->remain_on_channel) {
list_for_each_entry(roc, &local->roc_list, list) {
ieee80211_mod_tpt_led_trig(local, led_trig_start, led_trig_stop);
if (working || scanning || active)
- change = ieee80211_idle_off(local);
- else
- change = ieee80211_idle_on(local);
+ return __ieee80211_idle_off(local);
+ return __ieee80211_idle_on(local);
+}
+
+u32 ieee80211_idle_off(struct ieee80211_local *local)
+{
+ return __ieee80211_recalc_idle(local, true);
+}
+
+void ieee80211_recalc_idle(struct ieee80211_local *local)
+{
+ u32 change = __ieee80211_recalc_idle(local, false);
if (change)
ieee80211_hw_config(local, change);
}
static int ieee80211_add_virtual_monitor(struct ieee80211_local *local)
{
struct ieee80211_sub_if_data *sdata;
- int ret = 0;
+ int ret;
if (!(local->hw.flags & IEEE80211_HW_WANT_MONITOR_VIF))
return 0;
- mutex_lock(&local->iflist_mtx);
+ ASSERT_RTNL();
if (local->monitor_sdata)
- goto out_unlock;
+ return 0;
sdata = kzalloc(sizeof(*sdata) + local->hw.vif_data_size, GFP_KERNEL);
- if (!sdata) {
- ret = -ENOMEM;
- goto out_unlock;
- }
+ if (!sdata)
+ return -ENOMEM;
/* set up data */
sdata->local = local;
if (WARN_ON(ret)) {
/* ok .. stupid driver, it asked for this! */
kfree(sdata);
- goto out_unlock;
+ return ret;
}
ret = ieee80211_check_queues(sdata);
if (ret) {
kfree(sdata);
- goto out_unlock;
+ return ret;
}
ret = ieee80211_vif_use_channel(sdata, &local->monitor_chandef,
if (ret) {
drv_remove_interface(local, sdata);
kfree(sdata);
- goto out_unlock;
+ return ret;
}
+ mutex_lock(&local->iflist_mtx);
rcu_assign_pointer(local->monitor_sdata, sdata);
- out_unlock:
mutex_unlock(&local->iflist_mtx);
- return ret;
+
+ return 0;
}
static void ieee80211_del_virtual_monitor(struct ieee80211_local *local)
if (!(local->hw.flags & IEEE80211_HW_WANT_MONITOR_VIF))
return;
+ ASSERT_RTNL();
+
mutex_lock(&local->iflist_mtx);
sdata = rcu_dereference_protected(local->monitor_sdata,
lockdep_is_held(&local->iflist_mtx));
- if (!sdata)
- goto out_unlock;
+ if (!sdata) {
+ mutex_unlock(&local->iflist_mtx);
+ return;
+ }
rcu_assign_pointer(local->monitor_sdata, NULL);
+ mutex_unlock(&local->iflist_mtx);
+
synchronize_net();
ieee80211_vif_release_channel(sdata);
drv_remove_interface(local, sdata);
kfree(sdata);
- out_unlock:
- mutex_unlock(&local->iflist_mtx);
}
/*
rcu_read_lock();
list_for_each_entry_rcu(sdata, &local->interfaces, list)
- if (ieee80211_vif_is_mesh(&sdata->vif))
+ if (ieee80211_vif_is_mesh(&sdata->vif) &&
+ ieee80211_sdata_running(sdata))
ieee80211_queue_work(&local->hw, &sdata->work);
rcu_read_unlock();
}
/* Restart STA timers */
rcu_read_lock();
- list_for_each_entry_rcu(sdata, &local->interfaces, list)
- ieee80211_restart_sta_timer(sdata);
+ list_for_each_entry_rcu(sdata, &local->interfaces, list) {
+ if (ieee80211_sdata_running(sdata))
+ ieee80211_restart_sta_timer(sdata);
+ }
rcu_read_unlock();
}
/* prep auth_data so we don't go into idle on disassoc */
ifmgd->auth_data = auth_data;
- if (ifmgd->associated)
- ieee80211_set_disassoc(sdata, 0, 0, false, NULL);
+ if (ifmgd->associated) {
+ u8 frame_buf[IEEE80211_DEAUTH_FRAME_LEN];
+
+ ieee80211_set_disassoc(sdata, IEEE80211_STYPE_DEAUTH,
+ WLAN_REASON_UNSPECIFIED,
+ false, frame_buf);
+
+ __cfg80211_send_deauth(sdata->dev, frame_buf,
+ sizeof(frame_buf));
+ }
sdata_info(sdata, "authenticate with %pM\n", req->bss->bssid);
mutex_lock(&ifmgd->mtx);
- if (ifmgd->associated)
- ieee80211_set_disassoc(sdata, 0, 0, false, NULL);
+ if (ifmgd->associated) {
+ u8 frame_buf[IEEE80211_DEAUTH_FRAME_LEN];
+
+ ieee80211_set_disassoc(sdata, IEEE80211_STYPE_DEAUTH,
+ WLAN_REASON_UNSPECIFIED,
+ false, frame_buf);
+
+ __cfg80211_send_deauth(sdata->dev, frame_buf,
+ sizeof(frame_buf));
+ }
if (ifmgd->auth_data && !ifmgd->auth_data->done) {
err = -EBUSY;
}
}
-void ieee80211_roc_notify_destroy(struct ieee80211_roc_work *roc)
+void ieee80211_roc_notify_destroy(struct ieee80211_roc_work *roc, bool free)
{
struct ieee80211_roc_work *dep, *tmp;
+ if (WARN_ON(roc->to_be_freed))
+ return;
+
/* was never transmitted */
if (roc->frame) {
cfg80211_mgmt_tx_status(&roc->sdata->wdev,
GFP_KERNEL);
list_for_each_entry_safe(dep, tmp, &roc->dependents, list)
- ieee80211_roc_notify_destroy(dep);
+ ieee80211_roc_notify_destroy(dep, true);
- kfree(roc);
+ if (free)
+ kfree(roc);
+ else
+ roc->to_be_freed = true;
}
void ieee80211_sw_roc_work(struct work_struct *work)
mutex_lock(&local->mtx);
+ if (roc->to_be_freed)
+ goto out_unlock;
+
if (roc->abort)
goto finish;
finish:
list_del(&roc->list);
started = roc->started;
- ieee80211_roc_notify_destroy(roc);
+ ieee80211_roc_notify_destroy(roc, !roc->abort);
if (started) {
drv_flush(local, false);
list_del(&roc->list);
- ieee80211_roc_notify_destroy(roc);
+ ieee80211_roc_notify_destroy(roc, true);
/* if there's another roc, start it now */
ieee80211_start_next_roc(local);
list_for_each_entry_safe(roc, tmp, &tmp_list, list) {
if (local->ops->remain_on_channel) {
list_del(&roc->list);
- ieee80211_roc_notify_destroy(roc);
+ ieee80211_roc_notify_destroy(roc, true);
} else {
ieee80211_queue_delayed_work(&local->hw, &roc->work, 0);
/* work will clean up etc */
flush_delayed_work(&roc->work);
+ WARN_ON(!roc->to_be_freed);
+ kfree(roc);
}
}
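
The to_be_freed flag turns the old unconditional kfree() into a two-phase handoff: ieee80211_roc_notify_destroy() either frees the item immediately (free == true) or just marks it, and whichever context finishes with the item last, the flushed work here, performs the real kfree() after checking the mark. A compact model of that handoff (userspace sketch, stand-in types):

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdbool.h>

    struct roc {
        bool to_be_freed;
    };

    /* Either free the object now, or mark it and leave the final free
     * to the last user.
     */
    static void notify_destroy(struct roc *r, bool free_now)
    {
        if (free_now)
            free(r);
        else
            r->to_be_freed = true;  /* another context still holds it */
    }

    int main(void)
    {
        struct roc *r = calloc(1, sizeof(*r));

        if (!r)
            return 1;
        notify_destroy(r, false);   /* canceller: work not finished yet */
        /* ... the flushed work completes ... */
        if (r->to_be_freed)         /* the WARN_ON(!...) analogue */
            free(r);
        return 0;
    }
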
memset(nskb->cb, 0, sizeof(nskb->cb));
- ieee80211_tx_skb(rx->sdata, nskb);
+ if (rx->sdata->vif.type == NL80211_IFTYPE_P2P_DEVICE) {
+ struct ieee80211_tx_info *info = IEEE80211_SKB_CB(nskb);
+
+ info->flags = IEEE80211_TX_CTL_TX_OFFCHAN |
+ IEEE80211_TX_INTFL_OFFCHAN_TX_OK |
+ IEEE80211_TX_CTL_NO_CCK_RATE;
+ if (local->hw.flags & IEEE80211_HW_QUEUE_CONTROL)
+ info->hw_queue =
+ local->hw.offchannel_tx_hw_queue;
+ }
+
+ __ieee80211_tx_skb_tid_band(rx->sdata, nskb, 7,
+ status->band);
}
dev_kfree_skb(rx->skb);
return RX_QUEUED;
struct ieee80211_local *local;
struct ieee80211_sub_if_data *sdata;
int ret, i;
+ bool have_key = false;
might_sleep();
list_del_rcu(&sta->list);
mutex_lock(&local->key_mtx);
- for (i = 0; i < NUM_DEFAULT_KEYS; i++)
+ for (i = 0; i < NUM_DEFAULT_KEYS; i++) {
__ieee80211_key_free(key_mtx_dereference(local, sta->gtk[i]));
- if (sta->ptk)
+ have_key = true;
+ }
+ if (sta->ptk) {
__ieee80211_key_free(key_mtx_dereference(local, sta->ptk));
+ have_key = true;
+ }
mutex_unlock(&local->key_mtx);
+ if (!have_key)
+ synchronize_net();
+
sta->dead = true;
local->num_sta--;
nla_put_failure:
nla_nest_cancel(skb, nested);
ipset_nest_end(skb, atd);
- return -EMSGSIZE;
+ if (unlikely(id == first)) {
+ cb->args[2] = 0;
+ return -EMSGSIZE;
+ }
+ return 0;
}
static int
dst->nomatch = !!(flags & IPSET_FLAG_NOMATCH);
}
+static inline void
+hash_ipportnet4_data_reset_flags(struct hash_ipportnet4_elem *dst, u32 *flags)
+{
+ if (dst->nomatch) {
+ *flags = IPSET_FLAG_NOMATCH;
+ dst->nomatch = 0;
+ }
+}
+
static inline int
hash_ipportnet4_data_match(const struct hash_ipportnet4_elem *elem)
{
dst->nomatch = !!(flags & IPSET_FLAG_NOMATCH);
}
+static inline void
+hash_ipportnet6_data_reset_flags(struct hash_ipportnet6_elem *dst, u32 *flags)
+{
+ if (dst->nomatch) {
+ *flags = IPSET_FLAG_NOMATCH;
+ dst->nomatch = 0;
+ }
+}
+
static inline int
hash_ipportnet6_data_match(const struct hash_ipportnet6_elem *elem)
{
static inline void
hash_net4_data_flags(struct hash_net4_elem *dst, u32 flags)
{
- dst->nomatch = flags & IPSET_FLAG_NOMATCH;
+ dst->nomatch = !!(flags & IPSET_FLAG_NOMATCH);
+}
+
+static inline void
+hash_net4_data_reset_flags(struct hash_net4_elem *dst, u32 *flags)
+{
+ if (dst->nomatch) {
+ *flags = IPSET_FLAG_NOMATCH;
+ dst->nomatch = 0;
+ }
}
static inline int
static inline void
hash_net6_data_flags(struct hash_net6_elem *dst, u32 flags)
{
- dst->nomatch = flags & IPSET_FLAG_NOMATCH;
+ dst->nomatch = !!(flags & IPSET_FLAG_NOMATCH);
+}
+
+static inline void
+hash_net6_data_reset_flags(struct hash_net6_elem *dst, u32 *flags)
+{
+ if (dst->nomatch) {
+ *flags = IPSET_FLAG_NOMATCH;
+ dst->nomatch = 0;
+ }
}
static inline int
static inline void
hash_netiface4_data_flags(struct hash_netiface4_elem *dst, u32 flags)
{
- dst->nomatch = flags & IPSET_FLAG_NOMATCH;
+ dst->nomatch = !!(flags & IPSET_FLAG_NOMATCH);
+}
+
+static inline void
+hash_netiface4_data_reset_flags(struct hash_netiface4_elem *dst, u32 *flags)
+{
+ if (dst->nomatch) {
+ *flags = IPSET_FLAG_NOMATCH;
+ dst->nomatch = 0;
+ }
}
static inline int
static inline void
hash_netiface6_data_flags(struct hash_netiface6_elem *dst, u32 flags)
{
- dst->nomatch = flags & IPSET_FLAG_NOMATCH;
+ dst->nomatch = !!(flags & IPSET_FLAG_NOMATCH);
}
static inline int
return elem->nomatch ? -ENOTEMPTY : 1;
}
+static inline void
+hash_netiface6_data_reset_flags(struct hash_netiface6_elem *dst, u32 *flags)
+{
+ if (dst->nomatch) {
+ *flags = IPSET_FLAG_NOMATCH;
+ dst->nomatch = 0;
+ }
+}
+
static inline void
hash_netiface6_data_zero_out(struct hash_netiface6_elem *elem)
{
dst->nomatch = !!(flags & IPSET_FLAG_NOMATCH);
}
+static inline void
+hash_netport4_data_reset_flags(struct hash_netport4_elem *dst, u32 *flags)
+{
+ if (dst->nomatch) {
+ *flags = IPSET_FLAG_NOMATCH;
+ dst->nomatch = 0;
+ }
+}
+
static inline int
hash_netport4_data_match(const struct hash_netport4_elem *elem)
{
dst->nomatch = !!(flags & IPSET_FLAG_NOMATCH);
}
+static inline void
+hash_netport6_data_reset_flags(struct hash_netport6_elem *dst, u32 *flags)
+{
+ if (dst->nomatch) {
+ *flags = IPSET_FLAG_NOMATCH;
+ dst->nomatch = 0;
+ }
+}
+
static inline int
hash_netport6_data_match(const struct hash_netport6_elem *elem)
{
{
const struct set_elem *e = list_set_elem(map, i);
- if (i == map->size - 1 && e->id != IPSET_INVALID_ID)
- /* Last element replaced: e.g. add new,before,last */
- ip_set_put_byindex(e->id);
+ if (e->id != IPSET_INVALID_ID) {
+ const struct set_elem *x = list_set_elem(map, map->size - 1);
+
+ /* Last element replaced or pushed off */
+ if (x->id != IPSET_INVALID_ID)
+ ip_set_put_byindex(x->id);
+ }
if (with_timeout(map->timeout))
list_elem_tadd(map, i, id, ip_set_timeout_set(timeout));
else
skb_reset_network_header(skb);
IP_VS_DBG(12, "ICMP for IPIP %pI4->%pI4: mtu=%u\n",
&ip_hdr(skb)->saddr, &ip_hdr(skb)->daddr, mtu);
- rcu_read_lock();
ipv4_update_pmtu(skb, dev_net(skb->dev),
mtu, 0, 0, 0, 0);
- rcu_read_unlock();
/* Client uses PMTUD? */
if (!(cih->frag_off & htons(IP_DF)))
goto ignore_ipip;
}
/* ipvs enabled in this netns ? */
net = skb_net(skb);
- if (!net_ipvs(net)->enable)
+ ipvs = net_ipvs(net);
+ if (unlikely(sysctl_backup_only(ipvs) || !ipvs->enable))
return NF_ACCEPT;
ip_vs_fill_iph_skb(af, skb, &iph);
}
IP_VS_DBG_PKT(11, af, pp, skb, 0, "Incoming packet");
- ipvs = net_ipvs(net);
/* Check the server status */
if (cp->dest && !(cp->dest->flags & IP_VS_DEST_F_AVAILABLE)) {
/* the destination server is not available */
{
int r;
struct net *net;
+ struct netns_ipvs *ipvs;
if (ip_hdr(skb)->protocol != IPPROTO_ICMP)
return NF_ACCEPT;
/* ipvs enabled in this netns ? */
net = skb_net(skb);
- if (!net_ipvs(net)->enable)
+ ipvs = net_ipvs(net);
+ if (unlikely(sysctl_backup_only(ipvs) || !ipvs->enable))
return NF_ACCEPT;
return ip_vs_in_icmp(skb, &r, hooknum);
{
int r;
struct net *net;
+ struct netns_ipvs *ipvs;
struct ip_vs_iphdr iphdr;
ip_vs_fill_iph_skb(AF_INET6, skb, &iphdr);
/* ipvs enabled in this netns ? */
net = skb_net(skb);
- if (!net_ipvs(net)->enable)
+ ipvs = net_ipvs(net);
+ if (unlikely(sysctl_backup_only(ipvs) || !ipvs->enable))
return NF_ACCEPT;
return ip_vs_in_icmp_v6(skb, &r, hooknum, &iphdr);
.mode = 0644,
.proc_handler = proc_dointvec,
},
+ {
+ .procname = "backup_only",
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = proc_dointvec,
+ },
#ifdef CONFIG_IP_VS_DEBUG
{
.procname = "debug_level",
tbl[idx++].data = &ipvs->sysctl_nat_icmp_send;
ipvs->sysctl_pmtu_disc = 1;
tbl[idx++].data = &ipvs->sysctl_pmtu_disc;
+ tbl[idx++].data = &ipvs->sysctl_backup_only;
ipvs->sysctl_hdr = register_net_sysctl(net, "net/ipv4/vs", tbl);
sctp_chunkhdr_t _sctpch, *sch;
unsigned char chunk_type;
int event, next_state;
- int ihl;
+ int ihl, cofs;
#ifdef CONFIG_IP_VS_IPV6
ihl = cp->af == AF_INET ? ip_hdrlen(skb) : sizeof(struct ipv6hdr);
ihl = ip_hdrlen(skb);
#endif
- sch = skb_header_pointer(skb, ihl + sizeof(sctp_sctphdr_t),
- sizeof(_sctpch), &_sctpch);
+ cofs = ihl + sizeof(sctp_sctphdr_t);
+ sch = skb_header_pointer(skb, cofs, sizeof(_sctpch), &_sctpch);
if (sch == NULL)
return;
*/
if ((sch->type == SCTP_CID_COOKIE_ECHO) ||
(sch->type == SCTP_CID_COOKIE_ACK)) {
- sch = skb_header_pointer(skb, (ihl + sizeof(sctp_sctphdr_t) +
- sch->length), sizeof(_sctpch), &_sctpch);
- if (sch) {
- if (sch->type == SCTP_CID_ABORT)
+ int clen = ntohs(sch->length);
+
+ if (clen >= sizeof(sctp_chunkhdr_t)) {
+ sch = skb_header_pointer(skb, cofs + ALIGN(clen, 4),
+ sizeof(_sctpch), &_sctpch);
+ if (sch && sch->type == SCTP_CID_ABORT)
chunk_type = sch->type;
}
}
{
int ret;
+ ret = register_pernet_subsys(&dccp_net_ops);
+ if (ret < 0)
+ goto out_pernet;
+
ret = nf_ct_l4proto_register(&dccp_proto4);
if (ret < 0)
goto out_dccp4;
if (ret < 0)
goto out_dccp6;
- ret = register_pernet_subsys(&dccp_net_ops);
- if (ret < 0)
- goto out_pernet;
-
return 0;
-out_pernet:
- nf_ct_l4proto_unregister(&dccp_proto6);
out_dccp6:
nf_ct_l4proto_unregister(&dccp_proto4);
out_dccp4:
+ unregister_pernet_subsys(&dccp_net_ops);
+out_pernet:
return ret;
}
{
int ret;
- ret = nf_ct_l4proto_register(&nf_conntrack_l4proto_gre4);
- if (ret < 0)
- goto out_gre4;
-
ret = register_pernet_subsys(&proto_gre_net_ops);
if (ret < 0)
goto out_pernet;
+ ret = nf_ct_l4proto_register(&nf_conntrack_l4proto_gre4);
+ if (ret < 0)
+ goto out_gre4;
+
return 0;
-out_pernet:
- nf_ct_l4proto_unregister(&nf_conntrack_l4proto_gre4);
out_gre4:
+ unregister_pernet_subsys(&proto_gre_net_ops);
+out_pernet:
return ret;
}
{
int ret;
+ ret = register_pernet_subsys(&sctp_net_ops);
+ if (ret < 0)
+ goto out_pernet;
+
ret = nf_ct_l4proto_register(&nf_conntrack_l4proto_sctp4);
if (ret < 0)
goto out_sctp4;
if (ret < 0)
goto out_sctp6;
- ret = register_pernet_subsys(&sctp_net_ops);
- if (ret < 0)
- goto out_pernet;
-
return 0;
-out_pernet:
- nf_ct_l4proto_unregister(&nf_conntrack_l4proto_sctp6);
out_sctp6:
nf_ct_l4proto_unregister(&nf_conntrack_l4proto_sctp4);
out_sctp4:
+ unregister_pernet_subsys(&sctp_net_ops);
+out_pernet:
return ret;
}
{
int ret;
+ ret = register_pernet_subsys(&udplite_net_ops);
+ if (ret < 0)
+ goto out_pernet;
+
ret = nf_ct_l4proto_register(&nf_conntrack_l4proto_udplite4);
if (ret < 0)
goto out_udplite4;
if (ret < 0)
goto out_udplite6;
- ret = register_pernet_subsys(&udplite_net_ops);
- if (ret < 0)
- goto out_pernet;
-
return 0;
-out_pernet:
- nf_ct_l4proto_unregister(&nf_conntrack_l4proto_udplite6);
out_udplite6:
nf_ct_l4proto_unregister(&nf_conntrack_l4proto_udplite4);
out_udplite4:
+ unregister_pernet_subsys(&udplite_net_ops);
+out_pernet:
return ret;
}
end += strlen("\r\n\r\n") + clen;
msglen = origlen = end - dptr;
- if (msglen > datalen) {
- nf_ct_helper_log(skb, ct, "incomplete/bad SIP message");
- return NF_DROP;
- }
+ if (msglen > datalen)
+ return NF_ACCEPT;
ret = process_sip_msg(skb, ct, protoff, dataoff,
&dptr, &msglen);
register_net_sysctl(&init_net, "net", nf_ct_netfilter_table);
if (!nf_ct_netfilter_header) {
pr_err("nf_conntrack: can't register to sysctl.\n");
+ ret = -ENOMEM;
goto out_sysctl;
}
#endif
struct nf_nat_proto_clean {
u8 l3proto;
u8 l4proto;
- bool hash;
};
-/* Clear NAT section of all conntracks, in case we're loaded again. */
-static int nf_nat_proto_clean(struct nf_conn *i, void *data)
+/* kill conntracks with affected NAT section */
+static int nf_nat_proto_remove(struct nf_conn *i, void *data)
{
const struct nf_nat_proto_clean *clean = data;
struct nf_conn_nat *nat = nfct_nat(i);
if (!nat)
return 0;
- if (!(i->status & IPS_SRC_NAT_DONE))
- return 0;
+
if ((clean->l3proto && nf_ct_l3num(i) != clean->l3proto) ||
(clean->l4proto && nf_ct_protonum(i) != clean->l4proto))
return 0;
- if (clean->hash) {
- spin_lock_bh(&nf_nat_lock);
- hlist_del_rcu(&nat->bysource);
- spin_unlock_bh(&nf_nat_lock);
- } else {
- memset(nat, 0, sizeof(*nat));
- i->status &= ~(IPS_NAT_MASK | IPS_NAT_DONE_MASK |
- IPS_SEQ_ADJUST);
- }
- return 0;
+ return i->status & IPS_NAT_MASK ? 1 : 0;
}
static void nf_nat_l4proto_clean(u8 l3proto, u8 l4proto)
struct net *net;
rtnl_lock();
- /* Step 1 - remove from bysource hash */
- clean.hash = true;
for_each_net(net)
- nf_ct_iterate_cleanup(net, nf_nat_proto_clean, &clean);
- synchronize_rcu();
-
- /* Step 2 - clean NAT section */
- clean.hash = false;
- for_each_net(net)
- nf_ct_iterate_cleanup(net, nf_nat_proto_clean, &clean);
+ nf_ct_iterate_cleanup(net, nf_nat_proto_remove, &clean);
rtnl_unlock();
}
struct net *net;
rtnl_lock();
- /* Step 1 - remove from bysource hash */
- clean.hash = true;
- for_each_net(net)
- nf_ct_iterate_cleanup(net, nf_nat_proto_clean, &clean);
- synchronize_rcu();
- /* Step 2 - clean NAT section */
- clean.hash = false;
for_each_net(net)
- nf_ct_iterate_cleanup(net, nf_nat_proto_clean, &clean);
+ nf_ct_iterate_cleanup(net, nf_nat_proto_remove, &clean);
rtnl_unlock();
}
{
struct nf_nat_proto_clean clean = {};
- nf_ct_iterate_cleanup(net, &nf_nat_proto_clean, &clean);
+ nf_ct_iterate_cleanup(net, &nf_nat_proto_remove, &clean);
synchronize_rcu();
nf_ct_free_hashtable(net->ct.nat_bysource, net->ct.nat_htable_size);
}
return -EINVAL;
acct_name = nla_data(tb[NFACCT_NAME]);
+ if (strlen(acct_name) == 0)
+ return -EINVAL;
list_for_each_entry(nfacct, &nfnl_acct_list, head) {
if (strncmp(nfacct->name, acct_name, NFACCT_NAME_MAX) != 0)
inst->queue_num = queue_num;
inst->peer_portid = portid;
inst->queue_maxlen = NFQNL_QMAX_DEFAULT;
- inst->copy_range = 0xfffff;
+ inst->copy_range = 0xffff;
inst->copy_mode = NFQNL_COPY_NONE;
spin_lock_init(&inst->lock);
INIT_LIST_HEAD(&inst->queue_list);
#ifdef CONFIG_PROC_FS
if (!proc_create("nfnetlink_queue", 0440,
- proc_net_netfilter, &nfqnl_file_ops))
+ proc_net_netfilter, &nfqnl_file_ops)) {
+ status = -ENOMEM;
goto cleanup_subsys;
+ }
#endif
register_netdevice_notifier(&nfqnl_dev_notifier);
int err = 0;
BUG_ON(grp->name[0] == '\0');
+ BUG_ON(memchr(grp->name, '\0', GENL_NAMSIZ) == NULL);
genl_lock();
}
if (sax != NULL) {
+ memset(sax, 0, sizeof(*sax));
sax->sax25_family = AF_NETROM;
skb_copy_from_linear_data_offset(skb, 7, sax->sax25_call.ax25_call,
AX25_ADDR_LEN);
accept_sk->sk_state_change(sk);
bh_unlock_sock(accept_sk);
-
- sock_orphan(accept_sk);
}
if (listen == true) {
bh_unlock_sock(sk);
- sock_orphan(sk);
-
sk_del_node_init(sk);
}
bh_unlock_sock(sk);
- sock_orphan(sk);
-
sk_del_node_init(sk);
}
skb_get(skb);
} else {
pr_err("Receive queue is full\n");
- kfree_skb(skb);
}
nfc_llcp_sock_put(llcp_sock);
skb_get(skb);
} else {
pr_err("Receive queue is full\n");
- kfree_skb(skb);
}
}
}
if (sk->sk_state == LLCP_CONNECTED || !newsock) {
- nfc_llcp_accept_unlink(sk);
+ list_del_init(&lsk->accept_queue);
+ sock_put(sk);
+
if (newsock)
sock_graft(sk, newsock);
nfc_llcp_accept_unlink(accept_sk);
release_sock(accept_sk);
-
- sock_orphan(accept_sk);
}
}
pr_debug("%p %zu\n", sk, len);
+ msg->msg_namelen = 0;
+
lock_sock(sk);
if (sk->sk_state == LLCP_CLOSED &&
pr_debug("Datagram socket %d %d\n", ui_cb->dsap, ui_cb->ssap);
+ memset(sockaddr, 0, sizeof(*sockaddr));
sockaddr->sa_family = AF_NFC;
sockaddr->nfc_protocol = NFC_PROTO_NFC_DEP;
sockaddr->dsap = ui_cb->dsap;
return ERR_PTR(-ENOMEM);
retval = ovs_vport_cmd_fill_info(vport, skb, portid, seq, 0, cmd);
- if (retval < 0) {
- kfree_skb(skb);
- return ERR_PTR(retval);
- }
+ BUG_ON(retval < 0);
+
return skb;
}
nla_get_u32(a[OVS_VPORT_ATTR_TYPE]) != vport->ops->type)
err = -EINVAL;
+ reply = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
+ if (!reply) {
+ err = -ENOMEM;
+ goto exit_unlock;
+ }
+
if (!err && a[OVS_VPORT_ATTR_OPTIONS])
err = ovs_vport_set_options(vport, a[OVS_VPORT_ATTR_OPTIONS]);
if (err)
- goto exit_unlock;
+ goto exit_free;
+
if (a[OVS_VPORT_ATTR_UPCALL_PID])
vport->upcall_portid = nla_get_u32(a[OVS_VPORT_ATTR_UPCALL_PID]);
- reply = ovs_vport_cmd_build_info(vport, info->snd_portid, info->snd_seq,
- OVS_VPORT_CMD_NEW);
- if (IS_ERR(reply)) {
- netlink_set_err(sock_net(skb->sk)->genl_sock, 0,
- ovs_dp_vport_multicast_group.id, PTR_ERR(reply));
- goto exit_unlock;
- }
+ err = ovs_vport_cmd_fill_info(vport, reply, info->snd_portid,
+ info->snd_seq, 0, OVS_VPORT_CMD_NEW);
+ BUG_ON(err < 0);
genl_notify(reply, genl_info_net(info), info->snd_portid,
ovs_dp_vport_multicast_group.id, info->nlhdr, GFP_KERNEL);
+ rtnl_unlock();
+ return 0;
+
+exit_free:
+ kfree_skb(reply);
exit_unlock:
rtnl_unlock();
return err;
void ovs_flow_tbl_remove(struct flow_table *table, struct sw_flow *flow)
{
+ BUG_ON(table->count == 0);
hlist_del_rcu(&flow->hash_node[table->node_ver]);
table->count--;
- BUG_ON(table->count < 0);
}
/* The size of the argument for each %OVS_KEY_ATTR_* Netlink attribute. */
skb_copy_datagram_iovec(skb, 0, msg->msg_iov, copied);
if (srose != NULL) {
+ memset(srose, 0, msg->msg_namelen);
srose->srose_family = AF_ROSE;
srose->srose_addr = rose->dest_addr;
srose->srose_call = rose->dest_call;
if (err < 0)
return err;
- err = -EINVAL;
if (tb[TCA_FW_CLASSID]) {
f->res.classid = nla_get_u32(tb[TCA_FW_CLASSID]);
tcf_bind_filter(tp, &f->res, base);
}
#endif /* CONFIG_NET_CLS_IND */
+ err = -EINVAL;
if (tb[TCA_FW_MASK]) {
mask = nla_get_u32(tb[TCA_FW_MASK]);
if (mask != head->mask)
cbq_update(q);
if ((incr -= incr2) < 0)
incr = 0;
+ q->now += incr;
+ } else {
+ if (now > q->now)
+ q->now = now;
}
- q->now += incr;
q->now_rt = now;
for (;;) {
flow->deficit = q->quantum;
flow->dropped = 0;
}
- if (++sch->q.qlen < sch->limit)
+ if (++sch->q.qlen <= sch->limit)
return NET_XMIT_SUCCESS;
q->drop_overlimit++;
u64 mult;
int shift;
- r->rate_bps = rate << 3;
+ r->rate_bps = (u64)rate << 3;
r->shift = 0;
r->mult = 1;
/*
err = rpciod_up();
if (err)
goto out_no_rpciod;
- err = -EINVAL;
- if (!xprt)
- goto out_no_xprt;
+ err = -EINVAL;
if (args->version >= program->nrvers)
goto out_err;
version = program->version[args->version];
out_no_stats:
kfree(clnt);
out_err:
- xprt_put(xprt);
-out_no_xprt:
rpciod_down();
out_no_rpciod:
+ xprt_put(xprt);
return ERR_PTR(err);
}
new = rpc_new_client(args, xprt);
if (IS_ERR(new)) {
err = PTR_ERR(new);
- goto out_put;
+ goto out_err;
}
atomic_inc(&clnt->cl_count);
new->cl_chatty = clnt->cl_chatty;
return new;
-out_put:
- xprt_put(xprt);
out_err:
dprintk("RPC: %s: returned error %d\n", __func__, err);
return ERR_PTR(err);
list_add_tail(&task->u.tk_wait.list, &queue->tasks[0]);
task->tk_waitqueue = queue;
queue->qlen++;
+ /* barrier matches the read in rpc_wake_up_task_queue_locked() */
+ smp_wmb();
rpc_set_queued(task);
dprintk("RPC: %5u added to queue %p \"%s\"\n",
*/
static void rpc_wake_up_task_queue_locked(struct rpc_wait_queue *queue, struct rpc_task *task)
{
- if (RPC_IS_QUEUED(task) && task->tk_waitqueue == queue)
- __rpc_do_wake_up_task(queue, task);
+ if (RPC_IS_QUEUED(task)) {
+ smp_rmb();
+ if (task->tk_waitqueue == queue)
+ __rpc_do_wake_up_task(queue, task);
+ }
}
/*
if (addr) {
addr->family = AF_TIPC;
addr->addrtype = TIPC_ADDR_ID;
+ memset(&addr->addr, 0, sizeof(addr->addr));
addr->addr.id.ref = msg_origport(msg);
addr->addr.id.node = msg_orignode(msg);
addr->addr.name.domain = 0; /* could leave uninitialized */
goto exit;
}
+ /* will be updated in set_orig_addr() if needed */
+ m->msg_namelen = 0;
+
timeout = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
restart:
goto exit;
}
+ /* will be updated in set_orig_addr() if needed */
+ m->msg_namelen = 0;
+
target = sock_rcvlowat(sk, flags & MSG_WAITALL, buf_len);
timeout = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
#endif
}
-static int unix_release_sock(struct sock *sk, int embrion)
+static void unix_release_sock(struct sock *sk, int embrion)
{
struct unix_sock *u = unix_sk(sk);
struct path path;
if (unix_tot_inflight)
unix_gc(); /* Garbage collect fds */
-
- return 0;
}
static void init_peercred(struct sock *sk)
if (!sk)
return 0;
+ unix_release_sock(sk, 0);
sock->sk = NULL;
- return unix_release_sock(sk, 0);
+ return 0;
}
static int unix_autobind(struct socket *sock)
if ((UNIXCB(skb).pid != siocb->scm->pid) ||
(UNIXCB(skb).cred != siocb->scm->cred))
break;
- } else {
+ } else if (test_bit(SOCK_PASSCRED, &sock->flags)) {
/* Copy credentials */
scm_set_cred(siocb->scm, UNIXCB(skb).pid, UNIXCB(skb).cred);
check_creds = 1;
struct vsock_sock *vsk;
list_for_each_entry(vsk, vsock_bound_sockets(addr), bound_table)
- if (vsock_addr_equals_addr_any(addr, &vsk->local_addr))
+ if (addr->svm_port == vsk->local_addr.svm_port)
return sk_vsock(vsk);
return NULL;
list_for_each_entry(vsk, vsock_connected_sockets(src, dst),
connected_table) {
- if (vsock_addr_equals_addr(src, &vsk->remote_addr)
- && vsock_addr_equals_addr(dst, &vsk->local_addr)) {
+ if (vsock_addr_equals_addr(src, &vsk->remote_addr) &&
+ dst->svm_port == vsk->local_addr.svm_port) {
return sk_vsock(vsk);
}
}
vsk = vsock_sk(sk);
err = 0;
+ msg->msg_namelen = 0;
+
lock_sock(sk);
if (sk->sk_state != SS_CONNECTED) {
struct vsock_sock *vlistener;
struct vsock_sock *vpending;
struct sock *pending;
+ struct sockaddr_vm src;
+
+ vsock_addr_init(&src, pkt->dg.src.context, pkt->src_port);
vlistener = vsock_sk(listener);
list_for_each_entry(vpending, &vlistener->pending_links,
pending_links) {
- struct sockaddr_vm src;
- struct sockaddr_vm dst;
-
- vsock_addr_init(&src, pkt->dg.src.context, pkt->src_port);
- vsock_addr_init(&dst, pkt->dg.dst.context, pkt->dst_port);
-
if (vsock_addr_equals_addr(&src, &vpending->remote_addr) &&
- vsock_addr_equals_addr(&dst, &vpending->local_addr)) {
+ pkt->dst_port == vpending->local_addr.svm_port) {
pending = sk_vsock(vpending);
sock_hold(pending);
goto found;
*/
bh_lock_sock(sk);
- if (!sock_owned_by_user(sk) && sk->sk_state == SS_CONNECTED)
- vmci_trans(vsk)->notify_ops->handle_notify_pkt(
- sk, pkt, true, &dst, &src,
- &bh_process_pkt);
+ if (!sock_owned_by_user(sk)) {
+ /* The local context ID may be out of date, update it. */
+ vsk->local_addr.svm_cid = dst.svm_cid;
+
+ if (sk->sk_state == SS_CONNECTED)
+ vmci_trans(vsk)->notify_ops->handle_notify_pkt(
+ sk, pkt, true, &dst, &src,
+ &bh_process_pkt);
+ }
bh_unlock_sock(sk);
lock_sock(sk);
+ /* The local context ID may be out of date. */
+ vsock_sk(sk)->local_addr.svm_cid = pkt->dg.dst.context;
+
switch (sk->sk_state) {
case SS_LISTEN:
vmci_transport_recv_listen(sk, pkt);
pending = vmci_transport_get_pending(sk, pkt);
if (pending) {
lock_sock(pending);
+
+ /* The local context ID may be out of date. */
+ vsock_sk(pending)->local_addr.svm_cid = pkt->dg.dst.context;
+
switch (pending->sk_state) {
case SS_CONNECTING:
err = vmci_transport_recv_connecting_server(sk,
if (flags & MSG_OOB || flags & MSG_ERRQUEUE)
return -EOPNOTSUPP;
+ msg->msg_namelen = 0;
+
/* Retrieve the head sk_buff from the socket's receive queue. */
err = 0;
skb = skb_recv_datagram(&vsk->sk, flags, noblock, &err);
if (err)
goto out;
- msg->msg_namelen = 0;
if (msg->msg_name) {
struct sockaddr_vm *vm_addr;
}
EXPORT_SYMBOL_GPL(vsock_addr_equals_addr);
-bool vsock_addr_equals_addr_any(const struct sockaddr_vm *addr,
- const struct sockaddr_vm *other)
-{
- return (addr->svm_cid == VMADDR_CID_ANY ||
- other->svm_cid == VMADDR_CID_ANY ||
- addr->svm_cid == other->svm_cid) &&
- addr->svm_port == other->svm_port;
-}
-EXPORT_SYMBOL_GPL(vsock_addr_equals_addr_any);
-
int vsock_addr_cast(const struct sockaddr *addr,
size_t len, struct sockaddr_vm **out_addr)
{
void vsock_addr_unbind(struct sockaddr_vm *addr);
bool vsock_addr_equals_addr(const struct sockaddr_vm *addr,
const struct sockaddr_vm *other);
-bool vsock_addr_equals_addr_any(const struct sockaddr_vm *addr,
- const struct sockaddr_vm *other);
int vsock_addr_cast(const struct sockaddr *addr, size_t len,
struct sockaddr_vm **out_addr);
rdev_rfkill_poll(rdev);
}
+void cfg80211_stop_p2p_device(struct cfg80211_registered_device *rdev,
+ struct wireless_dev *wdev)
+{
+ lockdep_assert_held(&rdev->devlist_mtx);
+ lockdep_assert_held(&rdev->sched_scan_mtx);
+
+ if (WARN_ON(wdev->iftype != NL80211_IFTYPE_P2P_DEVICE))
+ return;
+
+ if (!wdev->p2p_started)
+ return;
+
+ rdev_stop_p2p_device(rdev, wdev);
+ wdev->p2p_started = false;
+
+ rdev->opencount--;
+
+ if (rdev->scan_req && rdev->scan_req->wdev == wdev) {
+ bool busy = work_busy(&rdev->scan_done_wk);
+
+ /*
+ * If the work isn't pending or running (in which case it would
+ * be waiting for the lock we hold), the driver didn't properly
+ * cancel the scan when the interface was removed. In this case,
+ * warn and leak the scan request object to avoid crashing later.
+ */
+ WARN_ON(!busy);
+
+ rdev->scan_req->aborted = true;
+ ___cfg80211_scan_done(rdev, !busy);
+ }
+}
+
static int cfg80211_rfkill_set_block(void *data, bool blocked)
{
struct cfg80211_registered_device *rdev = data;
return 0;
rtnl_lock();
- mutex_lock(&rdev->devlist_mtx);
+
+ /* read-only iteration need not hold the devlist_mtx */
list_for_each_entry(wdev, &rdev->wdev_list, list) {
if (wdev->netdev) {
/* otherwise, check iftype */
switch (wdev->iftype) {
case NL80211_IFTYPE_P2P_DEVICE:
- if (!wdev->p2p_started)
- break;
- rdev_stop_p2p_device(rdev, wdev);
- wdev->p2p_started = false;
- rdev->opencount--;
+ /* but this requires it */
+ mutex_lock(&rdev->devlist_mtx);
+ mutex_lock(&rdev->sched_scan_mtx);
+ cfg80211_stop_p2p_device(rdev, wdev);
+ mutex_unlock(&rdev->sched_scan_mtx);
+ mutex_unlock(&rdev->devlist_mtx);
break;
default:
break;
}
}
- mutex_unlock(&rdev->devlist_mtx);
rtnl_unlock();
return 0;
wdev = container_of(work, struct wireless_dev, cleanup_work);
rdev = wiphy_to_dev(wdev->wiphy);
- cfg80211_lock_rdev(rdev);
+ mutex_lock(&rdev->sched_scan_mtx);
if (WARN_ON(rdev->scan_req && rdev->scan_req->wdev == wdev)) {
rdev->scan_req->aborted = true;
___cfg80211_scan_done(rdev, true);
}
- cfg80211_unlock_rdev(rdev);
-
- mutex_lock(&rdev->sched_scan_mtx);
-
if (WARN_ON(rdev->sched_scan_req &&
rdev->sched_scan_req->dev == wdev->netdev)) {
__cfg80211_stop_sched_scan(rdev, false);
return;
mutex_lock(&rdev->devlist_mtx);
+ mutex_lock(&rdev->sched_scan_mtx);
list_del_rcu(&wdev->list);
rdev->devlist_generation++;
switch (wdev->iftype) {
case NL80211_IFTYPE_P2P_DEVICE:
- if (!wdev->p2p_started)
- break;
- rdev_stop_p2p_device(rdev, wdev);
- wdev->p2p_started = false;
- rdev->opencount--;
+ cfg80211_stop_p2p_device(rdev, wdev);
break;
default:
WARN_ON_ONCE(1);
break;
}
+ mutex_unlock(&rdev->sched_scan_mtx);
mutex_unlock(&rdev->devlist_mtx);
}
EXPORT_SYMBOL(cfg80211_unregister_wdev);
cfg80211_update_iface_num(rdev, wdev->iftype, 1);
cfg80211_lock_rdev(rdev);
mutex_lock(&rdev->devlist_mtx);
+ mutex_lock(&rdev->sched_scan_mtx);
wdev_lock(wdev);
switch (wdev->iftype) {
#ifdef CONFIG_CFG80211_WEXT
break;
}
wdev_unlock(wdev);
+ mutex_unlock(&rdev->sched_scan_mtx);
rdev->opencount++;
mutex_unlock(&rdev->devlist_mtx);
cfg80211_unlock_rdev(rdev);
void cfg80211_update_iface_num(struct cfg80211_registered_device *rdev,
enum nl80211_iftype iftype, int num);
+void cfg80211_stop_p2p_device(struct cfg80211_registered_device *rdev,
+ struct wireless_dev *wdev);
+
#define CFG80211_MAX_NUM_DIFFERENT_CHANNELS 10
#ifdef CONFIG_CFG80211_DEVELOPER_WARNINGS
if (!rdev->ops->scan)
return -EOPNOTSUPP;
- if (rdev->scan_req)
- return -EBUSY;
+ mutex_lock(&rdev->sched_scan_mtx);
+ if (rdev->scan_req) {
+ err = -EBUSY;
+ goto unlock;
+ }
if (info->attrs[NL80211_ATTR_SCAN_FREQUENCIES]) {
n_channels = validate_scan_freqs(
info->attrs[NL80211_ATTR_SCAN_FREQUENCIES]);
- if (!n_channels)
- return -EINVAL;
+ if (!n_channels) {
+ err = -EINVAL;
+ goto unlock;
+ }
} else {
enum ieee80211_band band;
n_channels = 0;
nla_for_each_nested(attr, info->attrs[NL80211_ATTR_SCAN_SSIDS], tmp)
n_ssids++;
- if (n_ssids > wiphy->max_scan_ssids)
- return -EINVAL;
+ if (n_ssids > wiphy->max_scan_ssids) {
+ err = -EINVAL;
+ goto unlock;
+ }
if (info->attrs[NL80211_ATTR_IE])
ie_len = nla_len(info->attrs[NL80211_ATTR_IE]);
else
ie_len = 0;
- if (ie_len > wiphy->max_scan_ie_len)
- return -EINVAL;
+ if (ie_len > wiphy->max_scan_ie_len) {
+ err = -EINVAL;
+ goto unlock;
+ }
request = kzalloc(sizeof(*request)
+ sizeof(*request->ssids) * n_ssids
+ sizeof(*request->channels) * n_channels
+ ie_len, GFP_KERNEL);
- if (!request)
- return -ENOMEM;
+ if (!request) {
+ err = -ENOMEM;
+ goto unlock;
+ }
if (n_ssids)
request->ssids = (void *)&request->channels[n_channels];
kfree(request);
}
+ unlock:
+ mutex_unlock(&rdev->sched_scan_mtx);
return err;
}
if (!rdev->ops->stop_p2p_device)
return -EOPNOTSUPP;
- if (!wdev->p2p_started)
- return 0;
-
- rdev_stop_p2p_device(rdev, wdev);
- wdev->p2p_started = false;
-
- mutex_lock(&rdev->devlist_mtx);
- rdev->opencount--;
- mutex_unlock(&rdev->devlist_mtx);
-
- if (WARN_ON(rdev->scan_req && rdev->scan_req->wdev == wdev)) {
- rdev->scan_req->aborted = true;
- ___cfg80211_scan_done(rdev, true);
- }
+ mutex_lock(&rdev->sched_scan_mtx);
+ cfg80211_stop_p2p_device(rdev, wdev);
+ mutex_unlock(&rdev->sched_scan_mtx);
return 0;
}
struct nlattr *nest;
int i;
- ASSERT_RDEV_LOCK(rdev);
+ lockdep_assert_held(&rdev->sched_scan_mtx);
if (WARN_ON(!req))
return 0;
union iwreq_data wrqu;
#endif
- ASSERT_RDEV_LOCK(rdev);
+ lockdep_assert_held(&rdev->sched_scan_mtx);
request = rdev->scan_req;
rdev = container_of(wk, struct cfg80211_registered_device,
scan_done_wk);
- cfg80211_lock_rdev(rdev);
+ mutex_lock(&rdev->sched_scan_mtx);
___cfg80211_scan_done(rdev, false);
- cfg80211_unlock_rdev(rdev);
+ mutex_unlock(&rdev->sched_scan_mtx);
}
void cfg80211_scan_done(struct cfg80211_scan_request *request, bool aborted)
found = rb_find_bss(dev, tmp, BSS_CMP_REGULAR);
if (found) {
- found->pub.beacon_interval = tmp->pub.beacon_interval;
- found->pub.signal = tmp->pub.signal;
- found->pub.capability = tmp->pub.capability;
- found->ts = tmp->ts;
-
/* Update IEs */
if (rcu_access_pointer(tmp->pub.proberesp_ies)) {
const struct cfg80211_bss_ies *old;
if (found->pub.hidden_beacon_bss &&
!list_empty(&found->hidden_list)) {
+ const struct cfg80211_bss_ies *f;
+
/*
* The found BSS struct is one of the probe
* response members of a group, but we're
* SSID to showing it, which is confusing so
* drop this information.
*/
+
+ f = rcu_access_pointer(tmp->pub.beacon_ies);
+ kfree_rcu((struct cfg80211_bss_ies *)f,
+ rcu_head);
goto drop;
}
kfree_rcu((struct cfg80211_bss_ies *)old,
rcu_head);
}
+
+ found->pub.beacon_interval = tmp->pub.beacon_interval;
+ found->pub.signal = tmp->pub.signal;
+ found->pub.capability = tmp->pub.capability;
+ found->ts = tmp->ts;
} else {
struct cfg80211_internal_bss *new;
struct cfg80211_internal_bss *hidden;
if (IS_ERR(rdev))
return PTR_ERR(rdev);
+ mutex_lock(&rdev->sched_scan_mtx);
if (rdev->scan_req) {
err = -EBUSY;
goto out;
dev_hold(dev);
}
out:
+ mutex_unlock(&rdev->sched_scan_mtx);
kfree(creq);
cfg80211_unlock_rdev(rdev);
return err;
ASSERT_RTNL();
ASSERT_RDEV_LOCK(rdev);
ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_held(&rdev->sched_scan_mtx);
if (rdev->scan_req)
return -EBUSY;
rtnl_lock();
cfg80211_lock_rdev(rdev);
mutex_lock(&rdev->devlist_mtx);
+ mutex_lock(&rdev->sched_scan_mtx);
list_for_each_entry(wdev, &rdev->wdev_list, list) {
wdev_lock(wdev);
wdev_unlock(wdev);
}
+ mutex_unlock(&rdev->sched_scan_mtx);
mutex_unlock(&rdev->devlist_mtx);
cfg80211_unlock_rdev(rdev);
rtnl_unlock();
{
struct wireless_dev *wdev = dev->ieee80211_ptr;
- mutex_lock(&wiphy_to_dev(wdev->wiphy)->devlist_mtx);
wdev_lock(wdev);
__cfg80211_sme_scan_done(dev);
wdev_unlock(wdev);
- mutex_unlock(&wiphy_to_dev(wdev->wiphy)->devlist_mtx);
}
void cfg80211_sme_rx_auth(struct net_device *dev,
int err;
mutex_lock(&rdev->devlist_mtx);
+ /* might request scan - scan_mtx -> wdev_mtx dependency */
+ mutex_lock(&rdev->sched_scan_mtx);
wdev_lock(dev->ieee80211_ptr);
err = __cfg80211_connect(rdev, dev, connect, connkeys, NULL);
wdev_unlock(dev->ieee80211_ptr);
+ mutex_unlock(&rdev->sched_scan_mtx);
mutex_unlock(&rdev->devlist_mtx);
return err;
#define WIPHY_PR_ARG __entry->wiphy_name
#define WDEV_ENTRY __field(u32, id)
-#define WDEV_ASSIGN (__entry->id) = (wdev ? wdev->identifier : 0)
+#define WDEV_ASSIGN (__entry->id) = (!IS_ERR_OR_NULL(wdev) \
+ ? wdev->identifier : 0)
#define WDEV_PR_FMT "wdev(%u)"
#define WDEV_PR_ARG (__entry->id)
),
TP_fast_assign(
WIPHY_ASSIGN;
- WIPHY_ASSIGN;
+ NETDEV_ASSIGN;
__entry->acl_policy = params->acl_policy;
),
TP_printk(WIPHY_PR_FMT ", " NETDEV_PR_FMT ", acl policy: %d",
cfg80211_lock_rdev(rdev);
mutex_lock(&rdev->devlist_mtx);
+ mutex_lock(&rdev->sched_scan_mtx);
wdev_lock(wdev);
if (wdev->sme_state != CFG80211_SME_IDLE) {
err = cfg80211_mgd_wext_connect(rdev, wdev);
out:
wdev_unlock(wdev);
+ mutex_unlock(&rdev->sched_scan_mtx);
mutex_unlock(&rdev->devlist_mtx);
cfg80211_unlock_rdev(rdev);
return err;
cfg80211_lock_rdev(rdev);
mutex_lock(&rdev->devlist_mtx);
+ mutex_lock(&rdev->sched_scan_mtx);
wdev_lock(wdev);
err = 0;
err = cfg80211_mgd_wext_connect(rdev, wdev);
out:
wdev_unlock(wdev);
+ mutex_unlock(&rdev->sched_scan_mtx);
mutex_unlock(&rdev->devlist_mtx);
cfg80211_unlock_rdev(rdev);
return err;
cfg80211_lock_rdev(rdev);
mutex_lock(&rdev->devlist_mtx);
+ mutex_lock(&rdev->sched_scan_mtx);
wdev_lock(wdev);
if (wdev->sme_state != CFG80211_SME_IDLE) {
err = cfg80211_mgd_wext_connect(rdev, wdev);
out:
wdev_unlock(wdev);
+ mutex_unlock(&rdev->sched_scan_mtx);
mutex_unlock(&rdev->devlist_mtx);
cfg80211_unlock_rdev(rdev);
return err;
x->xflags &= ~XFRM_TIME_DEFER;
}
+static void xfrm_replay_notify_esn(struct xfrm_state *x, int event)
+{
+ u32 seq_diff, oseq_diff;
+ struct km_event c;
+ struct xfrm_replay_state_esn *replay_esn = x->replay_esn;
+ struct xfrm_replay_state_esn *preplay_esn = x->preplay_esn;
+
+ /* we send notify messages in case
+ * 1. we updated one of the sequence numbers, and the seqno difference
+ * is at least x->replay_maxdiff; in this case we also update the
+ * timeout of our timer function
+ * 2. if x->replay_maxage has elapsed since last update,
+ * and there were changes
+ *
+ * The state structure must be locked!
+ */
+
+ switch (event) {
+ case XFRM_REPLAY_UPDATE:
+ if (!x->replay_maxdiff)
+ break;
+
+ if (replay_esn->seq_hi == preplay_esn->seq_hi)
+ seq_diff = replay_esn->seq - preplay_esn->seq;
+ else
+ seq_diff = ~preplay_esn->seq + replay_esn->seq + 1;
+
+ if (replay_esn->oseq_hi == preplay_esn->oseq_hi)
+ oseq_diff = replay_esn->oseq - preplay_esn->oseq;
+ else
+ oseq_diff = ~preplay_esn->oseq + replay_esn->oseq + 1;
+
+ if (seq_diff < x->replay_maxdiff &&
+ oseq_diff < x->replay_maxdiff) {
+
+ if (x->xflags & XFRM_TIME_DEFER)
+ event = XFRM_REPLAY_TIMEOUT;
+ else
+ return;
+ }
+
+ break;
+
+ case XFRM_REPLAY_TIMEOUT:
+ if (memcmp(x->replay_esn, x->preplay_esn,
+ xfrm_replay_state_esn_len(replay_esn)) == 0) {
+ x->xflags |= XFRM_TIME_DEFER;
+ return;
+ }
+
+ break;
+ }
+
+ memcpy(x->preplay_esn, x->replay_esn,
+ xfrm_replay_state_esn_len(replay_esn));
+ c.event = XFRM_MSG_NEWAE;
+ c.data.aevent = event;
+ km_state_notify(x, &c);
+
+ if (x->replay_maxage &&
+ !mod_timer(&x->rtimer, jiffies + x->replay_maxage))
+ x->xflags &= ~XFRM_TIME_DEFER;
+}
+
static int xfrm_replay_overflow_esn(struct xfrm_state *x, struct sk_buff *skb)
{
int err = 0;
.advance = xfrm_replay_advance_esn,
.check = xfrm_replay_check_esn,
.recheck = xfrm_replay_recheck_esn,
- .notify = xfrm_replay_notify_bmp,
+ .notify = xfrm_replay_notify_esn,
.overflow = xfrm_replay_overflow_esn,
};
$dstat !~ /^'X'$/ && # character constants
$dstat !~ /$exceptions/ &&
$dstat !~ /^\.$Ident\s*=/ && # .foo =
+ $dstat !~ /^(?:\#\s*$Ident|\#\s*$Constant)\s*$/ && # stringification #foo
$dstat !~ /^do\s*$Constant\s*while\s*$Constant;?$/ && # do {...} while (...); // do {...} while (...)
$dstat !~ /^for\s*$Constant$/ && # for (...)
$dstat !~ /^for\s*$Constant\s+(?:$Ident|-?$Constant)$/ && # for (...) bar()
{
return 0;
}
+
+static void cap_skb_owned_by(struct sk_buff *skb, struct sock *sk)
+{
+}
+
#endif /* CONFIG_SECURITY_NETWORK */
#ifdef CONFIG_SECURITY_NETWORK_XFRM
set_to_cap_if_null(ops, tun_dev_open);
set_to_cap_if_null(ops, tun_dev_attach_queue);
set_to_cap_if_null(ops, tun_dev_attach);
+ set_to_cap_if_null(ops, skb_owned_by);
#endif /* CONFIG_SECURITY_NETWORK */
#ifdef CONFIG_SECURITY_NETWORK_XFRM
set_to_cap_if_null(ops, xfrm_policy_alloc_security);
}
EXPORT_SYMBOL(security_tun_dev_open);
+void security_skb_owned_by(struct sk_buff *skb, struct sock *sk)
+{
+ security_ops->skb_owned_by(skb, sk);
+}
+
#endif /* CONFIG_SECURITY_NETWORK */
#ifdef CONFIG_SECURITY_NETWORK_XFRM
#include <linux/tty.h>
#include <net/icmp.h>
#include <net/ip.h> /* for local_port_range[] */
+#include <net/sock.h>
#include <net/tcp.h> /* struct or_callable used in sock_rcv_skb */
#include <net/net_namespace.h>
#include <net/netlabel.h>
selinux_skb_peerlbl_sid(skb, family, &sksec->peer_sid);
}
+static void selinux_skb_owned_by(struct sk_buff *skb, struct sock *sk)
+{
+ skb_set_owner_w(skb, sk);
+}
+
static int selinux_secmark_relabel_packet(u32 sid)
{
const struct task_security_struct *__tsec;
.tun_dev_attach_queue = selinux_tun_dev_attach_queue,
.tun_dev_attach = selinux_tun_dev_attach,
.tun_dev_open = selinux_tun_dev_open,
+ .skb_owned_by = selinux_skb_owned_by,
#ifdef CONFIG_SECURITY_NETWORK_XFRM
.xfrm_policy_alloc_security = selinux_xfrm_policy_alloc,
/* Only disallow PTRACE_TRACEME on more aggressive settings. */
switch (ptrace_scope) {
case YAMA_SCOPE_CAPABILITY:
- rcu_read_lock();
- if (!ns_capable(__task_cred(parent)->user_ns, CAP_SYS_PTRACE))
+ if (!has_ns_capability(parent, current_user_ns(), CAP_SYS_PTRACE))
rc = -EPERM;
- rcu_read_unlock();
break;
case YAMA_SCOPE_NO_ATTACH:
rc = -EPERM;
int snd_pcm_lib_mmap_iomem(struct snd_pcm_substream *substream,
struct vm_area_struct *area)
{
- long size;
- unsigned long offset;
+ struct snd_pcm_runtime *runtime = substream->runtime;
area->vm_page_prot = pgprot_noncached(area->vm_page_prot);
- area->vm_flags |= VM_IO;
- size = area->vm_end - area->vm_start;
- offset = area->vm_pgoff << PAGE_SHIFT;
- if (io_remap_pfn_range(area, area->vm_start,
- (substream->runtime->dma_addr + offset) >> PAGE_SHIFT,
- size, area->vm_page_prot))
- return -EAGAIN;
- return 0;
+ return vm_iomap_memory(area, runtime->dma_addr, runtime->dma_bytes);
}
EXPORT_SYMBOL(snd_pcm_lib_mmap_iomem);
"Line Out", "Speaker", "HP Out", "CD",
"SPDIF Out", "Digital Out", "Modem Line", "Modem Hand",
"Line In", "Aux", "Mic", "Telephony",
- "SPDIF In", "Digitial In", "Reserved", "Other"
+ "SPDIF In", "Digital In", "Reserved", "Other"
};
return jack_types[(cfg & AC_DEFCFG_DEVICE)
if (val & AC_DIG1_PROFESSIONAL)
sbits |= IEC958_AES0_PROFESSIONAL;
if (sbits & IEC958_AES0_PROFESSIONAL) {
- if (sbits & AC_DIG1_EMPHASIS)
+ if (val & AC_DIG1_EMPHASIS)
sbits |= IEC958_AES0_PRO_EMPHASIS_5015;
} else {
if (val & AC_DIG1_EMPHASIS)
unsigned char *buf, int *eld_size)
{
int i;
- int ret;
+ int ret = 0;
int size;
/*
static void path_power_down_sync(struct hda_codec *codec, struct nid_path *path)
{
struct hda_gen_spec *spec = codec->spec;
- bool changed;
+ bool changed = false;
int i;
if (!spec->power_down_unused || path->active)
BAD_NO_EXTRA_SURR_DAC = 0x101,
/* Primary DAC shared with main surrounds */
BAD_SHARED_SURROUND = 0x100,
+ /* No independent HP possible */
+ BAD_NO_INDEP_HP = 0x40,
/* Primary DAC shared with main CLFE */
BAD_SHARED_CLFE = 0x10,
/* Primary DAC shared with extra surrounds */
return snd_hda_get_path_idx(codec, path);
}
+/* check whether the independent HP is available with the current config */
+static bool indep_hp_possible(struct hda_codec *codec)
+{
+ struct hda_gen_spec *spec = codec->spec;
+ struct auto_pin_cfg *cfg = &spec->autocfg;
+ struct nid_path *path;
+ int i, idx;
+
+ if (cfg->line_out_type == AUTO_PIN_HP_OUT)
+ idx = spec->out_paths[0];
+ else
+ idx = spec->hp_paths[0];
+ path = snd_hda_get_path_from_idx(codec, idx);
+ if (!path)
+ return false;
+
+ /* assume no path conflicts unless aamix is involved */
+ if (!spec->mixer_nid || !is_nid_contained(path, spec->mixer_nid))
+ return true;
+
+ /* check whether output paths contain aamix */
+ for (i = 0; i < cfg->line_outs; i++) {
+ if (spec->out_paths[i] == idx)
+ break;
+ path = snd_hda_get_path_from_idx(codec, spec->out_paths[i]);
+ if (path && is_nid_contained(path, spec->mixer_nid))
+ return false;
+ }
+ for (i = 0; i < cfg->speaker_outs; i++) {
+ path = snd_hda_get_path_from_idx(codec, spec->speaker_paths[i]);
+ if (path && is_nid_contained(path, spec->mixer_nid))
+ return false;
+ }
+
+ return true;
+}
+
/* fill the empty entries in the dac array for speaker/hp with the
* shared dac pointed by the paths
*/
badness += BAD_MULTI_IO;
}
+ if (spec->indep_hp && !indep_hp_possible(codec))
+ badness += BAD_NO_INDEP_HP;
+
/* re-fill the shared DAC for speaker / headphone */
if (cfg->line_out_type != AUTO_PIN_HP_OUT)
refill_shared_dacs(codec, cfg->hp_outs,
cfg->speaker_pins, val);
}
+ /* clear indep_hp flag if not available */
+ if (spec->indep_hp && !indep_hp_possible(codec))
+ spec->indep_hp = 0;
+
kfree(best_cfg);
return 0;
}
* this may give more power-saving, but will take longer time to
* wake up.
*/
-static int power_save_controller = -1;
-module_param(power_save_controller, bint, 0644);
+static bool power_save_controller = 1;
+module_param(power_save_controller, bool, 0644);
MODULE_PARM_DESC(power_save_controller, "Reset controller in power save mode.");
#endif /* CONFIG_PM */
unsigned int opened :1;
unsigned int running :1;
unsigned int irq_pending :1;
+ unsigned int prepared:1;
+ unsigned int locked:1;
/*
* For VIA:
* A flag to ensure DMA position is 0
struct timecounter azx_tc;
struct cyclecounter azx_cc;
+
+#ifdef CONFIG_SND_HDA_DSP_LOADER
+ struct mutex dsp_mutex;
+#endif
};
+/* DSP lock helpers */
+#ifdef CONFIG_SND_HDA_DSP_LOADER
+#define dsp_lock_init(dev) mutex_init(&(dev)->dsp_mutex)
+#define dsp_lock(dev) mutex_lock(&(dev)->dsp_mutex)
+#define dsp_unlock(dev) mutex_unlock(&(dev)->dsp_mutex)
+#define dsp_is_locked(dev) ((dev)->locked)
+#else
+#define dsp_lock_init(dev) do {} while (0)
+#define dsp_lock(dev) do {} while (0)
+#define dsp_unlock(dev) do {} while (0)
+#define dsp_is_locked(dev) 0
+#endif
+
/* CORB/RIRB */
struct azx_rb {
u32 *buf; /* CORB/RIRB buffer
/* card list (for power_save trigger) */
struct list_head list;
+
+#ifdef CONFIG_SND_HDA_DSP_LOADER
+ struct azx_dev saved_azx_dev;
+#endif
};
#define CREATE_TRACE_POINTS
dev = chip->capture_index_offset;
nums = chip->capture_streams;
}
- for (i = 0; i < nums; i++, dev++)
- if (!chip->azx_dev[dev].opened) {
- res = &chip->azx_dev[dev];
- if (res->assigned_key == key)
- break;
+ for (i = 0; i < nums; i++, dev++) {
+ struct azx_dev *azx_dev = &chip->azx_dev[dev];
+ dsp_lock(azx_dev);
+ if (!azx_dev->opened && !dsp_is_locked(azx_dev)) {
+ res = azx_dev;
+ if (res->assigned_key == key) {
+ res->opened = 1;
+ res->assigned_key = key;
+ dsp_unlock(azx_dev);
+ return azx_dev;
+ }
}
+ dsp_unlock(azx_dev);
+ }
if (res) {
+ dsp_lock(res);
res->opened = 1;
res->assigned_key = key;
+ dsp_unlock(res);
}
return res;
}
struct azx_dev *azx_dev = get_azx_dev(substream);
int ret;
+ dsp_lock(azx_dev);
+ if (dsp_is_locked(azx_dev)) {
+ ret = -EBUSY;
+ goto unlock;
+ }
+
mark_runtime_wc(chip, azx_dev, substream, false);
azx_dev->bufsize = 0;
azx_dev->period_bytes = 0;
ret = snd_pcm_lib_malloc_pages(substream,
params_buffer_bytes(hw_params));
if (ret < 0)
- return ret;
+ goto unlock;
mark_runtime_wc(chip, azx_dev, substream, true);
+ unlock:
+ dsp_unlock(azx_dev);
return ret;
}
struct hda_pcm_stream *hinfo = apcm->hinfo[substream->stream];
/* reset BDL address */
- azx_sd_writel(azx_dev, SD_BDLPL, 0);
- azx_sd_writel(azx_dev, SD_BDLPU, 0);
- azx_sd_writel(azx_dev, SD_CTL, 0);
- azx_dev->bufsize = 0;
- azx_dev->period_bytes = 0;
- azx_dev->format_val = 0;
+ dsp_lock(azx_dev);
+ if (!dsp_is_locked(azx_dev)) {
+ azx_sd_writel(azx_dev, SD_BDLPL, 0);
+ azx_sd_writel(azx_dev, SD_BDLPU, 0);
+ azx_sd_writel(azx_dev, SD_CTL, 0);
+ azx_dev->bufsize = 0;
+ azx_dev->period_bytes = 0;
+ azx_dev->format_val = 0;
+ }
snd_hda_codec_cleanup(apcm->codec, hinfo, substream);
mark_runtime_wc(chip, azx_dev, substream, false);
+ azx_dev->prepared = 0;
+ dsp_unlock(azx_dev);
return snd_pcm_lib_free_pages(substream);
}
snd_hda_spdif_out_of_nid(apcm->codec, hinfo->nid);
unsigned short ctls = spdif ? spdif->ctls : 0;
+ dsp_lock(azx_dev);
+ if (dsp_is_locked(azx_dev)) {
+ err = -EBUSY;
+ goto unlock;
+ }
+
azx_stream_reset(chip, azx_dev);
format_val = snd_hda_calc_stream_format(runtime->rate,
runtime->channels,
snd_printk(KERN_ERR SFX
"%s: invalid format_val, rate=%d, ch=%d, format=%d\n",
pci_name(chip->pci), runtime->rate, runtime->channels, runtime->format);
- return -EINVAL;
+ err = -EINVAL;
+ goto unlock;
}
bufsize = snd_pcm_lib_buffer_bytes(substream);
azx_dev->no_period_wakeup = runtime->no_period_wakeup;
err = azx_setup_periods(chip, substream, azx_dev);
if (err < 0)
- return err;
+ goto unlock;
}
/* wallclk has 24Mhz clock source */
if ((chip->driver_caps & AZX_DCAPS_CTX_WORKAROUND) &&
stream_tag > chip->capture_streams)
stream_tag -= chip->capture_streams;
- return snd_hda_codec_prepare(apcm->codec, hinfo, stream_tag,
+ err = snd_hda_codec_prepare(apcm->codec, hinfo, stream_tag,
azx_dev->format_val, substream);
+
+ unlock:
+ if (!err)
+ azx_dev->prepared = 1;
+ dsp_unlock(azx_dev);
+ return err;
}
static int azx_pcm_trigger(struct snd_pcm_substream *substream, int cmd)
azx_dev = get_azx_dev(substream);
trace_azx_pcm_trigger(chip, azx_dev, cmd);
+ if (dsp_is_locked(azx_dev) || !azx_dev->prepared)
+ return -EPIPE;
+
switch (cmd) {
case SNDRV_PCM_TRIGGER_START:
rstart = 1;
struct azx_dev *azx_dev;
int err;
- if (snd_hda_lock_devices(bus))
- return -EBUSY;
+ azx_dev = azx_get_dsp_loader_dev(chip);
+
+ dsp_lock(azx_dev);
+ spin_lock_irq(&chip->reg_lock);
+ if (azx_dev->running || azx_dev->locked) {
+ spin_unlock_irq(&chip->reg_lock);
+ err = -EBUSY;
+ goto unlock;
+ }
+ azx_dev->prepared = 0;
+ chip->saved_azx_dev = *azx_dev;
+ azx_dev->locked = 1;
+ spin_unlock_irq(&chip->reg_lock);
err = snd_dma_alloc_pages(SNDRV_DMA_TYPE_DEV_SG,
snd_dma_pci_data(chip->pci),
byte_size, bufp);
if (err < 0)
- goto unlock;
+ goto err_alloc;
mark_pages_wc(chip, bufp, true);
- azx_dev = azx_get_dsp_loader_dev(chip);
azx_dev->bufsize = byte_size;
azx_dev->period_bytes = byte_size;
azx_dev->format_val = format;
goto error;
azx_setup_controller(chip, azx_dev);
+ dsp_unlock(azx_dev);
return azx_dev->stream_tag;
error:
mark_pages_wc(chip, bufp, false);
snd_dma_free_pages(bufp);
-unlock:
- snd_hda_unlock_devices(bus);
+ err_alloc:
+ spin_lock_irq(&chip->reg_lock);
+ if (azx_dev->opened)
+ *azx_dev = chip->saved_azx_dev;
+ azx_dev->locked = 0;
+ spin_unlock_irq(&chip->reg_lock);
+ unlock:
+ dsp_unlock(azx_dev);
return err;
}
struct azx *chip = bus->private_data;
struct azx_dev *azx_dev = azx_get_dsp_loader_dev(chip);
- if (!dmab->area)
+ if (!dmab->area || !azx_dev->locked)
return;
+ dsp_lock(azx_dev);
/* reset BDL address */
azx_sd_writel(azx_dev, SD_BDLPL, 0);
azx_sd_writel(azx_dev, SD_BDLPU, 0);
snd_dma_free_pages(dmab);
dmab->area = NULL;
- snd_hda_unlock_devices(bus);
+ spin_lock_irq(&chip->reg_lock);
+ if (azx_dev->opened)
+ *azx_dev = chip->saved_azx_dev;
+ azx_dev->locked = 0;
+ spin_unlock_irq(&chip->reg_lock);
+ dsp_unlock(azx_dev);
}
#endif /* CONFIG_SND_HDA_DSP_LOADER */
struct snd_card *card = dev_get_drvdata(dev);
struct azx *chip = card->private_data;
- if (power_save_controller > 0)
- return 0;
if (!power_save_controller ||
!(chip->driver_caps & AZX_DCAPS_PM_RUNTIME))
return -EBUSY;
}
for (i = 0; i < chip->num_streams; i++) {
+ dsp_lock_init(&chip->azx_dev[i]);
/* allocate memory for the BDL for each stream */
err = snd_dma_alloc_pages(SNDRV_DMA_TYPE_DEV,
snd_dma_pci_data(chip->pci),
snd_hda_gen_update_outputs(codec);
if (spec->gpio_eapd_hp) {
- unsigned int gpio = spec->gen.hp_jack_present ?
+ spec->gpio_data = spec->gen.hp_jack_present ?
spec->gpio_eapd_hp : spec->gpio_eapd_speaker;
snd_hda_codec_write(codec, 0x01, 0,
- AC_VERB_SET_GPIO_DATA, gpio);
+ AC_VERB_SET_GPIO_DATA, spec->gpio_data);
}
}
}
if (spec->beep_amp)
- snd_hda_attach_beep_device(codec, spec->beep_amp);
+ snd_hda_attach_beep_device(codec, get_amp_nid_(spec->beep_amp));
return 0;
}
}
if (spec->beep_amp)
- snd_hda_attach_beep_device(codec, spec->beep_amp);
+ snd_hda_attach_beep_device(codec, get_amp_nid_(spec->beep_amp));
return 0;
}
}
if (spec->beep_amp)
- snd_hda_attach_beep_device(codec, spec->beep_amp);
+ snd_hda_attach_beep_device(codec, get_amp_nid_(spec->beep_amp));
return 0;
}
return 0;
}
+static void cx_auto_free(struct hda_codec *codec)
+{
+ snd_hda_detach_beep_device(codec);
+ snd_hda_gen_free(codec);
+}
+
static const struct hda_codec_ops cx_auto_patch_ops = {
.build_controls = cx_auto_build_controls,
.build_pcms = snd_hda_gen_build_pcms,
.init = snd_hda_gen_init,
- .free = snd_hda_gen_free,
+ .free = cx_auto_free,
.unsol_event = snd_hda_jack_unsol_event,
#ifdef CONFIG_PM
.check_power_status = snd_hda_gen_check_power_status,
codec->patch_ops = cx_auto_patch_ops;
if (spec->beep_amp)
- snd_hda_attach_beep_device(codec, spec->beep_amp);
+ snd_hda_attach_beep_device(codec, get_amp_nid_(spec->beep_amp));
/* Some laptops with Conexant chips show stalls in S3 resume,
* which falls into the single-cmd mode.
_snd_printd(SND_PR_VERBOSE,
"HDMI status: Codec=%d Pin=%d Presence_Detect=%d ELD_Valid=%d\n",
- codec->addr, pin_nid, eld->monitor_present, eld->eld_valid);
+ codec->addr, pin_nid, pin_eld->monitor_present, eld->eld_valid);
if (eld->eld_valid) {
if (snd_hdmi_get_eld(codec, pin_nid, eld->eld_buffer,
const hda_nid_t *ssids;
if (codec->vendor_id == 0x10ec0272 || codec->vendor_id == 0x10ec0663 ||
- codec->vendor_id == 0x10ec0665 || codec->vendor_id == 0x10ec0670)
+ codec->vendor_id == 0x10ec0665 || codec->vendor_id == 0x10ec0670 ||
+ codec->vendor_id == 0x10ec0671)
ssids = alc663_ssids;
else
ssids = alc662_ssids;
{ .id = 0x10ec0665, .name = "ALC665", .patch = patch_alc662 },
{ .id = 0x10ec0668, .name = "ALC668", .patch = patch_alc662 },
{ .id = 0x10ec0670, .name = "ALC670", .patch = patch_alc662 },
+ { .id = 0x10ec0671, .name = "ALC671", .patch = patch_alc662 },
{ .id = 0x10ec0680, .name = "ALC680", .patch = patch_alc680 },
{ .id = 0x10ec0880, .name = "ALC880", .patch = patch_alc880 },
{ .id = 0x10ec0882, .name = "ALC882", .patch = patch_alc882 },
switch (params_format(params)) {
case SNDRV_PCM_FORMAT_S8:
width = SI476X_PCM_FORMAT_S8;
+ break;
case SNDRV_PCM_FORMAT_S16_LE:
width = SI476X_PCM_FORMAT_S16_LE;
break;
struct snd_kcontrol *kcontrol, int event)
{
struct snd_soc_codec *codec = w->codec;
- struct arizona *arizona = dev_get_drvdata(codec->dev);
+ struct arizona *arizona = dev_get_drvdata(codec->dev->parent);
struct regmap *regmap = codec->control_data;
const struct reg_default *patch = NULL;
int i, patch_size;
{ "ROP", NULL, "Right Speaker PGA" },
{ "RON", NULL, "Right Speaker PGA" },
+ { "Charge Pump", NULL, "CLK_DSP" },
+
{ "Left Headphone Output PGA", NULL, "Charge Pump" },
{ "Right Headphone Output PGA", NULL, "Charge Pump" },
{ "Left Line Output PGA", NULL, "Charge Pump" },
&buf_list);
if (!buf) {
adsp_err(dsp, "Out of memory\n");
- return -ENOMEM;
+ ret = -ENOMEM;
+ goto out_fw;
}
adsp_dbg(dsp, "%s.%d: Writing %d bytes at %x\n",
wm_adsp_buf_free(&buf_list);
out:
kfree(file);
- return 0;
+ return ret;
}
int wm_adsp1_init(struct wm_adsp *adsp)
if (imx_ssi->ac97_reset)
imx_ssi->ac97_reset(ac97);
+ /* The first read sometimes fails, so do a dummy read */
+ imx_ssi_ac97_read(ac97, 0);
}
static void imx_ssi_ac97_warm_reset(struct snd_ac97 *ac97)
if (imx_ssi->ac97_warm_reset)
imx_ssi->ac97_warm_reset(ac97);
+
+ /* The first read sometimes fails, so do a dummy read */
+ imx_ssi_ac97_read(ac97, 0);
}
struct snd_ac97_bus_ops soc_ac97_ops = {
.num_links = ARRAY_SIZE(pcm030_fabric_dai),
};
-static int __init pcm030_fabric_probe(struct platform_device *op)
+static int pcm030_fabric_probe(struct platform_device *op)
{
struct device_node *np = op->dev.of_node;
struct device_node *platform_np;
static struct i2s_dai *i2s_alloc_dai(struct platform_device *pdev, bool sec)
{
struct i2s_dai *i2s;
+ int ret;
i2s = devm_kzalloc(&pdev->dev, sizeof(struct i2s_dai), GFP_KERNEL);
if (i2s == NULL)
i2s->i2s_dai_drv.capture.channels_max = 2;
i2s->i2s_dai_drv.capture.rates = SAMSUNG_I2S_RATES;
i2s->i2s_dai_drv.capture.formats = SAMSUNG_I2S_FMTS;
+ dev_set_drvdata(&i2s->pdev->dev, i2s);
} else { /* Create a new platform_device for Secondary */
- i2s->pdev = platform_device_register_resndata(NULL,
- "samsung-i2s-sec", -1, NULL, 0, NULL, 0);
+ i2s->pdev = platform_device_alloc("samsung-i2s-sec", -1);
if (IS_ERR(i2s->pdev))
return NULL;
- }
- /* Pre-assign snd_soc_dai_set_drvdata */
- dev_set_drvdata(&i2s->pdev->dev, i2s);
+ platform_set_drvdata(i2s->pdev, i2s);
+ ret = platform_device_add(i2s->pdev);
+ if (ret < 0)
+ return NULL;
+ }
return i2s;
}
if (samsung_dai_type == TYPE_SEC) {
sec_dai = dev_get_drvdata(&pdev->dev);
+ if (!sec_dai) {
+ dev_err(&pdev->dev, "Unable to get drvdata\n");
+ return -EFAULT;
+ }
snd_soc_register_dai(&sec_dai->pdev->dev,
&sec_dai->i2s_dai_drv);
asoc_dma_platform_register(&pdev->dev);
return 0;
}
-static struct snd_soc_platform sh7760_soc_platform = {
- .pcm_ops = &camelot_pcm_ops,
+static struct snd_soc_platform_driver sh7760_soc_platform = {
+ .ops = &camelot_pcm_ops,
.pcm_new = camelot_pcm_new,
.pcm_free = camelot_pcm_free,
};
if (platform->driver->compr_ops && platform->driver->compr_ops->set_params) {
ret = platform->driver->compr_ops->set_params(cstream, params);
if (ret < 0)
- goto out;
+ goto err;
}
if (rtd->dai_link->compr_ops && rtd->dai_link->compr_ops->set_params) {
ret = rtd->dai_link->compr_ops->set_params(cstream);
if (ret < 0)
- goto out;
+ goto err;
}
snd_soc_dapm_stream_event(rtd, SNDRV_PCM_STREAM_PLAYBACK,
SND_SOC_DAPM_STREAM_START);
-out:
+ /* cancel any delayed stream shutdown that is pending */
+ rtd->pop_wait = 0;
+ mutex_unlock(&rtd->pcm_mutex);
+
+ cancel_delayed_work_sync(&rtd->delayed_work);
+
+ return ret;
+
+err:
mutex_unlock(&rtd->pcm_mutex);
return ret;
}
val = val << shift;
ret = snd_soc_update_bits_locked(codec, reg, val_mask, val);
- if (ret != 0)
+ if (ret < 0)
return ret;
if (snd_soc_volsw_is_stereo(mc)) {
if (params->mask) {
ret = regmap_read(codec->control_data, params->base, &val);
if (ret != 0)
- return ret;
+ goto out;
val &= params->mask;
((u32 *)data)[0] |= cpu_to_be32(val);
break;
default:
- return -EINVAL;
+ ret = -EINVAL;
+ goto out;
}
}
ret = regmap_raw_write(codec->control_data, params->base,
data, len);
+out:
kfree(data);
return ret;
dev_err(card->dev,
"ASoC: Property '%s' index %d could not be read: %d\n",
propname, 2 * i, ret);
- kfree(routes);
return -EINVAL;
}
ret = of_property_read_string_index(np, propname,
dev_err(card->dev,
"ASoC: Property '%s' index %d could not be read: %d\n",
propname, (2 * i) + 1, ret);
- kfree(routes);
return -EINVAL;
}
}
if (path->weak)
continue;
+ if (path->walking)
+ return 1;
+
if (path->walked)
continue;
if (path->sink && path->connect) {
path->walked = 1;
+ path->walking = 1;
/* do we need to add this widget to the list ? */
if (list) {
dev_err(widget->dapm->dev,
"ASoC: could not add widget %s\n",
widget->name);
+ path->walking = 0;
return con;
}
}
con += is_connected_output_ep(path->sink, list);
+
+ path->walking = 0;
}
}
if (path->weak)
continue;
+ if (path->walking)
+ return 1;
+
if (path->walked)
continue;
if (path->source && path->connect) {
path->walked = 1;
+ path->walking = 1;
/* do we need to add this widget to the list ? */
if (list) {
dev_err(widget->dapm->dev,
"ASoC: could not add widget %s\n",
widget->name);
+ path->walking = 0;
return con;
}
}
con += is_connected_input_ep(path->source, list);
+
+ path->walking = 0;
}
}
static u64 spear_pcm_dmamask = DMA_BIT_MASK(32);
-static int spear_pcm_new(struct snd_card *card,
- struct snd_soc_dai *dai, struct snd_pcm *pcm)
+static int spear_pcm_new(struct snd_soc_pcm_runtime *rtd)
{
+ struct snd_card *card = rtd->card->snd_card;
int ret;
if (!card->dev->dma_mask)
if (!card->dev->coherent_dma_mask)
card->dev->coherent_dma_mask = DMA_BIT_MASK(32);
- if (dai->driver->playback.channels_min) {
- ret = spear_pcm_preallocate_dma_buffer(pcm,
+ if (rtd->cpu_dai->driver->playback.channels_min) {
+ ret = spear_pcm_preallocate_dma_buffer(rtd->pcm,
SNDRV_PCM_STREAM_PLAYBACK,
spear_pcm_hardware.buffer_bytes_max);
if (ret)
return ret;
}
- if (dai->driver->capture.channels_min) {
- ret = spear_pcm_preallocate_dma_buffer(pcm,
+ if (rtd->cpu_dai->driver->capture.channels_min) {
+ ret = spear_pcm_preallocate_dma_buffer(rtd->pcm,
SNDRV_PCM_STREAM_CAPTURE,
spear_pcm_hardware.buffer_bytes_max);
if (ret)
static const struct snd_pcm_hardware tegra_pcm_hardware = {
.info = SNDRV_PCM_INFO_MMAP |
SNDRV_PCM_INFO_MMAP_VALID |
- SNDRV_PCM_INFO_PAUSE |
- SNDRV_PCM_INFO_RESUME |
SNDRV_PCM_INFO_INTERLEAVED,
.formats = SNDRV_PCM_FMTBIT_S16_LE,
.channels_min = 2,
return 0;
}
-static int tegra_pcm_trigger(struct snd_pcm_substream *substream, int cmd)
-{
- switch (cmd) {
- case SNDRV_PCM_TRIGGER_START:
- case SNDRV_PCM_TRIGGER_RESUME:
- case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
- return snd_dmaengine_pcm_trigger(substream,
- SNDRV_PCM_TRIGGER_START);
-
- case SNDRV_PCM_TRIGGER_STOP:
- case SNDRV_PCM_TRIGGER_SUSPEND:
- case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
- return snd_dmaengine_pcm_trigger(substream,
- SNDRV_PCM_TRIGGER_STOP);
- default:
- return -EINVAL;
- }
- return 0;
-}
-
static int tegra_pcm_mmap(struct snd_pcm_substream *substream,
struct vm_area_struct *vma)
{
.ioctl = snd_pcm_lib_ioctl,
.hw_params = tegra_pcm_hw_params,
.hw_free = tegra_pcm_hw_free,
- .trigger = tegra_pcm_trigger,
+ .trigger = snd_dmaengine_pcm_trigger,
.pointer = snd_dmaengine_pcm_pointer,
.mmap = tegra_pcm_mmap,
};
{
struct usb_device *dev = chip->dev;
unsigned char data[4];
- int err, crate;
+ int err, cur_rate, prev_rate;
int clock = snd_usb_clock_find_source(chip, fmt->clock);
if (clock < 0)
return -ENXIO;
}
+ err = snd_usb_ctl_msg(dev, usb_rcvctrlpipe(dev, 0), UAC2_CS_CUR,
+ USB_TYPE_CLASS | USB_RECIP_INTERFACE | USB_DIR_IN,
+ UAC2_CS_CONTROL_SAM_FREQ << 8,
+ snd_usb_ctrl_intf(chip) | (clock << 8),
+ data, sizeof(data));
+ if (err < 0) {
+ snd_printk(KERN_WARNING "%d:%d:%d: cannot get freq (v2)\n",
+ dev->devnum, iface, fmt->altsetting);
+ prev_rate = 0;
+ } else {
+ prev_rate = data[0] | (data[1] << 8) | (data[2] << 16) | (data[3] << 24);
+ }
+
data[0] = rate;
data[1] = rate >> 8;
data[2] = rate >> 16;
return err;
}
- if ((err = snd_usb_ctl_msg(dev, usb_rcvctrlpipe(dev, 0), UAC2_CS_CUR,
- USB_TYPE_CLASS | USB_RECIP_INTERFACE | USB_DIR_IN,
- UAC2_CS_CONTROL_SAM_FREQ << 8,
- snd_usb_ctrl_intf(chip) | (clock << 8),
- data, sizeof(data))) < 0) {
+ err = snd_usb_ctl_msg(dev, usb_rcvctrlpipe(dev, 0), UAC2_CS_CUR,
+ USB_TYPE_CLASS | USB_RECIP_INTERFACE | USB_DIR_IN,
+ UAC2_CS_CONTROL_SAM_FREQ << 8,
+ snd_usb_ctrl_intf(chip) | (clock << 8),
+ data, sizeof(data));
+ if (err < 0) {
snd_printk(KERN_WARNING "%d:%d:%d: cannot get freq (v2)\n",
dev->devnum, iface, fmt->altsetting);
- return err;
+ cur_rate = 0;
+ } else {
+ cur_rate = data[0] | (data[1] << 8) | (data[2] << 16) | (data[3] << 24);
}
- crate = data[0] | (data[1] << 8) | (data[2] << 16) | (data[3] << 24);
- if (crate != rate)
- snd_printd(KERN_WARNING "current rate %d is different from the runtime rate %d\n", crate, rate);
+ if (cur_rate != rate) {
+ snd_printd(KERN_WARNING
+ "current rate %d is different from the runtime rate %d\n",
+ cur_rate, rate);
+ }
+
+ /* Some devices don't respond to sample rate changes while the
+ * interface is active. */
+ if (rate != prev_rate) {
+ usb_set_interface(dev, iface, 0);
+ usb_set_interface(dev, iface, fmt->altsetting);
+ }
return 0;
}
case UAC2_CLOCK_SELECTOR: {
struct uac_selector_unit_descriptor *d = p1;
/* call recursively to retrieve the channel info */
- if (check_input_term(state, d->baSourceID[0], term) < 0)
- return -ENODEV;
+ err = check_input_term(state, d->baSourceID[0], term);
+ if (err < 0)
+ return err;
term->type = d->bDescriptorSubtype << 16; /* virtual type */
term->id = id;
term->name = uac_selector_unit_iSelector(d);
case UAC1_PROCESSING_UNIT:
case UAC1_EXTENSION_UNIT:
/* UAC2_PROCESSING_UNIT_V2 */
- /* UAC2_EFFECT_UNIT */ {
+ /* UAC2_EFFECT_UNIT */
+ case UAC2_EXTENSION_UNIT_V2: {
struct uac_processing_unit_descriptor *d = p1;
if (state->mixer->protocol == UAC_VERSION_2 &&
return err;
/* determine the input source type and name */
- if (check_input_term(state, hdr->bSourceID, &iterm) < 0)
- return -EINVAL;
+ err = check_input_term(state, hdr->bSourceID, &iterm);
+ if (err < 0)
+ return err;
master_bits = snd_usb_combine_bytes(bmaControls, csize);
/* master configuration quirks */
return parse_audio_extension_unit(state, unitid, p1);
else /* UAC_VERSION_2 */
return parse_audio_processing_unit(state, unitid, p1);
+ case UAC2_EXTENSION_UNIT_V2:
+ return parse_audio_extension_unit(state, unitid, p1);
default:
snd_printk(KERN_ERR "usbaudio: unit %u: unexpected type 0x%02x\n", unitid, p1[2]);
return -EINVAL;
state.oterm.type = le16_to_cpu(desc->wTerminalType);
state.oterm.name = desc->iTerminal;
err = parse_audio_unit(&state, desc->bSourceID);
- if (err < 0)
+ if (err < 0 && err != -EINVAL)
return err;
} else { /* UAC_VERSION_2 */
struct uac2_output_terminal_descriptor *desc = p;
state.oterm.type = le16_to_cpu(desc->wTerminalType);
state.oterm.name = desc->iTerminal;
err = parse_audio_unit(&state, desc->bSourceID);
- if (err < 0)
+ if (err < 0 && err != -EINVAL)
return err;
/* for UAC2, use the same approach to also add the clock selectors */
err = parse_audio_unit(&state, desc->bCSourceID);
- if (err < 0)
+ if (err < 0 && err != -EINVAL)
return err;
}
}
else
ret = usb_control_msg(dev, usb_rcvctrlpipe(dev, 0), bRequest,
USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN,
- 0, cpu_to_le16(wIndex),
+ 0, wIndex,
&tmp, sizeof(tmp), 1000);
up_read(&mixer->chip->shutdown_rwsem);
else
ret = usb_control_msg(dev, usb_sndctrlpipe(dev, 0), bRequest,
USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_OUT,
- cpu_to_le16(wValue), cpu_to_le16(wIndex),
+ wValue, wIndex,
NULL, 0, 1000);
up_read(&mixer->chip->shutdown_rwsem);
{
int ret = usb_control_msg(dev, usb_sndctrlpipe(dev, 0),
0xaf, USB_TYPE_VENDOR | USB_RECIP_DEVICE,
- cpu_to_le16(1), 0, NULL, 0, 1000);
+ 1, 0, NULL, 0, 1000);
if (ret < 0)
return ret;
case 0x3C: /* HSW */
case 0x3F: /* HSW */
case 0x45: /* HSW */
+ case 0x46: /* HSW */
return 1;
case 0x2E: /* Nehalem-EX Xeon - Beckton */
case 0x2F: /* Westmere-EX Xeon - Eagleton */
case 0x3C: /* HSW */
case 0x3F: /* HSW */
case 0x45: /* HSW */
+ case 0x46: /* HSW */
do_rapl = RAPL_PKG | RAPL_CORES | RAPL_GFX;
break;
case 0x2D:
case 0x3C: /* HSW */
case 0x3F: /* HSW */
case 0x45: /* HSW */
+ case 0x46: /* HSW */
return 1;
}
return 0;
cmdline(argc, argv);
if (verbose)
- fprintf(stderr, "turbostat v3.2 February 11, 2013"
+ fprintf(stderr, "turbostat v3.3 March 15, 2013"
" - Len Brown <lenb@kernel.org>\n");
turbostat_init();
}
int kvm_gfn_to_hva_cache_init(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
- gpa_t gpa)
+ gpa_t gpa, unsigned long len)
{
struct kvm_memslots *slots = kvm_memslots(kvm);
int offset = offset_in_page(gpa);
- gfn_t gfn = gpa >> PAGE_SHIFT;
+ gfn_t start_gfn = gpa >> PAGE_SHIFT;
+ gfn_t end_gfn = (gpa + len - 1) >> PAGE_SHIFT;
+ gfn_t nr_pages_needed = end_gfn - start_gfn + 1;
+ gfn_t nr_pages_avail;
ghc->gpa = gpa;
ghc->generation = slots->generation;
- ghc->memslot = gfn_to_memslot(kvm, gfn);
- ghc->hva = gfn_to_hva_many(ghc->memslot, gfn, NULL);
- if (!kvm_is_error_hva(ghc->hva))
+ ghc->len = len;
+ ghc->memslot = gfn_to_memslot(kvm, start_gfn);
+ ghc->hva = gfn_to_hva_many(ghc->memslot, start_gfn, &nr_pages_avail);
+ if (!kvm_is_error_hva(ghc->hva) && nr_pages_avail >= nr_pages_needed) {
ghc->hva += offset;
- else
- return -EFAULT;
-
+ } else {
+ /*
+ * If the requested region crosses two memslots, we still
+ * verify that the entire region is valid here.
+ */
+ while (start_gfn <= end_gfn) {
+ ghc->memslot = gfn_to_memslot(kvm, start_gfn);
+ ghc->hva = gfn_to_hva_many(ghc->memslot, start_gfn,
+ &nr_pages_avail);
+ if (kvm_is_error_hva(ghc->hva))
+ return -EFAULT;
+ start_gfn += nr_pages_avail;
+ }
+ /* Use the slow path for cross page reads and writes. */
+ ghc->memslot = NULL;
+ }
return 0;
}
EXPORT_SYMBOL_GPL(kvm_gfn_to_hva_cache_init);
struct kvm_memslots *slots = kvm_memslots(kvm);
int r;
+ BUG_ON(len > ghc->len);
+
if (slots->generation != ghc->generation)
- kvm_gfn_to_hva_cache_init(kvm, ghc, ghc->gpa);
+ kvm_gfn_to_hva_cache_init(kvm, ghc, ghc->gpa, ghc->len);
+
+ if (unlikely(!ghc->memslot))
+ return kvm_write_guest(kvm, ghc->gpa, data, len);
if (kvm_is_error_hva(ghc->hva))
return -EFAULT;
struct kvm_memslots *slots = kvm_memslots(kvm);
int r;
+ BUG_ON(len > ghc->len);
+
if (slots->generation != ghc->generation)
- kvm_gfn_to_hva_cache_init(kvm, ghc, ghc->gpa);
+ kvm_gfn_to_hva_cache_init(kvm, ghc, ghc->gpa, ghc->len);
+
+ if (unlikely(!ghc->memslot))
+ return kvm_read_guest(kvm, ghc->gpa, data, len);
if (kvm_is_error_hva(ghc->hva))
return -EFAULT;