00-INDEX
- This file
-as-iosched.txt
- - Anticipatory IO scheduler
barrier.txt
- I/O Barriers
biodoc.txt
+++ /dev/null
-Anticipatory IO scheduler
--------------------------
-Nick Piggin <piggin@cyberone.com.au> 13 Sep 2003
-
-Attention! Database servers, especially those using "TCQ" disks should
-investigate performance with the 'deadline' IO scheduler. Any system with high
-disk performance requirements should do so, in fact.
-
-If you see unusual performance characteristics of your disk systems, or you
-see big performance regressions versus the deadline scheduler, please email
-me. Database users don't bother unless you're willing to test a lot of patches
-from me ;) its a known issue.
-
-Also, users with hardware RAID controllers, doing striping, may find
-highly variable performance results with using the as-iosched. The
-as-iosched anticipatory implementation is based on the notion that a disk
-device has only one physical seeking head. A striped RAID controller
-actually has a head for each physical device in the logical RAID device.
-
-However, setting the antic_expire (see tunable parameters below) produces
-very similar behavior to the deadline IO scheduler.
-
-Selecting IO schedulers
------------------------
-Refer to Documentation/block/switching-sched.txt for information on
-selecting an io scheduler on a per-device basis.
-
-Anticipatory IO scheduler Policies
-----------------------------------
-The as-iosched implementation implements several layers of policies
-to determine when an IO request is dispatched to the disk controller.
-Here are the policies outlined, in order of application.
-
-1. one-way Elevator algorithm.
-
-The elevator algorithm is similar to that used in deadline scheduler, with
-the addition that it allows limited backward movement of the elevator
-(i.e. seeks backwards). A seek backwards can occur when choosing between
-two IO requests where one is behind the elevator's current position, and
-the other is in front of the elevator's position. If the seek distance to
-the request in back of the elevator is less than half the seek distance to
-the request in front of the elevator, then the request in back can be chosen.
-Backward seeks are also limited to a maximum of MAXBACK (1024*1024) sectors.
-This favors forward movement of the elevator, while allowing opportunistic
-"short" backward seeks.
-
-2. FIFO expiration times for reads and for writes.
-
-This is again very similar to the deadline IO scheduler. The expiration
-times for requests on these lists is tunable using the parameters read_expire
-and write_expire discussed below. When a read or a write expires in this way,
-the IO scheduler will interrupt its current elevator sweep or read anticipation
-to service the expired request.
-
-3. Read and write request batching
-
-A batch is a collection of read requests or a collection of write
-requests. The as scheduler alternates dispatching read and write batches
-to the driver. In the case a read batch, the scheduler submits read
-requests to the driver as long as there are read requests to submit, and
-the read batch time limit has not been exceeded (read_batch_expire).
-The read batch time limit begins counting down only when there are
-competing write requests pending.
-
-In the case of a write batch, the scheduler submits write requests to
-the driver as long as there are write requests available, and the
-write batch time limit has not been exceeded (write_batch_expire).
-However, the length of write batches will be gradually shortened
-when read batches frequently exceed their time limit.
-
-When changing between batch types, the scheduler waits for all requests
-from the previous batch to complete before scheduling requests for the
-next batch.
-
-The read and write fifo expiration times described in policy 2 above
-are checked only when in scheduling IO of a batch for the corresponding
-(read/write) type. So for example, the read FIFO timeout values are
-tested only during read batches. Likewise, the write FIFO timeout
-values are tested only during write batches. For this reason,
-it is generally not recommended for the read batch time
-to be longer than the write expiration time, nor for the write batch
-time to exceed the read expiration time (see tunable parameters below).
-
-When the IO scheduler changes from a read to a write batch,
-it begins the elevator from the request that is on the head of the
-write expiration FIFO. Likewise, when changing from a write batch to
-a read batch, scheduler begins the elevator from the first entry
-on the read expiration FIFO.
-
-4. Read anticipation.
-
-Read anticipation occurs only when scheduling a read batch.
-This implementation of read anticipation allows only one read request
-to be dispatched to the disk controller at a time. In
-contrast, many write requests may be dispatched to the disk controller
-at a time during a write batch. It is this characteristic that can make
-the anticipatory scheduler perform anomalously with controllers supporting
-TCQ, or with hardware striped RAID devices. Setting the antic_expire
-queue parameter (see below) to zero disables this behavior, and the
-anticipatory scheduler behaves essentially like the deadline scheduler.
-
-When read anticipation is enabled (antic_expire is not zero), reads
-are dispatched to the disk controller one at a time.
-At the end of each read request, the IO scheduler examines its next
-candidate read request from its sorted read list. If that next request
-is from the same process as the request that just completed,
-or if the next request in the queue is "very close" to the
-just completed request, it is dispatched immediately. Otherwise,
-statistics (average think time, average seek distance) on the process
-that submitted the just completed request are examined. If it seems
-likely that that process will submit another request soon, and that
-request is likely to be near the just completed request, then the IO
-scheduler will stop dispatching more read requests for up to (antic_expire)
-milliseconds, hoping that process will submit a new request near the one
-that just completed. If such a request is made, then it is dispatched
-immediately. If the antic_expire wait time expires, then the IO scheduler
-will dispatch the next read request from the sorted read queue.
-
-To decide whether an anticipatory wait is worthwhile, the scheduler
-maintains statistics for each process that can be used to compute
-mean "think time" (the time between read requests), and mean seek
-distance for that process. One observation is that these statistics
-are associated with each process, but those statistics are not associated
-with a specific IO device. So for example, if a process is doing IO
-on several file systems on separate devices, the statistics will be
-a combination of IO behavior from all those devices.
-
-
-Tuning the anticipatory IO scheduler
-------------------------------------
-When using 'as', the anticipatory IO scheduler there are 5 parameters under
-/sys/block/*/queue/iosched/. All are units of milliseconds.
-
-The parameters are:
-* read_expire
- Controls how long until a read request becomes "expired". It also controls the
- interval between which expired requests are served, so set to 50, a request
- might take anywhere < 100ms to be serviced _if_ it is the next on the
- expired list. Obviously request expiration strategies won't make the disk
- go faster. The result basically equates to the timeslice a single reader
- gets in the presence of other IO. 100*((seek time / read_expire) + 1) is
- very roughly the % streaming read efficiency your disk should get with
- multiple readers.
-
-* read_batch_expire
- Controls how much time a batch of reads is given before pending writes are
- served. A higher value is more efficient. This might be set below read_expire
- if writes are to be given higher priority than reads, but reads are to be
- as efficient as possible when there are no writes. Generally though, it
- should be some multiple of read_expire.
-
-* write_expire, and
-* write_batch_expire are equivalent to the above, for writes.
-
-* antic_expire
- Controls the maximum amount of time we can anticipate a good read (one
- with a short seek distance from the most recently completed request) before
- giving up. Many other factors may cause anticipation to be stopped early,
- or some processes will not be "anticipated" at all. Should be a bit higher
- for big seek time devices though not a linear correspondence - most
- processes have only a few ms thinktime.
-
-In addition to the tunables above there is a read-only file named est_time
-which, when read, will show:
-
- - The probability of a task exiting without a cooperating task
- submitting an anticipated IO.
-
- - The current mean think time.
-
- - The seek distance used to determine if an incoming IO is better.
-
also be used to enable or disable barriers, for
consistency with other ext4 mount options.
-inode_readahead=n This tuning parameter controls the maximum
+inode_readahead_blks=n This tuning parameter controls the maximum
number of inode table blocks that ext4's inode
table readahead algorithm will pre-read into
the buffer cache. The default value is 32 blocks.
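              As a purely illustrative example (the device and mount
              point below are placeholders, not taken from this
              document), the readahead window could be widened at
              mount time with:
                  mount -t ext4 -o inode_readahead_blks=64 /dev/sdXY /mnt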
acpi_sleep= [HW,ACPI] Sleep options
Format: { s3_bios, s3_mode, s3_beep, s4_nohwsig,
- old_ordering, s4_nonvs }
+ old_ordering, s4_nonvs, sci_force_enable }
See Documentation/power/video.txt for information on
s3_bios and s3_mode.
s3_beep is for debugging; it makes the PC's speaker beep
of _PTS is used by default).
s4_nonvs prevents the kernel from saving/restoring the
ACPI NVS memory during hibernation.
+ sci_force_enable causes the kernel to set SCI_EN directly
+ on resume from S1/S3 (which is against the ACPI spec,
+ but some broken systems don't work without it).
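		For illustration only (a hypothetical command line, not
		taken from this file), several of these options can be
		combined, e.g.:
			acpi_sleep=s3_bios,sci_force_enable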
acpi_use_timer_override [HW,ACPI]
Use timer override. For some broken Nvidia NF5 boards
__u8 pad;
} nmi;
__u32 sipi_vector;
- __u32 flags; /* must be zero */
+ __u32 flags;
};
4.30 KVM_SET_VCPU_EVENTS
See KVM_GET_VCPU_EVENTS for the data structure.
+Fields that may be modified asynchronously by running VCPUs can be excluded
+from the update. These fields are nmi.pending and sipi_vector. Keep the
+corresponding bits in the flags field cleared to suppress overwriting the
+current in-kernel state. The bits are:
+
+KVM_VCPUEVENT_VALID_NMI_PENDING - transfer nmi.pending to the kernel
+KVM_VCPUEVENT_VALID_SIPI_VECTOR - transfer sipi_vector
+
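A minimal userspace sketch of such a selective update, assuming a VCPU
file descriptor (vcpu_fd) already obtained via KVM_CREATE_VCPU and
<linux/kvm.h> included (error handling trimmed):

	struct kvm_vcpu_events events;

	ioctl(vcpu_fd, KVM_GET_VCPU_EVENTS, &events);

	/* Update only nmi.pending; the in-kernel sipi_vector is
	 * preserved because its valid bit is left cleared. */
	events.nmi.pending = 0;
	events.flags = KVM_VCPUEVENT_VALID_NMI_PENDING;

	ioctl(vcpu_fd, KVM_SET_VCPU_EVENTS, &events);
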
5. The kvm_run structure
its level up and down at every change.
-Volume control
---------------
+Volume control (Console Audio control)
+--------------------------------------
procfs: /proc/acpi/ibm/volume
ALSA: "ThinkPad Console Audio Control", default ID: "ThinkPadEC"
Software volume control should be done only in the main AC97/HDA
mixer.
-This feature allows volume control on ThinkPad models with a digital
-volume knob (when available, not all models have it), as well as
-mute/unmute control. The available commands are:
+
+About the ThinkPad Console Audio control:
+
+ThinkPads have a built-in amplifier and muting circuit that drives the
+console headphone and speakers. This circuit is after the main AC97
+or HDA mixer in the audio path, and under exclusive control of the
+firmware.
+
+ThinkPads have three special hotkeys to interact with the console
+audio control: volume up, volume down and mute.
+
+It is worth noting that the normal way the mute function works (on
+ThinkPads that do not have a "mute LED") is:
+
+1. Press mute to mute. It will *always* mute, you can press it as
+ many times as you want, and the sound will remain mute.
+
+2. Press either volume key to unmute the ThinkPad (it will _not_
+ change the volume, it will just unmute).
+
+This is a much better design than the cheap software-only mute toggle
+found on normal consumer laptops: you can be absolutely sure the
+ThinkPad will not make noise if you press the mute button, no matter
+what the previous state was.
+
+The IBM ThinkPads and the earlier Lenovo ThinkPads have variable-gain
+amplifiers driving the speakers and headphone output, and the firmware
+also handles volume control for the headphone and speakers on these
+ThinkPads without any help from the operating system (this volume
+control stage exists after the main AC97 or HDA mixer in the audio
+path).
+
+The newer Lenovo models only have firmware mute control, and depend on
+the main HDA mixer to do volume control (which is done by the operating
+system). In this case, the volume keys are filtered out for unmute
+key press (there are some firmware bugs in this area) and delivered as
+normal key presses to the operating system (thinkpad-acpi is not
+involved).
+
+
+The ThinkPad-ACPI volume control:
+
+The preferred way to interact with the Console Audio control is the
+ALSA interface.
+
+The legacy procfs interface allows one to read the current state,
+and if volume control is enabled, accepts the following commands:
echo up >/proc/acpi/ibm/volume
echo down >/proc/acpi/ibm/volume
echo 'level <level>' >/proc/acpi/ibm/volume
The <level> number range is 0 to 14 although not all of them may be
-distinct. The unmute the volume after the mute command, use either the
+distinct. To unmute the volume after the mute command, use either the
up or down command (the level command will not unmute the volume), or
the unmute command.
-The current volume level and mute state is shown in the file.
-
You can use the volume_capabilities parameter to tell the driver
whether your thinkpad has volume control or mute-only control:
volume_capabilities=1 for mixers with mute and volume control,
It takes an integer value, can be changed by writing to this
file, such as
- # cat 5 > /proc/asound/card0/pcm0p/xrun_debug
+ # echo 5 > /proc/asound/card0/pcm0p/xrun_debug
The value consists of the following bit flags:
bit 0 = Enable XRUN/jiffies debug messages
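The current value can be read back from the same file, and debugging
can be switched off again by writing zero, for example:
   # cat /proc/asound/card0/pcm0p/xrun_debug
   # echo 0 > /proc/asound/card0/pcm0p/xrun_debug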
For example, if the function foo() calls bar(), when the bar() function calls
mcount(), the arguments mcount() will pass to the tracer are:
"frompc" - the address bar() will use to return to foo()
- "selfpc" - the address bar() (with _mcount() size adjustment)
+ "selfpc" - the address bar() (with mcount() size adjustment)
Also keep in mind that this mcount function will be called *a lot*, so
optimizing for the default case of no tracer will help the smooth running of
your system when tracing is disabled. So the start of the mcount function is
-typically the bare min with checking things before returning. That also means
-the code flow should usually kept linear (i.e. no branching in the nop case).
-This is of course an optimization and not a hard requirement.
+typically the bare minimum with checking things before returning. That also
+means the code flow should usually be kept linear (i.e. no branching in the nop
+case). This is of course an optimization and not a hard requirement.
Here is some pseudo code that should help (these functions should actually be
implemented in assembly):
The mcount function should check the function pointers ftrace_graph_return
(compare to ftrace_stub) and ftrace_graph_entry (compare to
-ftrace_graph_entry_stub). If either of those are not set to the relevant stub
+ftrace_graph_entry_stub). If either of those is not set to the relevant stub
function, call the arch-specific function ftrace_graph_caller which in turn
calls the arch-specific function prepare_ftrace_return. Neither of these
-function names are strictly required, but you should use them anyways to stay
+function names is strictly required, but you should use them anyway to stay
consistent across the architecture ports -- easier to compare & contrast
things.
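Expressed as C-like pseudo code (the real check is written in assembly,
and the symbol names simply follow the description above), the test
amounts to:

	if (ftrace_graph_return != ftrace_stub ||
	    ftrace_graph_entry != ftrace_graph_entry_stub)
		ftrace_graph_caller();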
located on the stack. This allows the function to hijack the return address
temporarily to have it point to the arch-specific function return_to_handler.
That function will simply call the common ftrace_return_to_handler function and
-that will return the original return address with which, you can return to the
+that will return the original return address with which you can return to the
original call site.
Here is the updated mcount pseudo code:
Usage
-----
-Make sure debugfs is mounted to /sys/kernel/debug. If not, (requires root privileges)
+Make sure debugfs is mounted to /sys/kernel/debug.
+If not (requires root privileges):
$ mount -t debugfs debugfs /sys/kernel/debug
Check that the driver you are about to trace is not loaded.
$ tar zcf pciid-nick-mmiotrace.tar.gz mydump.txt lspci.txt dmesg.txt
and then send the .tar.gz file. The trace compresses considerably. Replace
"pciid" and "nick" with the PCI ID or model name of your piece of hardware
-under investigation and your nick name.
+under investigation and your nickname.
How Mmiotrace Works
Access to hardware IO-memory is gained by mapping addresses from PCI bus by
calling one of the ioremap_*() functions. Mmiotrace is hooked into the
__ioremap() function and gets called whenever a mapping is created. Mapping is
-an event that is recorded into the trace log. Note, that ISA range mappings
+an event that is recorded into the trace log. Note that ISA range mappings
are not caught, since the mapping always exists and is returned directly.
MMIO accesses are recorded via page faults. Just before __ioremap() returns,
----------------
The raw log is text and easily filtered with e.g. grep and awk. One record is
-one line in the log. A record starts with a keyword, followed by keyword
-dependant arguments. Arguments are separated by a space, or continue until the
+one line in the log. A record starts with a keyword, followed by keyword-
+dependent arguments. Arguments are separated by a space, or continue until the
end of line. The format for version 20070824 is as follows:
-Explanation Keyword Space separated arguments
+Explanation Keyword Space-separated arguments
---------------------------------------------------------------------------
read event R width, timestamp, map id, physical, value, PC, PID
marker MARK timestamp, text
version VERSION the string "20070824"
info for reader LSPCI one line from lspci -v
-PCI address map PCIDEV space separated /proc/bus/pci/devices data
+PCI address map PCIDEV space-separated /proc/bus/pci/devices data
unk. opcode UNKNOWN timestamp, map id, physical, data, PC, PID
Timestamp is in seconds with decimals. Physical is a PCI bus address, virtual
creating custom kernel modules to register probe functions using the event
tracing infrastructure.
-Simplistically, tracepoints will represent an important event that when can
-be taken in conjunction with other tracepoints to build a "Big Picture" of
+Simplistically, tracepoints represent important events that can be
+taken in conjunction with other tracepoints to build a "Big Picture" of
what is going on within the system. There are a large number of methods for
gathering and interpreting these events. Lacking any current Best Practices,
this document describes some of the methods that can be used.
will give a fair indication of the number of events available.
-2.2 PCL
+2.2 PCL (Performance Counters for Linux)
-------
-Discovery and enumeration of all counters and events, including tracepoints
+Discovery and enumeration of all counters and events, including tracepoints,
are available with the perf tool. Getting a list of available events is a
-simple case of
+simple case of:
$ perf list 2>&1 | grep Tracepoint
ext4:ext4_free_inode [Tracepoint event]
[ .... remaining output snipped .... ]
-2. Enabling Events
+3. Enabling Events
==================
-2.1 System-Wide Event Enabling
+3.1 System-Wide Event Enabling
------------------------------
See Documentation/trace/events.txt for a proper description on how events
can be enabled system-wide. A short example of enabling all events related
-to page allocation would look something like
+to page allocation would look something like:
$ for i in `find /sys/kernel/debug/tracing/events -name "enable" | grep mm_`; do echo 1 > $i; done
-2.2 System-Wide Event Enabling with SystemTap
+3.2 System-Wide Event Enabling with SystemTap
---------------------------------------------
In SystemTap, tracepoints are accessible using the kernel.trace() function
print_count()
}
-2.3 System-Wide Event Enabling with PCL
+3.3 System-Wide Event Enabling with PCL
---------------------------------------
By specifying the -a switch and analysing sleep, the system-wide events
Similarly, one could execute a shell and exit it as desired to get a report
at that point.
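For instance, system-wide page allocation events could be counted over a
ten second window with something like:
$ perf stat -a -e kmem:mm_page_alloc sleep 10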
-2.4 Local Event Enabling
+3.4 Local Event Enabling
------------------------
Documentation/trace/ftrace.txt describes how to enable events on a per-thread
basis using set_ftrace_pid.
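For example, to restrict tracing to a single process (hypothetical PID shown):
$ echo 1234 > /sys/kernel/debug/tracing/set_ftrace_pid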
-2.5 Local Event Enablement with PCL
+3.5 Local Event Enablement with PCL
-----------------------------------
-Events can be activate and tracked for the duration of a process on a local
+Events can be activated and tracked for the duration of a process on a local
basis using PCL such as follows.
$ perf stat -e kmem:mm_page_alloc -e kmem:mm_page_free_direct \
0.973913387 seconds time elapsed
-3. Event Filtering
+4. Event Filtering
==================
Documentation/trace/ftrace.txt covers in-depth how to filter events in
ftrace. Obviously using grep and awk of trace_pipe is an option as well
as any script reading trace_pipe.
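For example, a crude filter for the page allocation events used throughout
this document could be as simple as:
$ cat /sys/kernel/debug/tracing/trace_pipe | grep mm_page_alloc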
-4. Analysing Event Variances with PCL
+5. Analysing Event Variances with PCL
=====================================
Any workload can exhibit variances between runs and it can be important
-to know what the standard deviation in. By and large, this is left to the
+to know what the standard deviation is. By and large, this is left to the
performance analyst to do by hand. In the event that the discrete event
occurrences are useful to the performance analyst, then perf can be used.
aggregation of discrete events, then a script would need to be developed.
Using --repeat, it is also possible to view how events are fluctuating over
-time on a system wide basis using -a and sleep.
+time on a system-wide basis using -a and sleep.
$ perf stat -e kmem:mm_page_alloc -e kmem:mm_page_free_direct \
-e kmem:mm_pagevec_free \
1.002251757 seconds time elapsed ( +- 0.005% )
-5. Higher-Level Analysis with Helper Scripts
+6. Higher-Level Analysis with Helper Scripts
============================================
When events are enabled the events that are triggering can be read from
o Reading information from /proc for the PID that triggered the event
o Deriving a higher-level event from a series of lower-level events.
- o Calculate latencies between two events
+ o Calculating latencies between two events
Documentation/trace/postprocess/trace-pagealloc-postprocess.pl is an example
script that can read trace_pipe from STDIN or a copy of a trace. When used
-on-line, it can be interrupted once to generate a report without existing
+on-line, it can be interrupted once to generate a report without exiting
and twice to exit.
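A typical on-line invocation, reading the live trace from within a kernel
source tree, might look like:
$ cat /sys/kernel/debug/tracing/trace_pipe | \
	perl Documentation/trace/postprocess/trace-pagealloc-postprocess.pl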
Simplistically, the script just reads STDIN and counts up events but it
processes, the parent process responsible for creating all the helpers
can be identified
-6. Lower-Level Analysis with PCL
+7. Lower-Level Analysis with PCL
================================
-There may also be a requirement to identify what functions with a program
+There may also be a requirement to identify what functions within a program
were generating events within the kernel. To begin this sort of analysis, the
-data must be recorded. At the time of writing, this required root
+data must be recorded. At the time of writing, this required root:
$ perf record -c 1 \
-e kmem:mm_page_alloc -e kmem:mm_page_free_direct \
# (For more details, try: perf report --sort comm,dso,symbol)
#
-According to this, the vast majority of events occured triggered on events
-within the VDSO. With simple binaries, this will often be the case so lets
+According to this, the vast majority of events were triggered within
+the VDSO. With simple binaries, this will often be the case so let's
take a slightly different example. In the course of writing this, it was
-noticed that X was generating an insane amount of page allocations so lets look
-at it
+noticed that X was generating an insane amount of page allocations so let's look
+at it:
$ perf record -c 1 -f \
-e kmem:mm_page_alloc -e kmem:mm_page_free_direct \
# (For more details, try: perf report --sort comm,dso,symbol)
#
-So, almost half of the events are occuring in a library. To get an idea which
-symbol.
+So, almost half of the events are occurring in a library. To get an idea which
+symbol:
$ perf report --sort comm,dso,symbol
# Samples: 27666
0.01% Xorg /opt/gfx-test/lib/libpixman-1.so.0.13.1 [.] get_fast_path
0.00% Xorg [kernel] [k] ftrace_trace_userstack
-To see where within the function pixmanFillsse2 things are going wrong
+To see where within the function pixmanFillsse2 things are going wrong:
$ perf annotate pixmanFillsse2
[ ... ]
----------------
To use the vga arbiter char device, an API was implemented inside the
-libpciaccess library. One fieldd was added to struct pci_device (each device
+libpciaccess library. One field was added to struct pci_device (each device
on the system):
/* the type of resource decoded by the device */
F: include/linux/f75375s.h
FIREWIRE SUBSYSTEM
-M: Kristian Hoegsberg <krh@redhat.com>
M: Stefan Richter <stefanr@s5r6.in-berlin.de>
L: linux1394-devel@lists.sourceforge.net
-W: http://www.linux1394.org/
+W: http://ieee1394.wiki.kernel.org/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/ieee1394/linux1394-2.6.git
S: Maintained
F: drivers/firewire/
F: drivers/idle/i7300_idle.c
IEEE 1394 SUBSYSTEM
-M: Ben Collins <ben.collins@ubuntu.com>
M: Stefan Richter <stefanr@s5r6.in-berlin.de>
L: linux1394-devel@lists.sourceforge.net
-W: http://www.linux1394.org/
+W: http://ieee1394.wiki.kernel.org/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/ieee1394/linux1394-2.6.git
-S: Maintained
+S: Obsolete
F: Documentation/debugging-via-ohci1394.txt
F: drivers/ieee1394/
-IEEE 1394 RAW I/O DRIVER
-M: Dan Dennedy <dan@dennedy.org>
-M: Stefan Richter <stefanr@s5r6.in-berlin.de>
-L: linux1394-devel@lists.sourceforge.net
-S: Maintained
-F: drivers/ieee1394/raw1394*
-
IEEE 802.15.4 SUBSYSTEM
M: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com>
M: Sergey Lapin <slapin@ossfans.org>
#include "common.h"
-#define ARRAY_AND_SIZE(x) (x), ARRAY_SIZE(x)
-
static unsigned long ttc_dkb_pin_config[] __initdata = {
/* UART2 */
GPIO47_UART2_RXD,
config MACH_LITTLETON
bool "PXA3xx Form Factor Platform (aka Littleton)"
select PXA3xx
+ select CPU_PXA300
+ select CPU_PXA310
select PXA_SSP
config MACH_TAVOREVB
/* the following variables are processor specific and initialized
* by the corresponding zylonite_pxa3xx_init()
*/
-struct platform_mmc_slot {
- int gpio_cd;
- int gpio_wp;
-};
-
-extern struct platform_mmc_slot zylonite_mmc_slot[];
-
extern int gpio_eth_irq;
extern int gpio_debug_led1;
extern int gpio_debug_led2;
GPIO7_MMC1_CLK,
GPIO8_MMC1_CMD,
GPIO15_GPIO, /* card detect */
+
+ /* UART3 */
+ GPIO107_UART3_CTS,
+ GPIO108_UART3_RTS,
+ GPIO109_UART3_TXD,
+ GPIO110_UART3_RXD,
};
static struct resource smc91x_resources[] = {
.init = poodle_mci_init,
.setpower = poodle_mci_setpower,
.exit = poodle_mci_exit,
- .gpio_card_detect = POODLE_IRQ_GPIO_nSD_DETECT,
+ .gpio_card_detect = POODLE_GPIO_nSD_DETECT,
.gpio_card_ro = POODLE_GPIO_nSD_WP,
.gpio_power = -1,
};
#include <linux/mtd/physmap.h>
#include <linux/i2c.h>
#include <linux/i2c/pca953x.h>
+#include <linux/apm-emulation.h>
#include <asm/mach-types.h>
#include <asm/mach/arch.h>
pxa27x_cpu_suspend(PWRMODE_DEEPSLEEP);
}
-int zeus_get_pcb_info(struct i2c_client *client, unsigned gpio,
- unsigned ngpio, void *context)
+#ifdef CONFIG_APM_EMULATION
+static void zeus_get_power_status(struct apm_power_info *info)
+{
+ /* Power supply is always present */
+ info->ac_line_status = APM_AC_ONLINE;
+ info->battery_status = APM_BATTERY_STATUS_NOT_PRESENT;
+ info->battery_flag = APM_BATTERY_FLAG_NOT_PRESENT;
+}
+
+static inline void zeus_setup_apm(void)
+{
+ apm_get_power_status = zeus_get_power_status;
+}
+#else
+static inline void zeus_setup_apm(void)
+{
+}
+#endif
+
+static int zeus_get_pcb_info(struct i2c_client *client, unsigned gpio,
+ unsigned ngpio, void *context)
{
int i;
u8 pcb_info = 0;
GPIO99_GPIO, /* CF RDY */
};
+/*
+ * DM9k MSCx settings: SRAM, 16 bits
+ * 17 cycles delay first access
+ * 5 cycles delay next access
+ * 13 cycles recovery time
+ * faster device
+ */
+#define DM9K_MSC_VALUE 0xe4c9
+
static void __init zeus_init(void)
{
- u16 dm9000_msc = 0xe279;
+ u16 dm9000_msc = DM9K_MSC_VALUE;
system_rev = __raw_readw(ZEUS_CPLD_VERSION);
pr_info("Zeus CPLD V%dI%d\n", (system_rev & 0xf0) >> 4, (system_rev & 0x0f));
MSC1 = (MSC1 & 0xffff0000) | dm9000_msc;
pm_power_off = zeus_power_off;
+ zeus_setup_apm();
pxa2xx_mfp_config(ARRAY_AND_SIZE(zeus_pin_config));
#include "devices.h"
#include "generic.h"
-#define MAX_SLOTS 3
-struct platform_mmc_slot zylonite_mmc_slot[MAX_SLOTS];
-
int gpio_eth_irq;
int gpio_debug_led1;
int gpio_debug_led2;
#endif
#if defined(CONFIG_MMC)
-static int zylonite_mci_ro(struct device *dev)
-{
- struct platform_device *pdev = to_platform_device(dev);
-
- return gpio_get_value(zylonite_mmc_slot[pdev->id].gpio_wp);
-}
-
-static int zylonite_mci_init(struct device *dev,
- irq_handler_t zylonite_detect_int,
- void *data)
-{
- struct platform_device *pdev = to_platform_device(dev);
- int err, cd_irq, gpio_cd, gpio_wp;
-
- cd_irq = gpio_to_irq(zylonite_mmc_slot[pdev->id].gpio_cd);
- gpio_cd = zylonite_mmc_slot[pdev->id].gpio_cd;
- gpio_wp = zylonite_mmc_slot[pdev->id].gpio_wp;
-
- /*
- * setup GPIO for Zylonite MMC controller
- */
- err = gpio_request(gpio_cd, "mmc card detect");
- if (err)
- goto err_request_cd;
- gpio_direction_input(gpio_cd);
-
- err = gpio_request(gpio_wp, "mmc write protect");
- if (err)
- goto err_request_wp;
- gpio_direction_input(gpio_wp);
-
- err = request_irq(cd_irq, zylonite_detect_int,
- IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING,
- "MMC card detect", data);
- if (err) {
- printk(KERN_ERR "%s: MMC/SD/SDIO: "
- "can't request card detect IRQ\n", __func__);
- goto err_request_irq;
- }
-
- return 0;
-
-err_request_irq:
- gpio_free(gpio_wp);
-err_request_wp:
- gpio_free(gpio_cd);
-err_request_cd:
- return err;
-}
-
-static void zylonite_mci_exit(struct device *dev, void *data)
-{
- struct platform_device *pdev = to_platform_device(dev);
- int cd_irq, gpio_cd, gpio_wp;
-
- cd_irq = gpio_to_irq(zylonite_mmc_slot[pdev->id].gpio_cd);
- gpio_cd = zylonite_mmc_slot[pdev->id].gpio_cd;
- gpio_wp = zylonite_mmc_slot[pdev->id].gpio_wp;
-
- free_irq(cd_irq, data);
- gpio_free(gpio_cd);
- gpio_free(gpio_wp);
-}
-
static struct pxamci_platform_data zylonite_mci_platform_data = {
.detect_delay = 20,
.ocr_mask = MMC_VDD_32_33|MMC_VDD_33_34,
- .init = zylonite_mci_init,
- .exit = zylonite_mci_exit,
- .get_ro = zylonite_mci_ro,
- .gpio_card_detect = -1,
- .gpio_card_ro = -1,
+ .gpio_card_detect = EXT_GPIO(0),
+ .gpio_card_ro = EXT_GPIO(2),
.gpio_power = -1,
};
static struct pxamci_platform_data zylonite_mci2_platform_data = {
.detect_delay = 20,
.ocr_mask = MMC_VDD_32_33|MMC_VDD_33_34,
+ .gpio_card_detect = EXT_GPIO(1),
+ .gpio_card_ro = EXT_GPIO(3),
+ .gpio_power = -1,
+};
+
+static struct pxamci_platform_data zylonite_mci3_platform_data = {
+ .detect_delay = 20,
+ .ocr_mask = MMC_VDD_32_33|MMC_VDD_33_34,
+ .gpio_card_detect = EXT_GPIO(30),
+ .gpio_card_ro = EXT_GPIO(31),
+ .gpio_power = -1,
};
static void __init zylonite_init_mmc(void)
pxa_set_mci_info(&zylonite_mci_platform_data);
pxa3xx_set_mci2_info(&zylonite_mci2_platform_data);
if (cpu_is_pxa310())
- pxa3xx_set_mci3_info(&zylonite_mci_platform_data);
+ pxa3xx_set_mci3_info(&zylonite_mci3_platform_data);
}
#else
static inline void zylonite_init_mmc(void) {}
GPIO22_I2C_SDA,
/* GPIO */
- GPIO18_GPIO, /* GPIO Expander #0 INT_N */
- GPIO19_GPIO, /* GPIO Expander #1 INT_N */
+ GPIO18_GPIO | MFP_PULL_HIGH, /* GPIO Expander #0 INT_N */
+ GPIO19_GPIO | MFP_PULL_HIGH, /* GPIO Expander #1 INT_N */
};
static mfp_cfg_t pxa300_mfp_cfg[] __initdata = {
/* detect LCD panel */
zylonite_detect_lcd_panel();
- /* MMC card detect & write protect for controller 0 */
- zylonite_mmc_slot[0].gpio_cd = EXT_GPIO(0);
- zylonite_mmc_slot[0].gpio_wp = EXT_GPIO(2);
-
/* WM9713 IRQ */
wm9713_irq = mfp_to_gpio(MFP_PIN_GPIO26);
if (cpu_is_pxa310()) {
pxa3xx_mfp_config(ARRAY_AND_SIZE(pxa310_mfp_cfg));
gpio_eth_irq = mfp_to_gpio(MFP_PIN_GPIO102);
-
- /* MMC card detect & write protect for controller 2 */
- zylonite_mmc_slot[2].gpio_cd = EXT_GPIO(30);
- zylonite_mmc_slot[2].gpio_wp = EXT_GPIO(31);
}
/* GPIOs for Debug LEDs */
gpio_debug_led1 = mfp_to_gpio(MFP_PIN_GPIO1_2);
gpio_debug_led2 = mfp_to_gpio(MFP_PIN_GPIO4_2);
- /* MMC card detect & write protect for controller 0 */
- zylonite_mmc_slot[0].gpio_cd = mfp_to_gpio(MFP_PIN_GPIO1);
- zylonite_mmc_slot[0].gpio_wp = mfp_to_gpio(MFP_PIN_GPIO5);
-
/* WM9713 IRQ */
wm9713_irq = mfp_to_gpio(MFP_PIN_GPIO15);
}
goto err_free_clk;
}
- r = request_mem_region(r->start, r->end - r->start + 1, pdev->name);
+ r = request_mem_region(r->start, resource_size(r), pdev->name);
if (r == NULL) {
dev_err(&pdev->dev, "failed to request memory resource\n");
ret = -EBUSY;
goto err_free_clk;
}
- pwm->mmio_base = ioremap(r->start, r->end - r->start + 1);
+ pwm->mmio_base = ioremap(r->start, resource_size(r));
if (pwm->mmio_base == NULL) {
dev_err(&pdev->dev, "failed to ioremap() registers\n");
ret = -ENODEV;
return 0;
err_free_mem:
- release_mem_region(r->start, r->end - r->start + 1);
+ release_mem_region(r->start, resource_size(r));
err_free_clk:
clk_put(pwm->clk);
err_free:
iounmap(pwm->mmio_base);
r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- release_mem_region(r->start, r->end - r->start + 1);
+ release_mem_region(r->start, resource_size(r));
clk_put(pwm->clk);
kfree(pwm);
#define _vmm_raw_spin_lock(x) do {}while(0)
#define _vmm_raw_spin_unlock(x) do {}while(0)
#else
+typedef struct {
+ volatile unsigned int lock;
+} vmm_spinlock_t;
#define _vmm_raw_spin_lock(x) \
do { \
__u32 *ia64_spinlock_ptr = (__u32 *) (x); \
#define _vmm_raw_spin_unlock(x) \
do { barrier(); \
- ((spinlock_t *)x)->raw_lock.lock = 0; } \
+ ((vmm_spinlock_t *)x)->lock = 0; } \
while (0)
#endif
-void vmm_spin_lock(spinlock_t *lock);
-void vmm_spin_unlock(spinlock_t *lock);
+void vmm_spin_lock(vmm_spinlock_t *lock);
+void vmm_spin_unlock(vmm_spinlock_t *lock);
enum {
I_TLB = 1,
D_TLB = 2
return ;
}
-void vmm_spin_lock(spinlock_t *lock)
+void vmm_spin_lock(vmm_spinlock_t *lock)
{
_vmm_raw_spin_lock(lock);
}
-void vmm_spin_unlock(spinlock_t *lock)
+void vmm_spin_unlock(vmm_spinlock_t *lock)
{
_vmm_raw_spin_unlock(lock);
}
{
u64 i, dirty_pages = 1;
u64 base_gfn = (pte&_PAGE_PPN_MASK) >> PAGE_SHIFT;
- spinlock_t *lock = __kvm_va(v->arch.dirty_log_lock_pa);
+ vmm_spinlock_t *lock = __kvm_va(v->arch.dirty_log_lock_pa);
void *dirty_bitmap = (void *)KVM_MEM_DIRTY_LOG_BASE;
dirty_pages <<= ps <= PAGE_SHIFT ? 0 : ps - PAGE_SHIFT;
list_for_each_entry(dev, &bus->devices, bus_list) {
struct dev_archdata *sd = &dev->dev.archdata;
+ /* Cardbus can call us to add new devices to a bus, so ignore
+ * those who are already fully discovered
+ */
+ if (dev->is_added)
+ continue;
+
/* Setup OF node pointer in archdata */
sd->of_node = pci_device_to_OF_node(dev);
}
EXPORT_SYMBOL(pcibios_fixup_bus);
+void __devinit pci_fixup_cardbus(struct pci_bus *bus)
+{
+ /* Now fixup devices on that bus */
+ pcibios_setup_bus_devices(bus);
+}
+
+
static int skip_isa_ioresource_align(struct pci_dev *dev)
{
if ((ppc_pci_flags & PPC_PCI_CAN_SKIP_ISA_ALIGN) &&
{
u64 rb = 0, rs = 0;
+ /*
+ * According to Book3 2.01 mtsrin is implemented as:
+ *
+ * The SLB entry specified by (RB)32:35 is loaded from register
+ * RS, as follows.
+ *
+ * SLBE Bit Source SLB Field
+ *
+ * 0:31 0x0000_0000 ESID-0:31
+ * 32:35 (RB)32:35 ESID-32:35
+ * 36 0b1 V
+ * 37:61 0x00_0000|| 0b0 VSID-0:24
+ * 62:88 (RS)37:63 VSID-25:51
+ * 89:91 (RS)33:35 Ks Kp N
+ * 92 (RS)36 L ((RS)36 must be 0b0)
+ * 93 0b0 C
+ */
+
+ dprintk("KVM MMU: mtsrin(0x%x, 0x%lx)\n", srnum, value);
+
/* ESID = srnum */
rb |= (srnum & 0xf) << 28;
/* Set the valid bit */
/* VSID = VSID */
rs |= (value & 0xfffffff) << 12;
/* flags = flags */
- rs |= ((value >> 27) & 0xf) << 9;
+ rs |= ((value >> 28) & 0x7) << 9;
kvmppc_mmu_book3s_64_slbmte(vcpu, rs, rb);
}
KBUILD_CFLAGS := -m$(BITS) -D__KERNEL__ $(LINUX_INCLUDE) -O2
KBUILD_CFLAGS += -fno-strict-aliasing -fPIC
KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING
+cflags-$(CONFIG_X86_32) := -march=i386
cflags-$(CONFIG_X86_64) := -mcmodel=small
KBUILD_CFLAGS += $(cflags-y)
KBUILD_CFLAGS += $(call cc-option,-ffreestanding)
__u8 reserved[31];
};
+/* When set in flags, include corresponding fields on KVM_SET_VCPU_EVENTS */
+#define KVM_VCPUEVENT_VALID_NMI_PENDING 0x00000001
+#define KVM_VCPUEVENT_VALID_SIPI_VECTOR 0x00000002
+
/* for KVM_GET/SET_VCPU_EVENTS */
struct kvm_vcpu_events {
struct {
* contiguous (although various IO spaces may punch holes in
* it)..
*
- * N - Number of bits in the node portion of a socket physical
- * address.
+ * N - Number of bits in the node portion of a socket physical
+ * address.
*
- * NASID - network ID of a router, Mbrick or Cbrick. Nasid values of
- * routers always have low bit of 1, C/MBricks have low bit
- * equal to 0. Most addressing macros that target UV hub chips
- * right shift the NASID by 1 to exclude the always-zero bit.
- * NASIDs contain up to 15 bits.
+ * NASID - network ID of a router, Mbrick or Cbrick. Nasid values of
+ * routers always have low bit of 1, C/MBricks have low bit
+ * equal to 0. Most addressing macros that target UV hub chips
+ * right shift the NASID by 1 to exclude the always-zero bit.
+ * NASIDs contain up to 15 bits.
*
* GNODE - NASID right shifted by 1 bit. Most mmrs contain gnodes instead
* of nasids.
*
- * PNODE - the low N bits of the GNODE. The PNODE is the most useful variant
- * of the nasid for socket usage.
+ * PNODE - the low N bits of the GNODE. The PNODE is the most useful variant
+ * of the nasid for socket usage.
*
*
* NumaLink Global Physical Address Format:
*
*
* APICID format
- * NOTE!!!!!! This is the current format of the APICID. However, code
- * should assume that this will change in the future. Use functions
- * in this file for all APICID bit manipulations and conversion.
+ * NOTE!!!!!! This is the current format of the APICID. However, code
+ * should assume that this will change in the future. Use functions
+ * in this file for all APICID bit manipulations and conversion.
*
- * 1111110000000000
- * 5432109876543210
+ * 1111110000000000
+ * 5432109876543210
* pppppppppplc0cch
* sssssssssss
*
* Note: Processor only supports 12 bits in the APICID register. The ACPI
* tables hold all 16 bits. Software needs to be aware of this.
*
- * Unless otherwise specified, all references to APICID refer to
- * the FULL value contained in ACPI tables, not the subset in the
- * processor APICID register.
+ * Unless otherwise specified, all references to APICID refer to
+ * the FULL value contained in ACPI tables, not the subset in the
+ * processor APICID register.
*/
};
DECLARE_PER_CPU(struct uv_hub_info_s, __uv_hub_info);
-#define uv_hub_info (&__get_cpu_var(__uv_hub_info))
+#define uv_hub_info (&__get_cpu_var(__uv_hub_info))
#define uv_cpu_hub_info(cpu) (&per_cpu(__uv_hub_info, cpu))
/*
* Local & Global MMR space macros.
- * Note: macros are intended to be used ONLY by inline functions
- * in this file - not by other kernel code.
- * n - NASID (full 15-bit global nasid)
- * g - GNODE (full 15-bit global nasid, right shifted 1)
- * p - PNODE (local part of nsids, right shifted 1)
+ * Note: macros are intended to be used ONLY by inline functions
+ * in this file - not by other kernel code.
+ * n - NASID (full 15-bit global nasid)
+ * g - GNODE (full 15-bit global nasid, right shifted 1)
+ * p - PNODE (local part of nsids, right shifted 1)
*/
#define UV_NASID_TO_PNODE(n) (((n) >> 1) & uv_hub_info->pnode_mask)
#define UV_PNODE_TO_GNODE(p) ((p) |uv_hub_info->gnode_extra)
/*
* Macros for converting between kernel virtual addresses, socket local physical
* addresses, and UV global physical addresses.
- * Note: use the standard __pa() & __va() macros for converting
- * between socket virtual and socket physical addresses.
+ * Note: use the standard __pa() & __va() macros for converting
+ * between socket virtual and socket physical addresses.
*/
/* socket phys RAM --> UV global physical address */
* Access global MMRs using the low memory MMR32 space. This region supports
* faster MMR access but not all MMRs are accessible in this space.
*/
-static inline unsigned long *uv_global_mmr32_address(int pnode,
- unsigned long offset)
+static inline unsigned long *uv_global_mmr32_address(int pnode, unsigned long offset)
{
return __va(UV_GLOBAL_MMR32_BASE |
UV_GLOBAL_MMR32_PNODE_BITS(pnode) | offset);
}
-static inline void uv_write_global_mmr32(int pnode, unsigned long offset,
- unsigned long val)
+static inline void uv_write_global_mmr32(int pnode, unsigned long offset, unsigned long val)
{
writeq(val, uv_global_mmr32_address(pnode, offset));
}
-static inline unsigned long uv_read_global_mmr32(int pnode,
- unsigned long offset)
+static inline unsigned long uv_read_global_mmr32(int pnode, unsigned long offset)
{
return readq(uv_global_mmr32_address(pnode, offset));
}
* Access Global MMR space using the MMR space located at the top of physical
* memory.
*/
-static inline unsigned long *uv_global_mmr64_address(int pnode,
- unsigned long offset)
+static inline unsigned long *uv_global_mmr64_address(int pnode, unsigned long offset)
{
return __va(UV_GLOBAL_MMR64_BASE |
UV_GLOBAL_MMR64_PNODE_BITS(pnode) | offset);
}
-static inline void uv_write_global_mmr64(int pnode, unsigned long offset,
- unsigned long val)
+static inline void uv_write_global_mmr64(int pnode, unsigned long offset, unsigned long val)
{
writeq(val, uv_global_mmr64_address(pnode, offset));
}
-static inline unsigned long uv_read_global_mmr64(int pnode,
- unsigned long offset)
+static inline unsigned long uv_read_global_mmr64(int pnode, unsigned long offset)
{
return readq(uv_global_mmr64_address(pnode, offset));
}
return UV_GLOBAL_GRU_MMR_BASE | offset | (pnode << uv_hub_info->m_val);
}
+static inline void uv_write_global_mmr8(int pnode, unsigned long offset, unsigned char val)
+{
+ writeb(val, uv_global_mmr64_address(pnode, offset));
+}
+
+static inline unsigned char uv_read_global_mmr8(int pnode, unsigned long offset)
+{
+ return readb(uv_global_mmr64_address(pnode, offset));
+}
+
/*
* Access hub local MMRs. Faster than using global space but only local MMRs
* are accessible.
}
}
+static inline unsigned long uv_scir_offset(int apicid)
+{
+ return SCIR_LOCAL_MMR_BASE | (apicid & 0x3f);
+}
+
static inline void uv_set_cpu_scir_bits(int cpu, unsigned char value)
{
if (uv_cpu_hub_info(cpu)->scir.state != value) {
+ uv_write_global_mmr8(uv_cpu_to_pnode(cpu),
+ uv_cpu_hub_info(cpu)->scir.offset, value);
uv_cpu_hub_info(cpu)->scir.state = value;
- uv_write_local_mmr8(uv_cpu_hub_info(cpu)->scir.offset, value);
}
}
#endif
if (strncmp(str, "old_ordering", 12) == 0)
acpi_old_suspend_ordering();
+ if (strncmp(str, "sci_force_enable", 16) == 0)
+ acpi_set_sci_en_on_resume();
str = strchr(str, ',');
if (str != NULL)
str += strspn(str, ", \t");
bool amd_iommu_np_cache __read_mostly;
/*
+ * Set to true if ACPI table parsing and hardware initialization went properly
+ */
+static bool amd_iommu_initialized;
+
+/*
* List of protection domains - used during resume
*/
LIST_HEAD(amd_iommu_pd_list);
}
WARN_ON(p != end);
+ amd_iommu_initialized = true;
+
return 0;
}
if (acpi_table_parse("IVRS", init_iommu_all) != 0)
goto free;
+ if (!amd_iommu_initialized)
+ goto free;
+
if (acpi_table_parse("IVRS", init_memory_definitions) != 0)
goto free;
uv_rtc_init();
for_each_present_cpu(cpu) {
+ int apicid = per_cpu(x86_cpu_to_apicid, cpu);
+
nid = cpu_to_node(cpu);
- pnode = uv_apicid_to_pnode(per_cpu(x86_cpu_to_apicid, cpu));
+ pnode = uv_apicid_to_pnode(apicid);
blade = boot_pnode_to_blade(pnode);
lcpu = uv_blade_info[blade].nr_possible_cpus;
uv_blade_info[blade].nr_possible_cpus++;
uv_cpu_hub_info(cpu)->gnode_extra = gnode_extra;
uv_cpu_hub_info(cpu)->global_mmr_base = mmr_base;
uv_cpu_hub_info(cpu)->coherency_domain_number = sn_coherency_id;
- uv_cpu_hub_info(cpu)->scir.offset = SCIR_LOCAL_MMR_BASE + lcpu;
+ uv_cpu_hub_info(cpu)->scir.offset = uv_scir_offset(apicid);
uv_node_to_blade[nid] = blade;
uv_cpu_to_blade[cpu] = blade;
max_pnode = max(pnode, max_pnode);
- printk(KERN_DEBUG "UV: cpu %d, apicid 0x%x, pnode %d, nid %d, "
- "lcpu %d, blade %d\n",
- cpu, per_cpu(x86_cpu_to_apicid, cpu), pnode, nid,
- lcpu, blade);
+ printk(KERN_DEBUG "UV: cpu %d, apicid 0x%x, pnode %d, nid %d, lcpu %d, blade %d\n",
+ cpu, apicid, pnode, nid, lcpu, blade);
}
/* Add blade/pnode info for nodes without cpus */
callchain_store(entry, PERF_CONTEXT_KERNEL);
callchain_store(entry, regs->ip);
- dump_trace(NULL, regs, NULL, 0, &backtrace_ops, entry);
+ dump_trace(NULL, regs, NULL, regs->bp, &backtrace_ops, entry);
}
/*
if (!product)
product = "";
- printk("\n");
- printk(KERN_INFO "Pid: %d, comm: %.20s %s %s %.*s %s/%s\n",
+ printk(KERN_CONT "\n");
+ printk(KERN_DEFAULT "Pid: %d, comm: %.20s %s %s %.*s %s/%s\n",
current->pid, current->comm, print_tainted(),
init_utsname()->release,
(int)strcspn(init_utsname()->version, " "),
show_regs_common();
- printk("EIP: %04x:[<%08lx>] EFLAGS: %08lx CPU: %d\n",
+ printk(KERN_DEFAULT "EIP: %04x:[<%08lx>] EFLAGS: %08lx CPU: %d\n",
(u16)regs->cs, regs->ip, regs->flags,
smp_processor_id());
print_symbol("EIP is at %s\n", regs->ip);
- printk("EAX: %08lx EBX: %08lx ECX: %08lx EDX: %08lx\n",
+ printk(KERN_DEFAULT "EAX: %08lx EBX: %08lx ECX: %08lx EDX: %08lx\n",
regs->ax, regs->bx, regs->cx, regs->dx);
- printk("ESI: %08lx EDI: %08lx EBP: %08lx ESP: %08lx\n",
+ printk(KERN_DEFAULT "ESI: %08lx EDI: %08lx EBP: %08lx ESP: %08lx\n",
regs->si, regs->di, regs->bp, sp);
- printk(" DS: %04x ES: %04x FS: %04x GS: %04x SS: %04x\n",
+ printk(KERN_DEFAULT " DS: %04x ES: %04x FS: %04x GS: %04x SS: %04x\n",
(u16)regs->ds, (u16)regs->es, (u16)regs->fs, gs, ss);
if (!all)
cr2 = read_cr2();
cr3 = read_cr3();
cr4 = read_cr4_safe();
- printk("CR0: %08lx CR2: %08lx CR3: %08lx CR4: %08lx\n",
+ printk(KERN_DEFAULT "CR0: %08lx CR2: %08lx CR3: %08lx CR4: %08lx\n",
cr0, cr2, cr3, cr4);
get_debugreg(d0, 0);
get_debugreg(d1, 1);
get_debugreg(d2, 2);
get_debugreg(d3, 3);
- printk("DR0: %08lx DR1: %08lx DR2: %08lx DR3: %08lx\n",
+ printk(KERN_DEFAULT "DR0: %08lx DR1: %08lx DR2: %08lx DR3: %08lx\n",
d0, d1, d2, d3);
get_debugreg(d6, 6);
get_debugreg(d7, 7);
- printk("DR6: %08lx DR7: %08lx\n",
+ printk(KERN_DEFAULT "DR6: %08lx DR7: %08lx\n",
d6, d7);
}
unsigned int ds, cs, es;
show_regs_common();
- printk(KERN_INFO "RIP: %04lx:[<%016lx>] ", regs->cs & 0xffff, regs->ip);
+ printk(KERN_DEFAULT "RIP: %04lx:[<%016lx>] ", regs->cs & 0xffff, regs->ip);
printk_address(regs->ip, 1);
- printk(KERN_INFO "RSP: %04lx:%016lx EFLAGS: %08lx\n", regs->ss,
+ printk(KERN_DEFAULT "RSP: %04lx:%016lx EFLAGS: %08lx\n", regs->ss,
regs->sp, regs->flags);
- printk(KERN_INFO "RAX: %016lx RBX: %016lx RCX: %016lx\n",
+ printk(KERN_DEFAULT "RAX: %016lx RBX: %016lx RCX: %016lx\n",
regs->ax, regs->bx, regs->cx);
- printk(KERN_INFO "RDX: %016lx RSI: %016lx RDI: %016lx\n",
+ printk(KERN_DEFAULT "RDX: %016lx RSI: %016lx RDI: %016lx\n",
regs->dx, regs->si, regs->di);
- printk(KERN_INFO "RBP: %016lx R08: %016lx R09: %016lx\n",
+ printk(KERN_DEFAULT "RBP: %016lx R08: %016lx R09: %016lx\n",
regs->bp, regs->r8, regs->r9);
- printk(KERN_INFO "R10: %016lx R11: %016lx R12: %016lx\n",
+ printk(KERN_DEFAULT "R10: %016lx R11: %016lx R12: %016lx\n",
regs->r10, regs->r11, regs->r12);
- printk(KERN_INFO "R13: %016lx R14: %016lx R15: %016lx\n",
+ printk(KERN_DEFAULT "R13: %016lx R14: %016lx R15: %016lx\n",
regs->r13, regs->r14, regs->r15);
asm("movl %%ds,%0" : "=r" (ds));
cr3 = read_cr3();
cr4 = read_cr4();
- printk(KERN_INFO "FS: %016lx(%04x) GS:%016lx(%04x) knlGS:%016lx\n",
+ printk(KERN_DEFAULT "FS: %016lx(%04x) GS:%016lx(%04x) knlGS:%016lx\n",
fs, fsindex, gs, gsindex, shadowgs);
- printk(KERN_INFO "CS: %04x DS: %04x ES: %04x CR0: %016lx\n", cs, ds,
+ printk(KERN_DEFAULT "CS: %04x DS: %04x ES: %04x CR0: %016lx\n", cs, ds,
es, cr0);
- printk(KERN_INFO "CR2: %016lx CR3: %016lx CR4: %016lx\n", cr2, cr3,
+ printk(KERN_DEFAULT "CR2: %016lx CR3: %016lx CR4: %016lx\n", cr2, cr3,
cr4);
get_debugreg(d0, 0);
get_debugreg(d1, 1);
get_debugreg(d2, 2);
- printk(KERN_INFO "DR0: %016lx DR1: %016lx DR2: %016lx\n", d0, d1, d2);
+ printk(KERN_DEFAULT "DR0: %016lx DR1: %016lx DR2: %016lx\n", d0, d1, d2);
get_debugreg(d3, 3);
get_debugreg(d6, 6);
get_debugreg(d7, 7);
- printk(KERN_INFO "DR3: %016lx DR6: %016lx DR7: %016lx\n", d3, d6, d7);
+ printk(KERN_DEFAULT "DR3: %016lx DR6: %016lx DR7: %016lx\n", d3, d6, d7);
}
void show_regs(struct pt_regs *regs)
hrtimer_cancel(&apic->lapic_timer.timer);
update_divide_count(apic);
start_apic_timer(apic);
+ apic->irr_pending = true;
}
void __kvm_migrate_apic_timer(struct kvm_vcpu *vcpu)
static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
{
struct kvm_shadow_walk_iterator iterator;
- pt_element_t gpte;
- gpa_t pte_gpa = -1;
int level;
u64 *sptep;
int need_flush = 0;
if (level == PT_PAGE_TABLE_LEVEL ||
((level == PT_DIRECTORY_LEVEL && is_large_pte(*sptep))) ||
((level == PT_PDPE_LEVEL && is_large_pte(*sptep)))) {
- struct kvm_mmu_page *sp = page_header(__pa(sptep));
-
- pte_gpa = (sp->gfn << PAGE_SHIFT);
- pte_gpa += (sptep - sp->spt) * sizeof(pt_element_t);
if (is_shadow_present_pte(*sptep)) {
rmap_remove(vcpu->kvm, sptep);
if (need_flush)
kvm_flush_remote_tlbs(vcpu->kvm);
spin_unlock(&vcpu->kvm->mmu_lock);
-
- if (pte_gpa == -1)
- return;
- if (kvm_read_guest_atomic(vcpu->kvm, pte_gpa, &gpte,
- sizeof(pt_element_t)))
- return;
- if (is_present_gpte(gpte) && (gpte & PT_ACCESSED_MASK)) {
- if (mmu_topup_memory_caches(vcpu))
- return;
- kvm_mmu_pte_write(vcpu, pte_gpa, (const u8 *)&gpte,
- sizeof(pt_element_t), 0);
- }
}
static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, gva_t vaddr)
events->sipi_vector = vcpu->arch.sipi_vector;
- events->flags = 0;
+ events->flags = (KVM_VCPUEVENT_VALID_NMI_PENDING
+ | KVM_VCPUEVENT_VALID_SIPI_VECTOR);
vcpu_put(vcpu);
}
static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
struct kvm_vcpu_events *events)
{
- if (events->flags)
+ if (events->flags & ~(KVM_VCPUEVENT_VALID_NMI_PENDING
+ | KVM_VCPUEVENT_VALID_SIPI_VECTOR))
return -EINVAL;
vcpu_load(vcpu);
kvm_pic_clear_isr_ack(vcpu->kvm);
vcpu->arch.nmi_injected = events->nmi.injected;
- vcpu->arch.nmi_pending = events->nmi.pending;
+ if (events->flags & KVM_VCPUEVENT_VALID_NMI_PENDING)
+ vcpu->arch.nmi_pending = events->nmi.pending;
kvm_x86_ops->set_nmi_mask(vcpu, events->nmi.masked);
- vcpu->arch.sipi_vector = events->sipi_vector;
+ if (events->flags & KVM_VCPUEVENT_VALID_SIPI_VECTOR)
+ vcpu->arch.sipi_vector = events->sipi_vector;
vcpu_put(vcpu);
switch (e->type) {
case KMEMCHECK_ERROR_INVALID_ACCESS:
- printk(KERN_ERR "WARNING: kmemcheck: Caught %d-bit read "
- "from %s memory (%p)\n",
+ printk(KERN_WARNING "WARNING: kmemcheck: Caught %d-bit read from %s memory (%p)\n",
8 * e->size, e->state < ARRAY_SIZE(desc) ?
desc[e->state] : "(invalid shadow state)",
(void *) e->address);
- printk(KERN_INFO);
+ printk(KERN_WARNING);
for (i = 0; i < SHADOW_COPY_SIZE; ++i)
- printk("%02x", e->memory_copy[i]);
- printk("\n");
+ printk(KERN_CONT "%02x", e->memory_copy[i]);
+ printk(KERN_CONT "\n");
- printk(KERN_INFO);
+ printk(KERN_WARNING);
for (i = 0; i < SHADOW_COPY_SIZE; ++i) {
if (e->shadow_copy[i] < ARRAY_SIZE(short_desc))
- printk(" %c", short_desc[e->shadow_copy[i]]);
+ printk(KERN_CONT " %c", short_desc[e->shadow_copy[i]]);
else
- printk(" ?");
+ printk(KERN_CONT " ?");
}
- printk("\n");
- printk(KERN_INFO "%*c\n", 2 + 2
+ printk(KERN_CONT "\n");
+ printk(KERN_WARNING "%*c\n", 2 + 2
* (int) (e->address & (SHADOW_COPY_SIZE - 1)), '^');
break;
case KMEMCHECK_ERROR_BUG:
}
}
-void __init update_res(struct pci_root_info *info, size_t start,
+void __devinit update_res(struct pci_root_info *info, size_t start,
size_t end, unsigned long flags, int merge)
{
int i;
od_sver = 19;
}
-/^GNU/ {
- split($3, ver, ".");
+/^GNU objdump/ {
+ verstr = ""
+ for (i = 3; i <= NF; i++)
+ if (match($(i), "^[0-9]")) {
+ verstr = $(i);
+ break;
+ }
+ if (verstr == "") {
+ printf("Warning: Failed to find objdump version number.\n");
+ exit 0;
+ }
+ split(verstr, ver, ".");
if (ver[1] > od_ver ||
(ver[1] == od_ver && ver[2] >= od_sver)) {
exit 1;
} else {
printf("Warning: objdump version %s is older than %d.%d\n",
- $4, od_ver, od_sver);
+ verstr, od_ver, od_sver);
print("Warning: Skipping posttest.");
# Logic is inverted, because we just skip test without error.
exit 0;
* our current implementations need. If we'll ever need
* more the interface will need revisiting.
*/
- page = alloc_page(GFP_KERNEL | __GFP_ZERO);
+ page = alloc_page(gfp_mask | __GFP_ZERO);
if (!page)
goto out_free_bio;
if (bio_add_pc_page(q, bio, page, sector_size, 0) < sector_size)
/**
* blk_stack_limits - adjust queue_limits for stacked devices
- * @t: the stacking driver limits (top)
- * @b: the underlying queue limits (bottom)
+ * @t: the stacking driver limits (top device)
+ * @b: the underlying queue limits (bottom, component device)
* @offset: offset to beginning of data within component device
*
* Description:
- * Merges two queue_limit structs. Returns 0 if alignment didn't
- * change. Returns -1 if adding the bottom device caused
- * misalignment.
+ * This function is used by stacking drivers like MD and DM to ensure
+ * that all component devices have compatible block sizes and
+ * alignments. The stacking driver must provide a queue_limits
+ * struct (top) and then iteratively call the stacking function for
+ * all component (bottom) devices. The stacking function will
+ * attempt to combine the values and ensure proper alignment.
+ *
+ * Returns 0 if the top and bottom queue_limits are compatible. The
+ * top device's block sizes and alignment offsets may be adjusted to
+ * ensure alignment with the bottom device. If no compatible sizes
+ * and alignments exist, -1 is returned and the resulting top
+ * queue_limits will have the misaligned flag set to indicate that
+ * the alignment_offset is undefined.
*/
int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
sector_t offset)
{
- int ret;
-
- ret = 0;
+ sector_t alignment;
+ unsigned int top, bottom;
t->max_sectors = min_not_zero(t->max_sectors, b->max_sectors);
t->max_hw_sectors = min_not_zero(t->max_hw_sectors, b->max_hw_sectors);
t->max_segment_size = min_not_zero(t->max_segment_size,
b->max_segment_size);
+ alignment = queue_limit_alignment_offset(b, offset);
+
+ /* Bottom device has different alignment. Check that it is
+ * compatible with the current top alignment.
+ */
+ if (t->alignment_offset != alignment) {
+
+ top = max(t->physical_block_size, t->io_min)
+ + t->alignment_offset;
+ bottom = max(b->physical_block_size, b->io_min) + alignment;
+
+ /* Verify that top and bottom intervals line up */
+ if (max(top, bottom) & (min(top, bottom) - 1))
+ t->misaligned = 1;
+ }
+
t->logical_block_size = max(t->logical_block_size,
b->logical_block_size);
b->physical_block_size);
t->io_min = max(t->io_min, b->io_min);
+ t->io_opt = lcm(t->io_opt, b->io_opt);
+
t->no_cluster |= b->no_cluster;
t->discard_zeroes_data &= b->discard_zeroes_data;
- /* Bottom device offset aligned? */
- if (offset &&
- (offset & (b->physical_block_size - 1)) != b->alignment_offset) {
+ /* Physical block size a multiple of the logical block size? */
+ if (t->physical_block_size & (t->logical_block_size - 1)) {
+ t->physical_block_size = t->logical_block_size;
t->misaligned = 1;
- ret = -1;
}
- /*
- * Temporarily disable discard granularity. It's currently buggy
- * since we default to 0 for discard_granularity, hence this
- * "failure" will always trigger for non-zero offsets.
- */
-#if 0
- if (offset &&
- (offset & (b->discard_granularity - 1)) != b->discard_alignment) {
- t->discard_misaligned = 1;
- ret = -1;
+ /* Minimum I/O a multiple of the physical block size? */
+ if (t->io_min & (t->physical_block_size - 1)) {
+ t->io_min = t->physical_block_size;
+ t->misaligned = 1;
}
-#endif
-
- /* If top has no alignment offset, inherit from bottom */
- if (!t->alignment_offset)
- t->alignment_offset =
- b->alignment_offset & (b->physical_block_size - 1);
- if (!t->discard_alignment)
- t->discard_alignment =
- b->discard_alignment & (b->discard_granularity - 1);
-
- /* Top device aligned on logical block boundary? */
- if (t->alignment_offset & (t->logical_block_size - 1)) {
+ /* Optimal I/O a multiple of the physical block size? */
+ if (t->io_opt & (t->physical_block_size - 1)) {
+ t->io_opt = 0;
t->misaligned = 1;
- ret = -1;
}
- /* Find lcm() of optimal I/O size and granularity */
- t->io_opt = lcm(t->io_opt, b->io_opt);
- t->discard_granularity = lcm(t->discard_granularity,
- b->discard_granularity);
+ /* Find lowest common alignment_offset */
+ t->alignment_offset = lcm(t->alignment_offset, alignment)
+ & (max(t->physical_block_size, t->io_min) - 1);
- /* Verify that optimal I/O size is a multiple of io_min */
- if (t->io_min && t->io_opt % t->io_min)
- ret = -1;
+ /* Verify that new alignment_offset is on a logical block boundary */
+ if (t->alignment_offset & (t->logical_block_size - 1))
+ t->misaligned = 1;
+
+ /* Discard alignment and granularity */
+ if (b->discard_granularity) {
+ unsigned int granularity = b->discard_granularity;
+ offset &= granularity - 1;
+
+ alignment = (granularity + b->discard_alignment - offset)
+ & (granularity - 1);
+
+ if (t->discard_granularity != 0 &&
+ t->discard_alignment != alignment) {
+ top = t->discard_granularity + t->discard_alignment;
+ bottom = b->discard_granularity + alignment;
+
+ /* Verify that top and bottom intervals line up */
+ if (max(top, bottom) & (min(top, bottom) - 1))
+ t->discard_misaligned = 1;
+ }
+
+ t->max_discard_sectors = min_not_zero(t->max_discard_sectors,
+ b->max_discard_sectors);
+ t->discard_granularity = max(t->discard_granularity,
+ b->discard_granularity);
+ t->discard_alignment = lcm(t->discard_alignment, alignment) &
+ (t->discard_granularity - 1);
+ }
- return ret;
+ return t->misaligned ? -1 : 0;
}
EXPORT_SYMBOL(blk_stack_limits);
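
To make the stacking protocol in the kernel-doc above concrete, here is a minimal, hedged sketch of how a stacking driver could fold its components into a single queue_limits. The struct and function names are invented for illustration, the offset is passed through exactly as described for @offset above, and the top limits are assumed to be pre-initialized to defaults. As a small worked example of the interval check: a top interval of 4096 (physical block 4096, alignment 0) against a bottom interval of 4608 (physical block 4096, alignment 512) gives max(4096, 4608) & (min - 1) = 4608 & 4095 = 512, so the misaligned flag is set; a 512-byte top against a 4096-byte bottom gives 4096 & 511 = 0 and the devices stack cleanly.

#include <linux/blkdev.h>

/*
 * Illustrative sketch only: a made-up stacking driver folding its component
 * devices into one set of top-level limits via blk_stack_limits() above.
 * "struct example_component" and example_stack_limits() are invented names.
 */
struct example_component {
	struct block_device *bdev;
	sector_t data_offset;	/* offset to the data area on this component */
};

static int example_stack_limits(struct queue_limits *top,
				struct example_component *comp, int ncomp)
{
	int i, ret = 0;

	for (i = 0; i < ncomp; i++) {
		struct request_queue *q = bdev_get_queue(comp[i].bdev);

		/*
		 * A -1 return is advisory: the top limits remain usable,
		 * they merely carry the misaligned flag and an undefined
		 * alignment_offset.
		 */
		if (blk_stack_limits(top, &q->limits, comp[i].data_offset) < 0)
			ret = -1;
	}

	return ret;
}

This mirrors the iterative use by stacking drivers such as MD and DM that the kernel-doc describes; stacking continues even when one component cannot be aligned.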
/* Root service tree for cfq_groups */
struct cfq_rb_root grp_service_tree;
struct cfq_group root_group;
- /* Number of active cfq groups on group service tree */
- int nr_groups;
/*
* The priority currently being served
static struct cfq_rb_root *service_tree_for(struct cfq_group *cfqg,
enum wl_prio_t prio,
- enum wl_type_t type,
- struct cfq_data *cfqd)
+ enum wl_type_t type)
{
if (!cfqg)
return NULL;
__cfq_group_service_tree_add(st, cfqg);
cfqg->on_st = true;
- cfqd->nr_groups++;
st->total_weight += cfqg->weight;
}
cfq_log_cfqg(cfqd, cfqg, "del_from_rr group");
cfqg->on_st = false;
- cfqd->nr_groups--;
st->total_weight -= cfqg->weight;
if (!RB_EMPTY_NODE(&cfqg->rb_node))
cfq_rb_erase(&cfqg->rb_node, st);
#endif
service_tree = service_tree_for(cfqq->cfqg, cfqq_prio(cfqq),
- cfqq_type(cfqq), cfqd);
+ cfqq_type(cfqq));
if (cfq_class_idle(cfqq)) {
rb_key = CFQ_IDLE_DELAY;
parent = rb_last(&service_tree->rb);
struct cfq_io_context *cic;
struct cfq_queue *cfqq;
- /* Deny merge if bio and rq don't belong to same cfq group */
- if ((RQ_CFQQ(rq))->cfqg != cfq_get_cfqg(cfqd, 0))
- return false;
/*
* Disallow merge of a sync bio into an async request.
*/
{
struct cfq_rb_root *service_tree =
service_tree_for(cfqd->serving_group, cfqd->serving_prio,
- cfqd->serving_type, cfqd);
+ cfqd->serving_type);
if (!cfqd->rq_queued)
return NULL;
#define CFQQ_SEEKY(cfqq) ((cfqq)->seek_mean > CFQQ_SEEK_THR)
static inline int cfq_rq_close(struct cfq_data *cfqd, struct cfq_queue *cfqq,
- struct request *rq)
+ struct request *rq, bool for_preempt)
{
sector_t sdist = cfqq->seek_mean;
if (!sample_valid(cfqq->seek_samples))
sdist = CFQQ_SEEK_THR;
+ /* if seek_mean is big, using it as a closeness criterion is meaningless */
+ if (sdist > CFQQ_SEEK_THR && !for_preempt)
+ sdist = CFQQ_SEEK_THR;
+
return cfq_dist_from_last(cfqd, rq) <= sdist;
}
* will contain the closest sector.
*/
__cfqq = rb_entry(parent, struct cfq_queue, p_node);
- if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq))
+ if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq, false))
return __cfqq;
if (blk_rq_pos(__cfqq->next_rq) < sector)
return NULL;
__cfqq = rb_entry(node, struct cfq_queue, p_node);
- if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq))
+ if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq, false))
return __cfqq;
return NULL;
}
static enum wl_type_t cfq_choose_wl(struct cfq_data *cfqd,
- struct cfq_group *cfqg, enum wl_prio_t prio,
- bool prio_changed)
+ struct cfq_group *cfqg, enum wl_prio_t prio)
{
struct cfq_queue *queue;
int i;
unsigned long lowest_key = 0;
enum wl_type_t cur_best = SYNC_NOIDLE_WORKLOAD;
- if (prio_changed) {
- /*
- * When priorities switched, we prefer starting
- * from SYNC_NOIDLE (first choice), or just SYNC
- * over ASYNC
- */
- if (service_tree_for(cfqg, prio, cur_best, cfqd)->count)
- return cur_best;
- cur_best = SYNC_WORKLOAD;
- if (service_tree_for(cfqg, prio, cur_best, cfqd)->count)
- return cur_best;
-
- return ASYNC_WORKLOAD;
- }
-
- for (i = 0; i < 3; ++i) {
- /* otherwise, select the one with lowest rb_key */
- queue = cfq_rb_first(service_tree_for(cfqg, prio, i, cfqd));
+ for (i = 0; i <= SYNC_WORKLOAD; ++i) {
+ /* select the one with lowest rb_key */
+ queue = cfq_rb_first(service_tree_for(cfqg, prio, i));
if (queue &&
(!key_valid || time_before(queue->rb_key, lowest_key))) {
lowest_key = queue->rb_key;
static void choose_service_tree(struct cfq_data *cfqd, struct cfq_group *cfqg)
{
- enum wl_prio_t previous_prio = cfqd->serving_prio;
- bool prio_changed;
unsigned slice;
unsigned count;
struct cfq_rb_root *st;
* (SYNC, SYNC_NOIDLE, ASYNC), and to compute a workload
* expiration time
*/
- prio_changed = (cfqd->serving_prio != previous_prio);
- st = service_tree_for(cfqg, cfqd->serving_prio, cfqd->serving_type,
- cfqd);
+ st = service_tree_for(cfqg, cfqd->serving_prio, cfqd->serving_type);
count = st->count;
/*
- * If priority didn't change, check workload expiration,
- * and that we still have other queues ready
+ * check workload expiration, and that we still have other queues ready
*/
- if (!prio_changed && count &&
- !time_after(jiffies, cfqd->workload_expires))
+ if (count && !time_after(jiffies, cfqd->workload_expires))
return;
/* otherwise select new workload type */
cfqd->serving_type =
- cfq_choose_wl(cfqd, cfqg, cfqd->serving_prio, prio_changed);
- st = service_tree_for(cfqg, cfqd->serving_prio, cfqd->serving_type,
- cfqd);
+ cfq_choose_wl(cfqd, cfqg, cfqd->serving_prio);
+ st = service_tree_for(cfqg, cfqd->serving_prio, cfqd->serving_type);
count = st->count;
/*
* if this request is as-good as one we would expect from the
* current cfqq, let it preempt
*/
- if (cfq_rq_close(cfqd, cfqq, rq))
+ if (cfq_rq_close(cfqd, cfqq, rq, true))
return true;
return false;
err += test(4, &tests);
if (NDISKS > 5)
err += test(5, &tests);
+ /* the 11 and 12 disk cases are special for ioatdma (p-disabled
+ * q-continuation without extended descriptor)
+ */
+ if (NDISKS > 12) {
+ err += test(11, &tests);
+ err += test(12, &tests);
+ }
err += test(NDISKS, &tests);
pr("\n");
source "drivers/message/fusion/Kconfig"
-source "drivers/ieee1394/Kconfig"
+source "drivers/firewire/Kconfig"
source "drivers/message/i2o/Kconfig"
#ifdef CONFIG_ACPI_SLEEP
static u32 acpi_target_sleep_state = ACPI_STATE_S0;
/*
+ * According to the ACPI specification the BIOS should make sure that ACPI is
+ * enabled and SCI_EN bit is set on wake-up from S1 - S3 sleep states. Still,
+ * some BIOSes don't do that and therefore we use acpi_enable() to enable ACPI
+ * on such systems during resume. Unfortunately that doesn't help in
+ * particularly pathological cases in which SCI_EN has to be set directly on
+ * resume, although the specification states very clearly that this flag is
+ * owned by the hardware. The set_sci_en_on_resume variable will be set in such
+ * cases.
+ */
+static bool set_sci_en_on_resume;
+
+void __init acpi_set_sci_en_on_resume(void)
+{
+ set_sci_en_on_resume = true;
+}
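
As a purely illustrative sketch (not code from this patch), a resume path can branch on this flag roughly as follows; the function name is invented and the direct SCI_EN write is left as a placeholder comment because the exact ACPICA helper is not shown here.

/* Hypothetical illustration only -- not the actual sleep.c resume code. */
static void resume_enable_acpi_sketch(void)
{
	if (set_sci_en_on_resume) {
		/* pathological firmware: write SCI_EN directly through the
		 * ACPICA register interface (helper intentionally omitted) */
	} else {
		/* normal case described in the comment above */
		acpi_enable();
	}
}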
+
+/*
* ACPI 1.0 wants us to execute _PTS before suspending devices, so we allow the
* user to request that behavior by using the 'acpi_old_suspend_ordering'
* kernel command line option that causes the following variable to be set.
#endif /* CONFIG_ACPI_SLEEP */
#ifdef CONFIG_SUSPEND
-/*
- * According to the ACPI specification the BIOS should make sure that ACPI is
- * enabled and SCI_EN bit is set on wake-up from S1 - S3 sleep states. Still,
- * some BIOSes don't do that and therefore we use acpi_enable() to enable ACPI
- * on such systems during resume. Unfortunately that doesn't help in
- * particularly pathological cases in which SCI_EN has to be set directly on
- * resume, although the specification states very clearly that this flag is
- * owned by the hardware. The set_sci_en_on_resume variable will be set in such
- * cases.
- */
-static bool set_sci_en_on_resume;
-
extern void do_suspend_lowlevel(void);
static u32 acpi_suspend_states[] = {
sprintf(name, "acpi_video%d", count++);
device->backlight = backlight_device_register(name,
NULL, device, &acpi_backlight_ops);
- device->backlight->props.max_brightness = device->brightness->count-3;
kfree(name);
+ if (IS_ERR(device->backlight))
+ return;
+ device->backlight->props.max_brightness = device->brightness->count-3;
result = sysfs_create_link(&device->backlight->dev.kobj,
&device->dev->dev.kobj, "device");
unsigned long long level_current, level_next;
int result = -EINVAL;
+ /* no warning message if acpi_backlight=vendor is used */
+ if (!acpi_video_backlight_support())
+ return 0;
+
if (!device->brightness)
goto out;
static struct DAC960_privdata DAC960_LP_privdata = {
.HardwareType = DAC960_LP_Controller,
- .FirmwareType = DAC960_LP_Controller,
+ .FirmwareType = DAC960_V2_Controller,
.InterruptHandler = DAC960_LP_InterruptHandler,
.MemoryWindowSize = DAC960_LP_RegisterWindowSize,
};
part_stat_unlock();
}
-/*
- * Ensure we don't create aliases in VI caches
- */
-static inline void
-killalias(struct bio *bio)
-{
- struct bio_vec *bv;
- int i;
-
- if (bio_data_dir(bio) == READ)
- __bio_for_each_segment(bv, bio, i, 0) {
- flush_dcache_page(bv->bv_page);
- }
-}
-
void
aoecmd_ata_rsp(struct sk_buff *skb)
{
if (buf->flags & BUFFL_FAIL)
bio_endio(buf->bio, -EIO);
else {
- killalias(buf->bio);
+ bio_flush_dcache_pages(buf->bio);
bio_endio(buf->bio, 0);
}
mempool_free(buf, d->bufpool);
/* drbd_proc.c */
extern struct proc_dir_entry *drbd_proc;
-extern struct file_operations drbd_proc_fops;
+extern const struct file_operations drbd_proc_fops;
extern const char *drbd_conn_str(enum drbd_conns s);
extern const char *drbd_role_str(enum drbd_role s);
*/
#include <linux/module.h>
-#include <linux/version.h>
#include <linux/drbd.h>
#include <asm/uaccess.h>
#include <asm/types.h>
DEFINE_RATELIMIT_STATE(drbd_ratelimit_state, 5 * HZ, 5);
-static struct block_device_operations drbd_ops = {
+static const struct block_device_operations drbd_ops = {
.owner = THIS_MODULE,
.open = drbd_open,
.release = drbd_release,
{
long refresh;
- if (--rsp->count < 0) {
+ if (!rsp->count--) {
get_random_bytes(&refresh, sizeof(refresh));
rsp->state += refresh;
rsp->count = FAULT_RANDOM_REFRESH;
struct proc_dir_entry *drbd_proc;
-struct file_operations drbd_proc_fops = {
+const struct file_operations drbd_proc_fops = {
.owner = THIS_MODULE,
.open = drbd_proc_open,
.read = seq_read,
#include <asm/uaccess.h>
#include <net/sock.h>
-#include <linux/version.h>
#include <linux/drbd.h>
#include <linux/fs.h>
#include <linux/file.h>
*/
#include <linux/module.h>
-#include <linux/version.h>
#include <linux/drbd.h>
#include <linux/sched.h>
#include <linux/smp_lock.h>
#include <linux/mm_inline.h>
#include <linux/slab.h>
#include <linux/random.h>
-#include <linux/mm.h>
#include <linux/string.h>
#include <linux/scatterlist.h>
err = -EINVAL;
goto probe_err_2;
}
- host->dev_base = ioremap(rsc->start , rsc->end + 1);
+ host->dev_base = ioremap(rsc->start, resource_size(rsc));
if (!host->dev_base) {
printk(KERN_ERR "%s:%d ioremap fail\n",
__func__, __LINE__);
int __init agp_amd64_init(void)
{
int err = 0;
+ static int done = 0;
if (agp_off)
return -EINVAL;
+
+ if (done++)
+ return agp_bridges_found ? 0 : -ENODEV;
+
err = pci_register_driver(&agp_amd64_pci_driver);
if (err < 0)
return err;
pci_unregister_driver(&agp_amd64_pci_driver);
}
-/* On AMD64 the PCI driver needs to initialize this driver early
- for the IOMMU, so it has to be called via a backdoor. */
-#ifndef CONFIG_GART_IOMMU
module_init(agp_amd64_init);
module_exit(agp_amd64_cleanup);
-#endif
MODULE_AUTHOR("Dave Jones <davej@redhat.com>, Andi Kleen");
module_param(agp_try_unsupported, bool, 0);
goto out;
}
}
-out_unlock:
- mutex_unlock(&rng_mutex);
out:
return ret ? : err;
+out_unlock:
+ mutex_unlock(&rng_mutex);
+ goto out;
}
#ifdef CONFIG_ACPI
spmi_find_bmc();
#endif
-#ifdef CONFIG_PNP
+#ifdef CONFIG_ACPI
pnp_register_driver(&ipmi_pnp_driver);
#endif
#ifdef CONFIG_PCI
pci_unregister_driver(&ipmi_pci_driver);
#endif
-#ifdef CONFIG_PNP
+#ifdef CONFIG_ACPI
pnp_unregister_driver(&ipmi_pnp_driver);
#endif
dev_vdbg(chan2dev(chan), "is_tx_complete: %d (d%d, u%d)\n",
cookie, done ? *done : 0, used ? *used : 0);
- spin_lock_bh(atchan->lock);
+ spin_lock_bh(&atchan->lock);
last_complete = atchan->completed_cookie;
last_used = chan->cookie;
ret = dma_async_is_complete(cookie, last_complete, last_used);
}
- spin_unlock_bh(atchan->lock);
+ spin_unlock_bh(&atchan->lock);
if (done)
*done = last_complete;
dma_async_device_unregister(&base->dma_slave);
coh901318_pool_destroy(&base->pool);
free_irq(platform_get_irq(pdev, 0), base);
- kfree(base);
iounmap(base->virtbase);
+ kfree(base);
release_mem_region(pdev->resource->start,
resource_size(pdev->resource));
return 0;
goto err_kfree;
}
- memset(dw, 0, sizeof *dw);
-
dw->regs = ioremap(io->start, DW_REGLEN);
if (!dw->regs) {
err = -ENOMEM;
dma->dev = &pdev->dev;
if (!dma->chancnt) {
- dev_err(dev, "zero channels detected\n");
+ dev_err(dev, "channel enumeration error\n");
goto err_setup_interrupts;
}
* @dca: direct cache access context
* @intr_quirk: interrupt setup quirk (for ioat_v1 devices)
* @enumerate_channels: hw version specific channel enumeration
+ * @reset_hw: hw version specific channel (re)initialization
* @cleanup_tasklet: select between the v2 and v3 cleanup routines
* @timer_fn: select between the v2 and v3 timer watchdog routines
* @self_test: hardware version specific self test for each supported op type
struct dca_provider *dca;
void (*intr_quirk)(struct ioatdma_device *device);
int (*enumerate_channels)(struct ioatdma_device *device);
+ int (*reset_hw)(struct ioat_chan_common *chan);
void (*cleanup_tasklet)(unsigned long data);
void (*timer_fn)(unsigned long data);
int (*self_test)(struct ioatdma_device *device);
writeb(IOAT_CHANCMD_SUSPEND, chan->reg_base + IOAT_CHANCMD_OFFSET(ver));
}
+static inline void ioat_reset(struct ioat_chan_common *chan)
+{
+ u8 ver = chan->device->version;
+
+ writeb(IOAT_CHANCMD_RESET, chan->reg_base + IOAT_CHANCMD_OFFSET(ver));
+}
+
+static inline bool ioat_reset_pending(struct ioat_chan_common *chan)
+{
+ u8 ver = chan->device->version;
+ u8 cmd;
+
+ cmd = readb(chan->reg_base + IOAT_CHANCMD_OFFSET(ver));
+ return (cmd & IOAT_CHANCMD_RESET) == IOAT_CHANCMD_RESET;
+}
+
static inline void ioat_set_chainaddr(struct ioat_dma_chan *ioat, u64 addr)
{
struct ioat_chan_common *chan = &ioat->base;
__ioat2_start_null_desc(ioat);
}
-static void ioat2_restart_channel(struct ioat2_dma_chan *ioat)
+int ioat2_quiesce(struct ioat_chan_common *chan, unsigned long tmo)
{
- struct ioat_chan_common *chan = &ioat->base;
- unsigned long phys_complete;
+ unsigned long end = jiffies + tmo;
+ int err = 0;
u32 status;
status = ioat_chansts(chan);
if (is_ioat_active(status) || is_ioat_idle(status))
ioat_suspend(chan);
while (is_ioat_active(status) || is_ioat_idle(status)) {
+ if (end && time_after(jiffies, end)) {
+ err = -ETIMEDOUT;
+ break;
+ }
status = ioat_chansts(chan);
cpu_relax();
}
+ return err;
+}
+
+int ioat2_reset_sync(struct ioat_chan_common *chan, unsigned long tmo)
+{
+ unsigned long end = jiffies + tmo;
+ int err = 0;
+
+ ioat_reset(chan);
+ while (ioat_reset_pending(chan)) {
+ if (end && time_after(jiffies, end)) {
+ err = -ETIMEDOUT;
+ break;
+ }
+ cpu_relax();
+ }
+
+ return err;
+}
+
+static void ioat2_restart_channel(struct ioat2_dma_chan *ioat)
+{
+ struct ioat_chan_common *chan = &ioat->base;
+ unsigned long phys_complete;
+
+ ioat2_quiesce(chan, 0);
if (ioat_cleanup_preamble(chan, &phys_complete))
__cleanup(ioat, phys_complete);
spin_unlock_bh(&chan->cleanup_lock);
}
+static int ioat2_reset_hw(struct ioat_chan_common *chan)
+{
+ /* throw away whatever the channel was doing and get it initialized */
+ u32 chanerr;
+
+ ioat2_quiesce(chan, msecs_to_jiffies(100));
+
+ chanerr = readl(chan->reg_base + IOAT_CHANERR_OFFSET);
+ writel(chanerr, chan->reg_base + IOAT_CHANERR_OFFSET);
+
+ return ioat2_reset_sync(chan, msecs_to_jiffies(200));
+}
+
/**
* ioat2_enumerate_channels - find and initialize the device's channels
* @device: the device to be enumerated
(unsigned long) ioat);
ioat->xfercap_log = xfercap_log;
spin_lock_init(&ioat->ring_lock);
+ if (device->reset_hw(&ioat->base)) {
+ i = 0;
+ break;
+ }
}
dma->chancnt = i;
return i;
struct ioat2_dma_chan *ioat = to_ioat2_chan(c);
struct ioat_chan_common *chan = &ioat->base;
struct ioat_ring_ent **ring;
- u32 chanerr;
int order;
/* have we already been set up? */
/* Setup register to interrupt and write completion status on error */
writew(IOAT_CHANCTRL_RUN, chan->reg_base + IOAT_CHANCTRL_OFFSET);
- chanerr = readl(chan->reg_base + IOAT_CHANERR_OFFSET);
- if (chanerr) {
- dev_err(to_dev(chan), "CHANERR = %x, clearing\n", chanerr);
- writel(chanerr, chan->reg_base + IOAT_CHANERR_OFFSET);
- }
-
/* allocate a completion writeback area */
/* doing 2 32bit writes to mmio since 1 64b write doesn't work */
chan->completion = pci_pool_alloc(chan->device->completion_pool,
tasklet_disable(&chan->cleanup_task);
del_timer_sync(&chan->timer);
device->cleanup_tasklet((unsigned long) ioat);
-
- /* Delay 100ms after reset to allow internal DMA logic to quiesce
- * before removing DMA descriptor resources.
- */
- writeb(IOAT_CHANCMD_RESET,
- chan->reg_base + IOAT_CHANCMD_OFFSET(chan->device->version));
- mdelay(100);
+ device->reset_hw(chan);
spin_lock_bh(&ioat->ring_lock);
descs = ioat2_ring_space(ioat);
int err;
device->enumerate_channels = ioat2_enumerate_channels;
+ device->reset_hw = ioat2_reset_hw;
device->cleanup_tasklet = ioat2_cleanup_tasklet;
device->timer_fn = ioat2_timer_event;
device->self_test = ioat_dma_self_test;
void __ioat2_issue_pending(struct ioat2_dma_chan *ioat);
void ioat2_cleanup_tasklet(unsigned long data);
void ioat2_timer_event(unsigned long data);
+int ioat2_quiesce(struct ioat_chan_common *chan, unsigned long tmo);
+int ioat2_reset_sync(struct ioat_chan_common *chan, unsigned long tmo);
extern struct kobj_type ioat2_ktype;
extern struct kmem_cache *ioat2_cache;
#endif /* IOATDMA_V2_H */
num_descs = ioat2_xferlen_to_descs(ioat, len);
/* we need 2x the number of descriptors to cover greater than 3
- * sources
+ * sources (we need 1 extra source in the q-only continuation
+ * case and 3 extra sources in the p+q continuation case).
*/
- if (src_cnt > 3 || flags & DMA_PREP_CONTINUE) {
+ if (src_cnt + dmaf_p_disabled_continue(flags) > 3 ||
+ (dmaf_continue(flags) && !dmaf_p_disabled_continue(flags))) {
with_ext = 1;
num_descs *= 2;
} else
return 0;
}
+static int ioat3_reset_hw(struct ioat_chan_common *chan)
+{
+ /* throw away whatever the channel was doing and get it
+ * initialized, with ioat3 specific workarounds
+ */
+ struct ioatdma_device *device = chan->device;
+ struct pci_dev *pdev = device->pdev;
+ u32 chanerr;
+ u16 dev_id;
+ int err;
+
+ ioat2_quiesce(chan, msecs_to_jiffies(100));
+
+ chanerr = readl(chan->reg_base + IOAT_CHANERR_OFFSET);
+ writel(chanerr, chan->reg_base + IOAT_CHANERR_OFFSET);
+
+ /* -= IOAT ver.3 workarounds =- */
+ /* Write CHANERRMSK_INT with 3E07h to mask out the errors
+ * that can cause stability issues for IOAT ver.3, and clear any
+ * pending errors
+ */
+ pci_write_config_dword(pdev, IOAT_PCI_CHANERRMASK_INT_OFFSET, 0x3e07);
+ err = pci_read_config_dword(pdev, IOAT_PCI_CHANERR_INT_OFFSET, &chanerr);
+ if (err) {
+ dev_err(&pdev->dev, "channel error register unreachable\n");
+ return err;
+ }
+ pci_write_config_dword(pdev, IOAT_PCI_CHANERR_INT_OFFSET, chanerr);
+
+ /* Clear DMAUNCERRSTS Cfg-Reg Parity Error status bit
+ * (workaround for spurious config parity error after restart)
+ */
+ pci_read_config_word(pdev, IOAT_PCI_DEVICE_ID_OFFSET, &dev_id);
+ if (dev_id == PCI_DEVICE_ID_INTEL_IOAT_TBG0)
+ pci_write_config_dword(pdev, IOAT_PCI_DMAUNCERRSTS_OFFSET, 0x10);
+
+ return ioat2_reset_sync(chan, msecs_to_jiffies(200));
+}
+
int __devinit ioat3_dma_probe(struct ioatdma_device *device, int dca)
{
struct pci_dev *pdev = device->pdev;
struct ioat_chan_common *chan;
bool is_raid_device = false;
int err;
- u16 dev_id;
u32 cap;
device->enumerate_channels = ioat2_enumerate_channels;
+ device->reset_hw = ioat3_reset_hw;
device->self_test = ioat3_dma_self_test;
dma = &device->common;
dma->device_prep_dma_memcpy = ioat2_dma_prep_memcpy_lock;
dma->device_prep_dma_xor_val = NULL;
#endif
- /* -= IOAT ver.3 workarounds =- */
- /* Write CHANERRMSK_INT with 3E07h to mask out the errors
- * that can cause stability issues for IOAT ver.3
- */
- pci_write_config_dword(pdev, IOAT_PCI_CHANERRMASK_INT_OFFSET, 0x3e07);
-
- /* Clear DMAUNCERRSTS Cfg-Reg Parity Error status bit
- * (workaround for spurious config parity error after restart)
- */
- pci_read_config_word(pdev, IOAT_PCI_DEVICE_ID_OFFSET, &dev_id);
- if (dev_id == PCI_DEVICE_ID_INTEL_IOAT_TBG0)
- pci_write_config_dword(pdev, IOAT_PCI_DMAUNCERRSTS_OFFSET, 0x10);
-
err = ioat_probe(device);
if (err)
return err;
#define IOAT_PCI_DEVICE_ID_OFFSET 0x02
#define IOAT_PCI_DMAUNCERRSTS_OFFSET 0x148
+#define IOAT_PCI_CHANERR_INT_OFFSET 0x180
#define IOAT_PCI_CHANERRMASK_INT_OFFSET 0x184
/* MMIO Device Registers */
#include <linux/dmaengine.h>
#include <linux/delay.h>
#include <linux/dma-mapping.h>
-#include <linux/dmapool.h>
#include <linux/platform_device.h>
#include <cpu/dma.h>
#include <asm/dma-sh.h>
#include "shdma.h"
/* DMA descriptor control */
-#define DESC_LAST (-1)
-#define DESC_COMP (1)
-#define DESC_NCOMP (0)
+enum sh_dmae_desc_status {
+ DESC_IDLE,
+ DESC_PREPARED,
+ DESC_SUBMITTED,
+ DESC_COMPLETED, /* completed, have to call callback */
+ DESC_WAITING, /* callback called, waiting for ack / re-submit */
+};
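
The lifecycle behind these states, reconstructed from the prep/submit/tasklet/cleanup paths further down, can be summarized in this comment-only sketch:

/*
 * Descriptor lifecycle (sketch):
 *
 *   DESC_IDLE       on ld_free, available for allocation
 *   DESC_PREPARED   taken by sh_dmae_prep_memcpy(), chain being built
 *   DESC_SUBMITTED  sh_dmae_tx_submit() assigned a cookie, moved to ld_queue
 *   DESC_COMPLETED  dmae_do_tasklet() saw the hardware finish the chunk
 *   DESC_WAITING    __ld_cleanup() ran the callback, waiting for the ack
 *   DESC_IDLE       acked descriptors go back onto ld_free
 */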
#define NR_DESCS_PER_CHANNEL 32
/*
*/
#define RS_DEFAULT (RS_DUAL)
+static void sh_dmae_chan_ld_cleanup(struct sh_dmae_chan *sh_chan, bool all);
+
#define SH_DMAC_CHAN_BASE(id) (dma_base_addr[id])
static void sh_dmae_writel(struct sh_dmae_chan *sh_dc, u32 data, u32 reg)
{
return ts_shift[(chcr & CHCR_TS_MASK) >> CHCR_TS_SHIFT];
}
-static void dmae_set_reg(struct sh_dmae_chan *sh_chan, struct sh_dmae_regs hw)
+static void dmae_set_reg(struct sh_dmae_chan *sh_chan, struct sh_dmae_regs *hw)
{
- sh_dmae_writel(sh_chan, hw.sar, SAR);
- sh_dmae_writel(sh_chan, hw.dar, DAR);
- sh_dmae_writel(sh_chan, hw.tcr >> calc_xmit_shift(sh_chan), TCR);
+ sh_dmae_writel(sh_chan, hw->sar, SAR);
+ sh_dmae_writel(sh_chan, hw->dar, DAR);
+ sh_dmae_writel(sh_chan, hw->tcr >> calc_xmit_shift(sh_chan), TCR);
}
static void dmae_start(struct sh_dmae_chan *sh_chan)
static dma_cookie_t sh_dmae_tx_submit(struct dma_async_tx_descriptor *tx)
{
- struct sh_desc *desc = tx_to_sh_desc(tx);
+ struct sh_desc *desc = tx_to_sh_desc(tx), *chunk, *last = desc, *c;
struct sh_dmae_chan *sh_chan = to_sh_chan(tx->chan);
+ dma_async_tx_callback callback = tx->callback;
dma_cookie_t cookie;
spin_lock_bh(&sh_chan->desc_lock);
if (cookie < 0)
cookie = 1;
- /* If desc only in the case of 1 */
- if (desc->async_tx.cookie != -EBUSY)
- desc->async_tx.cookie = cookie;
- sh_chan->common.cookie = desc->async_tx.cookie;
+ sh_chan->common.cookie = cookie;
+ tx->cookie = cookie;
+
+ /* Mark all chunks of this descriptor as submitted, move to the queue */
+ list_for_each_entry_safe(chunk, c, desc->node.prev, node) {
+ /*
+ * All chunks are on the global ld_free, so we have to find
+ * the end of the chain ourselves
+ */
+ if (chunk != desc && (chunk->mark == DESC_IDLE ||
+ chunk->async_tx.cookie > 0 ||
+ chunk->async_tx.cookie == -EBUSY ||
+ &chunk->node == &sh_chan->ld_free))
+ break;
+ chunk->mark = DESC_SUBMITTED;
+ /* Callback goes to the last chunk */
+ chunk->async_tx.callback = NULL;
+ chunk->cookie = cookie;
+ list_move_tail(&chunk->node, &sh_chan->ld_queue);
+ last = chunk;
+ }
+
+ last->async_tx.callback = callback;
+ last->async_tx.callback_param = tx->callback_param;
- list_splice_init(&desc->tx_list, sh_chan->ld_queue.prev);
+ dev_dbg(sh_chan->dev, "submit #%d@%p on %d: %x[%d] -> %x\n",
+ tx->cookie, &last->async_tx, sh_chan->id,
+ desc->hw.sar, desc->hw.tcr, desc->hw.dar);
spin_unlock_bh(&sh_chan->desc_lock);
return cookie;
}
+/* Called with desc_lock held */
static struct sh_desc *sh_dmae_get_desc(struct sh_dmae_chan *sh_chan)
{
- struct sh_desc *desc, *_desc, *ret = NULL;
+ struct sh_desc *desc;
- spin_lock_bh(&sh_chan->desc_lock);
- list_for_each_entry_safe(desc, _desc, &sh_chan->ld_free, node) {
- if (async_tx_test_ack(&desc->async_tx)) {
+ list_for_each_entry(desc, &sh_chan->ld_free, node)
+ if (desc->mark != DESC_PREPARED) {
+ BUG_ON(desc->mark != DESC_IDLE);
list_del(&desc->node);
- ret = desc;
- break;
+ return desc;
}
- }
- spin_unlock_bh(&sh_chan->desc_lock);
-
- return ret;
-}
-
-static void sh_dmae_put_desc(struct sh_dmae_chan *sh_chan, struct sh_desc *desc)
-{
- if (desc) {
- spin_lock_bh(&sh_chan->desc_lock);
-
- list_splice_init(&desc->tx_list, &sh_chan->ld_free);
- list_add(&desc->node, &sh_chan->ld_free);
- spin_unlock_bh(&sh_chan->desc_lock);
- }
+ return NULL;
}
static int sh_dmae_alloc_chan_resources(struct dma_chan *chan)
dma_async_tx_descriptor_init(&desc->async_tx,
&sh_chan->common);
desc->async_tx.tx_submit = sh_dmae_tx_submit;
- desc->async_tx.flags = DMA_CTRL_ACK;
- INIT_LIST_HEAD(&desc->tx_list);
- sh_dmae_put_desc(sh_chan, desc);
+ desc->mark = DESC_IDLE;
spin_lock_bh(&sh_chan->desc_lock);
+ list_add(&desc->node, &sh_chan->ld_free);
sh_chan->descs_allocated++;
}
spin_unlock_bh(&sh_chan->desc_lock);
struct sh_desc *desc, *_desc;
LIST_HEAD(list);
- BUG_ON(!list_empty(&sh_chan->ld_queue));
+ /* Prepared and not submitted descriptors can still be on the queue */
+ if (!list_empty(&sh_chan->ld_queue))
+ sh_dmae_chan_ld_cleanup(sh_chan, true);
+
spin_lock_bh(&sh_chan->desc_lock);
list_splice_init(&sh_chan->ld_free, &list);
struct sh_dmae_chan *sh_chan;
struct sh_desc *first = NULL, *prev = NULL, *new;
size_t copy_size;
+ LIST_HEAD(tx_list);
+ int chunks = (len + SH_DMA_TCR_MAX) / (SH_DMA_TCR_MAX + 1);
if (!chan)
return NULL;
sh_chan = to_sh_chan(chan);
+ /* Have to lock the whole loop to protect against concurrent release */
+ spin_lock_bh(&sh_chan->desc_lock);
+
+ /*
+ * Chaining:
+ * the first descriptor is what the user deals with in all API calls;
+ * its cookie is initially set to -EBUSY and becomes a positive number
+ * at tx-submit
+ * if more than one chunk is needed, further chunks have cookie = -EINVAL;
+ * the last chunk, if not equal to the first, has cookie = -ENOSPC
+ * all chunks are linked onto the tx_list head with their .node heads
+ * only during this function, then they are immediately spliced
+ * back onto the free list in the form of a chain
+ */
do {
- /* Allocate the link descriptor from DMA pool */
+ /* Allocate the link descriptor from the free list */
new = sh_dmae_get_desc(sh_chan);
if (!new) {
dev_err(sh_chan->dev,
"No free memory for link descriptor\n");
- goto err_get_desc;
+ list_for_each_entry(new, &tx_list, node)
+ new->mark = DESC_IDLE;
+ list_splice(&tx_list, &sh_chan->ld_free);
+ spin_unlock_bh(&sh_chan->desc_lock);
+ return NULL;
}
- copy_size = min(len, (size_t)SH_DMA_TCR_MAX);
+ copy_size = min(len, (size_t)SH_DMA_TCR_MAX + 1);
new->hw.sar = dma_src;
new->hw.dar = dma_dest;
new->hw.tcr = copy_size;
- if (!first)
+ if (!first) {
+ /* First desc */
+ new->async_tx.cookie = -EBUSY;
first = new;
+ } else {
+ /* Other desc - invisible to the user */
+ new->async_tx.cookie = -EINVAL;
+ }
- new->mark = DESC_NCOMP;
- async_tx_ack(&new->async_tx);
+ dev_dbg(sh_chan->dev,
+ "chaining %u of %u with %p, dst %x, cookie %d\n",
+ copy_size, len, &new->async_tx, dma_dest,
+ new->async_tx.cookie);
+
+ new->mark = DESC_PREPARED;
+ new->async_tx.flags = flags;
+ new->chunks = chunks--;
prev = new;
len -= copy_size;
dma_src += copy_size;
dma_dest += copy_size;
/* Insert the link descriptor to the LD ring */
- list_add_tail(&new->node, &first->tx_list);
+ list_add_tail(&new->node, &tx_list);
} while (len);
- new->async_tx.flags = flags; /* client is in control of this ack */
- new->async_tx.cookie = -EBUSY; /* Last desc */
+ if (new != first)
+ new->async_tx.cookie = -ENOSPC;
- return &first->async_tx;
+ /* Put them back on the free list, so they don't get lost */
+ list_splice_tail(&tx_list, &sh_chan->ld_free);
-err_get_desc:
- sh_dmae_put_desc(sh_chan, first);
- return NULL;
+ spin_unlock_bh(&sh_chan->desc_lock);
+ return &first->async_tx;
}
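
To make the chunking arithmetic concrete, here is a stand-alone userspace sketch (not driver code) of how a transfer larger than SH_DMA_TCR_MAX is split; the 40MB figure is just an example.

#include <stdio.h>
#include <stddef.h>

#define SH_DMA_TCR_MAX 0x00FFFFFF	/* as defined in shdma.h */

int main(void)
{
	size_t len = 40 << 20;		/* example: a 40MB transfer */
	int chunks = (len + SH_DMA_TCR_MAX) / (SH_DMA_TCR_MAX + 1);
	size_t first_copy = len < SH_DMA_TCR_MAX + 1 ? len : SH_DMA_TCR_MAX + 1;

	/* prints "3 chunks, first chunk 16777216 bytes" */
	printf("%d chunks, first chunk %zu bytes\n", chunks, first_copy);
	return 0;
}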
-/*
- * sh_chan_ld_cleanup - Clean up link descriptors
- *
- * This function clean up the ld_queue of DMA channel.
- */
-static void sh_dmae_chan_ld_cleanup(struct sh_dmae_chan *sh_chan)
+static dma_async_tx_callback __ld_cleanup(struct sh_dmae_chan *sh_chan, bool all)
{
struct sh_desc *desc, *_desc;
+ /* Is the "exposed" head of a chain acked? */
+ bool head_acked = false;
+ dma_cookie_t cookie = 0;
+ dma_async_tx_callback callback = NULL;
+ void *param = NULL;
spin_lock_bh(&sh_chan->desc_lock);
list_for_each_entry_safe(desc, _desc, &sh_chan->ld_queue, node) {
- dma_async_tx_callback callback;
- void *callback_param;
-
- /* non send data */
- if (desc->mark == DESC_NCOMP)
+ struct dma_async_tx_descriptor *tx = &desc->async_tx;
+
+ BUG_ON(tx->cookie > 0 && tx->cookie != desc->cookie);
+ BUG_ON(desc->mark != DESC_SUBMITTED &&
+ desc->mark != DESC_COMPLETED &&
+ desc->mark != DESC_WAITING);
+
+ /*
+ * The queue is ordered, and we use this loop to (1) clean up all
+ * completed descriptors and (2) update the descriptor flags of
+ * any chunks in a (partially) completed chain.
+ */
+ if (!all && desc->mark == DESC_SUBMITTED &&
+ desc->cookie != cookie)
break;
- /* send data sesc */
- callback = desc->async_tx.callback;
- callback_param = desc->async_tx.callback_param;
+ if (tx->cookie > 0)
+ cookie = tx->cookie;
- /* Remove from ld_queue list */
- list_splice_init(&desc->tx_list, &sh_chan->ld_free);
+ if (desc->mark == DESC_COMPLETED && desc->chunks == 1) {
+ BUG_ON(sh_chan->completed_cookie != desc->cookie - 1);
+ sh_chan->completed_cookie = desc->cookie;
+ }
- dev_dbg(sh_chan->dev, "link descriptor %p will be recycle.\n",
- desc);
+ /* Call callback on the last chunk */
+ if (desc->mark == DESC_COMPLETED && tx->callback) {
+ desc->mark = DESC_WAITING;
+ callback = tx->callback;
+ param = tx->callback_param;
+ dev_dbg(sh_chan->dev, "descriptor #%d@%p on %d callback\n",
+ tx->cookie, tx, sh_chan->id);
+ BUG_ON(desc->chunks != 1);
+ break;
+ }
- list_move(&desc->node, &sh_chan->ld_free);
- /* Run the link descriptor callback function */
- if (callback) {
- spin_unlock_bh(&sh_chan->desc_lock);
- dev_dbg(sh_chan->dev, "link descriptor %p callback\n",
- desc);
- callback(callback_param);
- spin_lock_bh(&sh_chan->desc_lock);
+ if (tx->cookie > 0 || tx->cookie == -EBUSY) {
+ if (desc->mark == DESC_COMPLETED) {
+ BUG_ON(tx->cookie < 0);
+ desc->mark = DESC_WAITING;
+ }
+ head_acked = async_tx_test_ack(tx);
+ } else {
+ switch (desc->mark) {
+ case DESC_COMPLETED:
+ desc->mark = DESC_WAITING;
+ /* Fall through */
+ case DESC_WAITING:
+ if (head_acked)
+ async_tx_ack(&desc->async_tx);
+ }
+ }
+
+ dev_dbg(sh_chan->dev, "descriptor %p #%d completed.\n",
+ tx, tx->cookie);
+
+ if (((desc->mark == DESC_COMPLETED ||
+ desc->mark == DESC_WAITING) &&
+ async_tx_test_ack(&desc->async_tx)) || all) {
+ /* Remove from ld_queue list */
+ desc->mark = DESC_IDLE;
+ list_move(&desc->node, &sh_chan->ld_free);
}
}
spin_unlock_bh(&sh_chan->desc_lock);
+
+ if (callback)
+ callback(param);
+
+ return callback;
+}
+
+/*
+ * sh_chan_ld_cleanup - Clean up link descriptors
+ *
+ * This function cleans up the ld_queue of a DMA channel.
+ */
+static void sh_dmae_chan_ld_cleanup(struct sh_dmae_chan *sh_chan, bool all)
+{
+ while (__ld_cleanup(sh_chan, all))
+ ;
}
static void sh_chan_xfer_ld_queue(struct sh_dmae_chan *sh_chan)
{
- struct list_head *ld_node;
- struct sh_dmae_regs hw;
+ struct sh_desc *sd;
+ spin_lock_bh(&sh_chan->desc_lock);
/* DMA work check */
- if (dmae_is_busy(sh_chan))
+ if (dmae_is_busy(sh_chan)) {
+ spin_unlock_bh(&sh_chan->desc_lock);
return;
+ }
	/* Find the first un-transferred descriptor */
- for (ld_node = sh_chan->ld_queue.next;
- (ld_node != &sh_chan->ld_queue)
- && (to_sh_desc(ld_node)->mark == DESC_COMP);
- ld_node = ld_node->next)
- cpu_relax();
-
- if (ld_node != &sh_chan->ld_queue) {
- /* Get the ld start address from ld_queue */
- hw = to_sh_desc(ld_node)->hw;
- dmae_set_reg(sh_chan, hw);
- dmae_start(sh_chan);
- }
+ list_for_each_entry(sd, &sh_chan->ld_queue, node)
+ if (sd->mark == DESC_SUBMITTED) {
+ /* Get the ld start address from ld_queue */
+ dmae_set_reg(sh_chan, &sd->hw);
+ dmae_start(sh_chan);
+ break;
+ }
+
+ spin_unlock_bh(&sh_chan->desc_lock);
}
static void sh_dmae_memcpy_issue_pending(struct dma_chan *chan)
dma_cookie_t last_used;
dma_cookie_t last_complete;
- sh_dmae_chan_ld_cleanup(sh_chan);
+ sh_dmae_chan_ld_cleanup(sh_chan, false);
last_used = chan->cookie;
last_complete = sh_chan->completed_cookie;
- if (last_complete == -EBUSY)
- last_complete = last_used;
+ BUG_ON(last_complete < 0);
if (done)
*done = last_complete;
err = sh_dmae_rst(0);
if (err)
return err;
+#ifdef SH_DMAC_BASE1
if (shdev->pdata.mode & SHDMA_DMAOR1) {
err = sh_dmae_rst(1);
if (err)
return err;
}
+#endif
disable_irq(irq);
return IRQ_HANDLED;
}
static void dmae_do_tasklet(unsigned long data)
{
struct sh_dmae_chan *sh_chan = (struct sh_dmae_chan *)data;
- struct sh_desc *desc, *_desc, *cur_desc = NULL;
+ struct sh_desc *desc;
u32 sar_buf = sh_dmae_readl(sh_chan, SAR);
- list_for_each_entry_safe(desc, _desc,
- &sh_chan->ld_queue, node) {
- if ((desc->hw.sar + desc->hw.tcr) == sar_buf) {
- cur_desc = desc;
+ spin_lock(&sh_chan->desc_lock);
+ list_for_each_entry(desc, &sh_chan->ld_queue, node) {
+ if ((desc->hw.sar + desc->hw.tcr) == sar_buf &&
+ desc->mark == DESC_SUBMITTED) {
+ dev_dbg(sh_chan->dev, "done #%d@%p dst %u\n",
+ desc->async_tx.cookie, &desc->async_tx,
+ desc->hw.dar);
+ desc->mark = DESC_COMPLETED;
break;
}
}
+ spin_unlock(&sh_chan->desc_lock);
- if (cur_desc) {
- switch (cur_desc->async_tx.cookie) {
- case 0: /* other desc data */
- break;
- case -EBUSY: /* last desc */
- sh_chan->completed_cookie =
- cur_desc->async_tx.cookie;
- break;
- default: /* first desc ( 0 < )*/
- sh_chan->completed_cookie =
- cur_desc->async_tx.cookie - 1;
- break;
- }
- cur_desc->mark = DESC_COMP;
- }
/* Next desc */
sh_chan_xfer_ld_queue(sh_chan);
- sh_dmae_chan_ld_cleanup(sh_chan);
+ sh_dmae_chan_ld_cleanup(sh_chan, false);
}
static unsigned int get_dmae_irq(unsigned int id)
#ifndef __DMA_SHDMA_H
#define __DMA_SHDMA_H
-#include <linux/device.h>
-#include <linux/dmapool.h>
#include <linux/dmaengine.h>
+#include <linux/interrupt.h>
+#include <linux/list.h>
#define SH_DMA_TCR_MAX 0x00FFFFFF /* 16MB */
};
struct sh_desc {
- struct list_head tx_list;
struct sh_dmae_regs hw;
struct list_head node;
struct dma_async_tx_descriptor async_tx;
+ dma_cookie_t cookie;
+ int chunks;
int mark;
};
+struct device;
+
struct sh_dmae_chan {
dma_cookie_t completed_cookie; /* The maximum cookie completed */
spinlock_t desc_lock; /* Descriptor operation lock */
+menu "IEEE 1394 (FireWire) support"
+ depends on PCI || BROKEN
+ # firewire-core does not depend on PCI but is
+ # not useful without a PCI controller driver
+
comment "You can enable one or both FireWire driver stacks."
-comment "See the help texts for more information."
+comment "The newer stack is recommended."
config FIREWIRE
tristate "FireWire driver stack"
To compile this driver as a module, say M here: the module will be
called firewire-core.
- This module functionally replaces ieee1394, raw1394, and video1394.
- To access it from application programs, you generally need at least
- libraw1394 v2. IIDC/DCAM applications need libdc1394 v2.
- No libraries are required to access storage devices through the
- firewire-sbp2 driver.
-
- NOTE:
- FireWire audio devices currently require the old drivers (ieee1394,
- ohci1394, raw1394).
-
config FIREWIRE_OHCI
tristate "OHCI-1394 controllers"
depends on PCI && FIREWIRE
is the only chipset in use, so say Y here.
To compile this driver as a module, say M here: The module will be
- called firewire-ohci. It replaces ohci1394 of the classic IEEE 1394
- stack.
-
- NOTE:
- If you want to install firewire-ohci and ohci1394 together, you
- should configure them only as modules and blacklist the driver(s)
- which you don't want to have auto-loaded. Add either
-
- blacklist firewire-ohci
- or
- blacklist ohci1394
- blacklist video1394
- blacklist dv1394
-
- to /etc/modprobe.conf or /etc/modprobe.d/* and update modprobe.conf
- depending on your distribution.
+ called firewire-ohci.
config FIREWIRE_OHCI_DEBUG
bool
like scanners.
To compile this driver as a module, say M here: The module will be
- called firewire-sbp2. It replaces sbp2 of the classic IEEE 1394
- stack.
+ called firewire-sbp2.
You should also enable support for disks, CD-ROMs, etc. in the SCSI
configuration section.
NOTE, this driver is not stable yet!
To compile this driver as a module, say M here: The module will be
- called firewire-net. It replaces eth1394 of the classic IEEE 1394
- stack.
+ called firewire-net.
+
+source "drivers/ieee1394/Kconfig"
+
+endmenu
struct inbound_transaction_resource *r = container_of(resource,
struct inbound_transaction_resource, resource);
- fw_send_response(client->device->card, r->request,
- RCODE_CONFLICT_ERROR);
+ if (r->request)
+ fw_send_response(client->device->card, r->request,
+ RCODE_CONFLICT_ERROR);
kfree(r);
}
failed:
kfree(r);
kfree(e);
- fw_send_response(card, request, RCODE_CONFLICT_ERROR);
+ if (request)
+ fw_send_response(card, request, RCODE_CONFLICT_ERROR);
}
static void release_address_handler(struct client *client,
r = container_of(resource, struct inbound_transaction_resource,
resource);
- if (request->length < r->length)
- r->length = request->length;
-
- if (copy_from_user(r->data, u64_to_uptr(request->data), r->length)) {
- ret = -EFAULT;
- goto out;
+ if (r->request) {
+ if (request->length < r->length)
+ r->length = request->length;
+ if (copy_from_user(r->data, u64_to_uptr(request->data),
+ r->length)) {
+ ret = -EFAULT;
+ kfree(r->request);
+ goto out;
+ }
+ fw_send_response(client->device->card, r->request,
+ request->rcode);
}
-
- fw_send_response(client->device->card, r->request, request->rcode);
out:
kfree(r);
return NULL;
}
+static bool is_enclosing_handler(struct fw_address_handler *handler,
+ unsigned long long offset, size_t length)
+{
+ return handler->offset <= offset &&
+ offset + length <= handler->offset + handler->length;
+}
+
static struct fw_address_handler *lookup_enclosing_address_handler(
struct list_head *list, unsigned long long offset, size_t length)
{
struct fw_address_handler *handler;
list_for_each_entry(handler, list, link) {
- if (handler->offset <= offset &&
- offset + length <= handler->offset + handler->length)
+ if (is_enclosing_handler(handler, offset, length))
return handler;
}
{ .start = 0xfffff0000900ULL, .end = 0x1000000000000ULL, };
#endif /* 0 */
+static bool is_in_fcp_region(u64 offset, size_t length)
+{
+ return offset >= (CSR_REGISTER_BASE | CSR_FCP_COMMAND) &&
+ offset + length <= (CSR_REGISTER_BASE | CSR_FCP_END);
+}
+
/**
* fw_core_add_address_handler - register for incoming requests
* @handler: callback
* give the details of the particular request.
*
* Return value: 0 on success, non-zero otherwise.
+ *
* The start offset of the handler's address region is determined by
* fw_core_add_address_handler() and is returned in handler->offset.
+ *
+ * Address allocations are exclusive, except for the FCP registers.
*/
int fw_core_add_address_handler(struct fw_address_handler *handler,
const struct fw_address_region *region)
handler->offset = region->start;
while (handler->offset + handler->length <= region->end) {
- other =
- lookup_overlapping_address_handler(&address_handler_list,
- handler->offset,
- handler->length);
+ if (is_in_fcp_region(handler->offset, handler->length))
+ other = NULL;
+ else
+ other = lookup_overlapping_address_handler
+ (&address_handler_list,
+ handler->offset, handler->length);
if (other != NULL) {
handler->offset += other->length;
} else {
void fw_send_response(struct fw_card *card,
struct fw_request *request, int rcode)
{
+ if (WARN_ONCE(!request, "invalid for FCP address handlers"))
+ return;
+
/* unified transaction or broadcast transaction: don't respond */
if (request->ack != ACK_PENDING ||
HEADER_DESTINATION_IS_BROADCAST(request->request_header[0])) {
}
EXPORT_SYMBOL(fw_send_response);
-void fw_core_handle_request(struct fw_card *card, struct fw_packet *p)
+static void handle_exclusive_region_request(struct fw_card *card,
+ struct fw_packet *p,
+ struct fw_request *request,
+ unsigned long long offset)
{
struct fw_address_handler *handler;
- struct fw_request *request;
- unsigned long long offset;
unsigned long flags;
int tcode, destination, source;
- if (p->ack != ACK_PENDING && p->ack != ACK_COMPLETE)
- return;
-
- request = allocate_request(p);
- if (request == NULL) {
- /* FIXME: send statically allocated busy packet. */
- return;
- }
-
- offset =
- ((unsigned long long)
- HEADER_GET_OFFSET_HIGH(p->header[1]) << 32) | p->header[2];
tcode = HEADER_GET_TCODE(p->header[0]);
destination = HEADER_GET_DESTINATION(p->header[0]);
source = HEADER_GET_SOURCE(p->header[1]);
request->data, request->length,
handler->callback_data);
}
+
+static void handle_fcp_region_request(struct fw_card *card,
+ struct fw_packet *p,
+ struct fw_request *request,
+ unsigned long long offset)
+{
+ struct fw_address_handler *handler;
+ unsigned long flags;
+ int tcode, destination, source;
+
+ if ((offset != (CSR_REGISTER_BASE | CSR_FCP_COMMAND) &&
+ offset != (CSR_REGISTER_BASE | CSR_FCP_RESPONSE)) ||
+ request->length > 0x200) {
+ fw_send_response(card, request, RCODE_ADDRESS_ERROR);
+
+ return;
+ }
+
+ tcode = HEADER_GET_TCODE(p->header[0]);
+ destination = HEADER_GET_DESTINATION(p->header[0]);
+ source = HEADER_GET_SOURCE(p->header[1]);
+
+ if (tcode != TCODE_WRITE_QUADLET_REQUEST &&
+ tcode != TCODE_WRITE_BLOCK_REQUEST) {
+ fw_send_response(card, request, RCODE_TYPE_ERROR);
+
+ return;
+ }
+
+ spin_lock_irqsave(&address_handler_lock, flags);
+ list_for_each_entry(handler, &address_handler_list, link) {
+ if (is_enclosing_handler(handler, offset, request->length))
+ handler->address_callback(card, NULL, tcode,
+ destination, source,
+ p->generation, p->speed,
+ offset, request->data,
+ request->length,
+ handler->callback_data);
+ }
+ spin_unlock_irqrestore(&address_handler_lock, flags);
+
+ fw_send_response(card, request, RCODE_COMPLETE);
+}
+
+void fw_core_handle_request(struct fw_card *card, struct fw_packet *p)
+{
+ struct fw_request *request;
+ unsigned long long offset;
+
+ if (p->ack != ACK_PENDING && p->ack != ACK_COMPLETE)
+ return;
+
+ request = allocate_request(p);
+ if (request == NULL) {
+ /* FIXME: send statically allocated busy packet. */
+ return;
+ }
+
+ offset = ((u64)HEADER_GET_OFFSET_HIGH(p->header[1]) << 32) |
+ p->header[2];
+
+ if (!is_in_fcp_region(offset, request->length))
+ handle_exclusive_region_request(card, p, request, offset);
+ else
+ handle_fcp_region_request(card, p, request, offset);
+}
EXPORT_SYMBOL(fw_core_handle_request);
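
Tying the FCP changes together, a minimal sketch of an FCP consumer under the new rules might look like the following. The "example_" names, the handler initialization and the callback body are illustrative assumptions; the region constants mirror is_in_fcp_region() above. Note that request is NULL for FCP handlers, so the callback must not call fw_send_response(); the core answers with RCODE_COMPLETE on the handlers' behalf, and several drivers may register for the same FCP addresses.

#include <linux/firewire.h>
#include <linux/firewire-constants.h>

/* Illustrative sketch of an FCP response listener (not from this patch). */
static void example_fcp_callback(struct fw_card *card,
				 struct fw_request *request,
				 int tcode, int destination, int source,
				 int generation, int speed,
				 unsigned long long offset,
				 void *payload, size_t length,
				 void *callback_data)
{
	/* request == NULL in the FCP region: just consume the frame */
}

static const struct fw_address_region example_fcp_region = {
	.start	= CSR_REGISTER_BASE | CSR_FCP_RESPONSE,
	.end	= CSR_REGISTER_BASE | CSR_FCP_END,
};

static struct fw_address_handler example_fcp_handler = {
	.length			= CSR_FCP_END - CSR_FCP_RESPONSE,
	.address_callback	= example_fcp_callback,
};

static int example_register_fcp(void)
{
	/* FCP addresses are shared, so this no longer conflicts with
	 * other drivers listening on the same registers. */
	return fw_core_add_address_handler(&example_fcp_handler,
					   &example_fcp_region);
}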
void fw_core_handle_response(struct fw_card *card, struct fw_packet *p)
if (rest == 0)
return -EINVAL;
- /* FIXME: make packet-per-buffer/dual-buffer a context option */
while (rest > 0) {
d = context_get_descriptors(&ctx->context,
z + header_z, &d_bus);
}
version = reg_read(ohci, OHCI1394_Version) & 0x00ff00ff;
+#if 0
+ /* FIXME: make it a context option or remove dual-buffer mode */
ohci->use_dualbuffer = version >= OHCI_VERSION_1_1;
+#endif
/* dual-buffer mode is broken if more than one IR context is active */
if (dev->vendor == PCI_VENDOR_ID_AGERE &&
-menu "IEEE 1394 (FireWire) support"
- depends on PCI || BROKEN
-
-source "drivers/firewire/Kconfig"
-
config IEEE1394
tristate "Legacy alternative FireWire driver stack"
depends on PCI || BROKEN
is the core support only, you will also need to select a driver for
your IEEE 1394 adapter.
- To compile this driver as a module, say M here: the
- module will be called ieee1394.
+ To compile this driver as a module, say M here: the module will be
+ called ieee1394.
+
+ NOTE:
+ ieee1394 is superseded by the newer firewire-core driver. See
+ http://ieee1394.wiki.kernel.org/index.php/Juju_Migration for
+ further information on how to switch to the new FireWire drivers.
config IEEE1394_OHCI1394
tristate "OHCI-1394 controllers"
use one of these chipsets. It should work with any OHCI-1394
compliant card, however.
- To compile this driver as a module, say M here: the
- module will be called ohci1394.
+ To compile this driver as a module, say M here: the module will be
+ called ohci1394.
NOTE:
+ ohci1394 is superseded by the newer firewire-ohci driver. See
+ http://ieee1394.wiki.kernel.org/index.php/Juju_Migration for
+ further information on how to switch to the new FireWire drivers.
+
If you want to install firewire-ohci and ohci1394 together, you
should configure them only as modules and blacklist the driver(s)
which you don't want to have auto-loaded. Add either
- blacklist firewire-ohci
- or
blacklist ohci1394
blacklist video1394
blacklist dv1394
+ or
+ blacklist firewire-ohci
to /etc/modprobe.conf or /etc/modprobe.d/* and update modprobe.conf
depending on your distribution.
Instruments PCILynx chip. Note: this driver is written for revision
2 of this chip and may not work with revision 0.
- To compile this driver as a module, say M here: the
- module will be called pcilynx.
+ To compile this driver as a module, say M here: the module will be
+ called pcilynx.
Only some old and now very rare PCI and CardBus cards and
PowerMacs G3 B&W contain the PCILynx controller. Therefore
You should also enable support for disks, CD-ROMs, etc. in the SCSI
configuration section.
+ To compile this driver as a module, say M here: the module will be
+ called sbp2.
+
+ NOTE:
+ sbp2 is superseded by the newer firewire-sbp2 driver. See
+ http://ieee1394.wiki.kernel.org/index.php/Juju_Migration for
+ further information on how to switch to the new FireWire drivers.
+
config IEEE1394_SBP2_PHYS_DMA
bool "Enable replacement for physical DMA in SBP2"
depends on IEEE1394_SBP2 && VIRT_TO_BUS && EXPERIMENTAL
The module is called eth1394 although it does not emulate Ethernet.
+ NOTE:
+ eth1394 is superseded by the newer firewire-net driver. See
+ http://ieee1394.wiki.kernel.org/index.php/Juju_Migration for
+ further information on how to switch to the new FireWire drivers.
+
config IEEE1394_RAWIO
tristate "raw1394 userspace interface"
depends on IEEE1394
To compile this driver as a module, say M here: the module will be
called raw1394.
+ NOTE:
+ raw1394 is superseded by the newer firewire-core driver. See
+ http://ieee1394.wiki.kernel.org/index.php/Juju_Migration for
+ further information on how to switch to the new FireWire drivers.
+
config IEEE1394_VIDEO1394
tristate "video1394 userspace interface"
depends on IEEE1394 && IEEE1394_OHCI1394
To compile this driver as a module, say M here: the module will be
called video1394.
+ NOTE:
+ video1394 is superseded by the newer firewire-core driver. See
+ http://ieee1394.wiki.kernel.org/index.php/Juju_Migration for
+ further information on how to switch to the new FireWire drivers.
+
config IEEE1394_DV1394
tristate "dv1394 userspace interface (deprecated)"
depends on IEEE1394 && IEEE1394_OHCI1394
help
The dv1394 driver is unsupported and may be removed from Linux in a
- future release. Its functionality is now provided by raw1394 together
- with libraries such as libiec61883.
+ future release. Its functionality is now provided by either
+ raw1394 or firewire-core together with libraries such as libiec61883.
config IEEE1394_VERBOSEDEBUG
bool "Excessive debugging output"
will quickly result in large amounts of data sent to the system log.
Say Y if you really need the debugging output. Everyone else says N.
-
-endmenu
}
/*
+ * Only the left/right direction (under/over 0x8000) should be used for
+ * forward/reverse motor direction, to keep the calculation fast and simple.
+ */
+static u16 ml_calculate_direction(u16 direction, u16 force,
+ u16 new_direction, u16 new_force)
+{
+ if (!force)
+ return new_direction;
+ if (!new_force)
+ return direction;
+ return (((u32)(direction >> 1) * force +
+ (new_direction >> 1) * new_force) /
+ (force + new_force)) << 1;
+}
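
A worked example may help: with the helper above, two components of equal force pointing at 0x4000 and 0xc000 average to the midpoint 0x8000. The small userspace program below (an illustration, not driver code) reproduces the arithmetic.

#include <stdint.h>
#include <stdio.h>

/* Stand-alone copy of the averaging rule above, for illustration only. */
static uint16_t combine_direction(uint16_t dir, uint16_t force,
				  uint16_t new_dir, uint16_t new_force)
{
	if (!force)
		return new_dir;
	if (!new_force)
		return dir;
	return (((uint32_t)(dir >> 1) * force +
		 (new_dir >> 1) * new_force) /
		(force + new_force)) << 1;
}

int main(void)
{
	/* Equal forces at 0x4000 and 0xc000 average to 0x8000. */
	printf("0x%04x\n", combine_direction(0x4000, 0x2000, 0xc000, 0x2000));
	return 0;
}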
+
+/*
* Combine two effects and apply gain.
*/
static void ml_combine_effects(struct ff_effect *effect,
struct ml_effect_state *state,
- unsigned int gain)
+ int gain)
{
struct ff_effect *new = state->effect;
unsigned int strong, weak, i;
break;
case FF_RUMBLE:
- strong = new->u.rumble.strong_magnitude * gain / 0xffff;
- weak = new->u.rumble.weak_magnitude * gain / 0xffff;
+ strong = (u32)new->u.rumble.strong_magnitude * gain / 0xffff;
+ weak = (u32)new->u.rumble.weak_magnitude * gain / 0xffff;
+
+ if (effect->u.rumble.strong_magnitude + strong)
+ effect->direction = ml_calculate_direction(
+ effect->direction,
+ effect->u.rumble.strong_magnitude,
+ new->direction, strong);
+ else if (effect->u.rumble.weak_magnitude + weak)
+ effect->direction = ml_calculate_direction(
+ effect->direction,
+ effect->u.rumble.weak_magnitude,
+ new->direction, weak);
+ else
+ effect->direction = 0;
effect->u.rumble.strong_magnitude =
min(strong + effect->u.rumble.strong_magnitude,
0xffffU);
/* here we also scale it 0x7fff => 0xffff */
i = i * gain / 0x7fff;
+ if (effect->u.rumble.strong_magnitude + i)
+ effect->direction = ml_calculate_direction(
+ effect->direction,
+ effect->u.rumble.strong_magnitude,
+ new->direction, i);
+ else
+ effect->direction = 0;
effect->u.rumble.strong_magnitude =
min(i + effect->u.rumble.strong_magnitude, 0xffffU);
effect->u.rumble.weak_magnitude =
msecs_to_jiffies(state->effect->replay.length);
state->adj_at = state->play_at;
- ml_schedule_timer(ml);
-
} else {
debug("initiated stop");
__set_bit(FF_EFFECT_ABORTING, &state->flags);
else
__clear_bit(FF_EFFECT_STARTED, &state->flags);
-
- ml_play_effects(ml);
}
+ ml_play_effects(ml);
+
return 0;
}
return 0;
}
-static void iforce_release(struct input_dev *dev)
+static void iforce_close(struct input_dev *dev)
{
struct iforce *iforce = input_get_drvdata(dev);
int i;
/* Disable force feedback playback */
iforce_send_packet(iforce, FF_CMD_ENABLE, "\001");
+ /* Wait for the command to complete */
+ wait_event_interruptible(iforce->wait,
+ !test_bit(IFORCE_XMIT_RUNNING, iforce->xmit_flags));
}
switch (iforce->bus) {
#ifdef CONFIG_JOYSTICK_IFORCE_USB
- case IFORCE_USB:
- usb_kill_urb(iforce->irq);
-
- /* The device was unplugged before the file
- * was released */
- if (iforce->usbdev == NULL) {
- iforce_delete_device(iforce);
- kfree(iforce);
- }
- break;
-#endif
- }
-}
-
-void iforce_delete_device(struct iforce *iforce)
-{
- switch (iforce->bus) {
-#ifdef CONFIG_JOYSTICK_IFORCE_USB
case IFORCE_USB:
- iforce_usb_delete(iforce);
+ usb_kill_urb(iforce->irq);
+ usb_kill_urb(iforce->out);
+ usb_kill_urb(iforce->ctrl);
break;
#endif
#ifdef CONFIG_JOYSTICK_IFORCE_232
input_dev->name = "Unknown I-Force device";
input_dev->open = iforce_open;
- input_dev->close = iforce_release;
+ input_dev->close = iforce_close;
/*
* On-device memory allocation.
struct iforce *iforce = urb->context;
if (urb->status) {
+ clear_bit(IFORCE_XMIT_RUNNING, iforce->xmit_flags);
dbg("urb->status %d, exiting", urb->status);
return;
}
return err;
}
-/* Called by iforce_delete() */
-void iforce_usb_delete(struct iforce* iforce)
-{
- usb_kill_urb(iforce->irq);
- usb_kill_urb(iforce->out);
- usb_kill_urb(iforce->ctrl);
-
- usb_free_urb(iforce->irq);
- usb_free_urb(iforce->out);
- usb_free_urb(iforce->ctrl);
-}
-
static void iforce_usb_disconnect(struct usb_interface *intf)
{
struct iforce *iforce = usb_get_intfdata(intf);
- int open = 0; /* FIXME! iforce->dev.handle->open; */
usb_set_intfdata(intf, NULL);
- if (iforce) {
- iforce->usbdev = NULL;
- input_unregister_device(iforce->dev);
- if (!open) {
- iforce_delete_device(iforce);
- kfree(iforce);
- }
- }
+ input_unregister_device(iforce->dev);
+
+ usb_free_urb(iforce->irq);
+ usb_free_urb(iforce->out);
+ usb_free_urb(iforce->ctrl);
+
+ kfree(iforce);
}
static struct usb_device_id iforce_usb_ids [] = {
/* iforce-usb.c */
void iforce_usb_xmit(struct iforce *iforce);
-void iforce_usb_delete(struct iforce *iforce);
/* iforce-main.c */
int iforce_init_device(struct iforce *iforce);
-void iforce_delete_device(struct iforce *iforce);
/* iforce-packets.c */
int iforce_control_playback(struct iforce*, u16 id, unsigned int);
#define ATKBD_CMD_GETID 0x02f2
#define ATKBD_CMD_SETREP 0x10f3
#define ATKBD_CMD_ENABLE 0x00f4
-#define ATKBD_CMD_RESET_DIS 0x00f5
+#define ATKBD_CMD_RESET_DIS 0x00f5 /* Reset to defaults and disable */
+#define ATKBD_CMD_RESET_DEF 0x00f6 /* Reset to defaults */
#define ATKBD_CMD_SETALL_MBR 0x00fa
#define ATKBD_CMD_RESET_BAT 0x02ff
#define ATKBD_CMD_RESEND 0x00fe
struct atkbd *atkbd = serio_get_drvdata(serio);
atkbd_disable(atkbd);
- ps2_command(&atkbd->ps2dev, NULL, ATKBD_CMD_RESET_BAT);
+ ps2_command(&atkbd->ps2dev, NULL, ATKBD_CMD_RESET_DEF);
}
unsigned short *keycodes;
unsigned int row_shift;
+ DECLARE_BITMAP(disabled_gpios, MATRIX_MAX_ROWS);
+
uint32_t last_key_state[MATRIX_MAX_COLS];
struct delayed_work work;
+ spinlock_t lock;
bool scan_pending;
bool stopped;
- spinlock_t lock;
};
/*
matrix_keypad_stop(keypad->input_dev);
- if (device_may_wakeup(&pdev->dev))
- for (i = 0; i < pdata->num_row_gpios; i++)
- enable_irq_wake(gpio_to_irq(pdata->row_gpios[i]));
+ if (device_may_wakeup(&pdev->dev)) {
+ for (i = 0; i < pdata->num_row_gpios; i++) {
+ if (!test_bit(i, keypad->disabled_gpios)) {
+ unsigned int gpio = pdata->row_gpios[i];
+
+ if (enable_irq_wake(gpio_to_irq(gpio)) == 0)
+ __set_bit(i, keypad->disabled_gpios);
+ }
+ }
+ }
return 0;
}
const struct matrix_keypad_platform_data *pdata = keypad->pdata;
int i;
- if (device_may_wakeup(&pdev->dev))
- for (i = 0; i < pdata->num_row_gpios; i++)
- disable_irq_wake(gpio_to_irq(pdata->row_gpios[i]));
+ if (device_may_wakeup(&pdev->dev)) {
+ for (i = 0; i < pdata->num_row_gpios; i++) {
+ if (test_and_clear_bit(i, keypad->disabled_gpios)) {
+ unsigned int gpio = pdata->row_gpios[i];
+
+ disable_irq_wake(gpio_to_irq(gpio));
+ }
+ }
+ }
matrix_keypad_start(keypad->input_dev);
u8 reg;
int ret;
-#ifdef CONFIG_LOCKDEP
- /* WORKAROUND for lockdep forcing IRQF_DISABLED on us, which
- * we don't want and can't tolerate. Although it might be
- * friendlier not to borrow this thread context...
- */
- local_irq_enable();
-#endif
-
/* Read & Clear TWL4030 pending interrupt */
ret = twl4030_kpread(kp, ®, KEYP_ISR1, 1);
*
* NOTE: we assume this host is wired to TWL4040 INT1, not INT2 ...
*/
- error = request_irq(kp->irq, do_kp_irq, 0, pdev->name, kp);
+ error = request_threaded_irq(kp->irq, NULL, do_kp_irq,
+ 0, pdev->name, kp);
if (error) {
dev_info(kp->dbg_dev, "request_irq failed for irq no=%d\n",
kp->irq);
int err;
u8 value;
-#ifdef CONFIG_LOCKDEP
- /* WORKAROUND for lockdep forcing IRQF_DISABLED on us, which
- * we don't want and can't tolerate since this is a threaded
- * IRQ and can sleep due to the i2c reads it has to issue.
- * Although it might be friendlier not to borrow this thread
- * context...
- */
- local_irq_enable();
-#endif
-
err = twl_i2c_read_u8(TWL4030_MODULE_PM_MASTER, &value,
- STS_HW_CONDITIONS);
+ STS_HW_CONDITIONS);
if (!err) {
input_report_key(pwr, KEY_POWER, value & PWR_PWRON_IRQ);
input_sync(pwr);
pwr->phys = "twl4030_pwrbutton/input0";
pwr->dev.parent = &pdev->dev;
- err = request_irq(irq, powerbutton_irq,
+ err = request_threaded_irq(irq, NULL, powerbutton_irq,
IRQF_TRIGGER_FALLING | IRQF_TRIGGER_RISING,
"twl4030_pwrbutton", pwr);
if (err < 0) {
.driver = {
.name = "wistron-bios",
.owner = THIS_MODULE,
-#if CONFIG_PM
+#ifdef CONFIG_PM
.pm = &wistron_pm_ops,
#endif
},
config MOUSE_PS2_LIFEBOOK
bool "Fujitsu Lifebook PS/2 mouse protocol extension" if EMBEDDED
default y
- depends on MOUSE_PS2 && X86
+ depends on MOUSE_PS2 && X86 && DMI
help
Say Y here if you have a Fujitsu B-series Lifebook PS/2
TouchScreen connected to your system.
static int hgpk_register(struct psmouse *psmouse)
{
- struct input_dev *dev = psmouse->dev;
int err;
/* register handlers */
}
static const struct dmi_system_id __initconst lifebook_dmi_table[] = {
-#if defined(CONFIG_DMI) && defined(CONFIG_X86)
{
/* FLORA-ie 55mi */
.matches = {
},
},
{ }
-#endif
};
void __init lifebook_module_init(void)
if (psmouse->cleanup)
psmouse->cleanup(psmouse);
- psmouse_reset(psmouse);
+/*
+ * Reset the mouse to defaults (bare PS/2 protocol).
+ */
+ ps2_command(&psmouse->ps2dev, NULL, PSMOUSE_CMD_RESET_DIS);
/*
* Some boxes, such as HP nx7400, get terribly confused if mouse
mutex_lock(&serio_mutex);
- /*
- * Note that we handle only one event here to give swsusp
- * a chance to freeze kseriod thread. Serio events should
- * be pretty rare so we are not concerned about taking
- * performance hit.
- */
- if ((event = serio_get_event())) {
+ while ((event = serio_get_event())) {
switch (event->type) {
case SERIO_REGISTER_PORT:
static int serio_thread(void *nothing)
{
- set_freezable();
do {
serio_handle_event();
- wait_event_freezable(serio_wait,
+ wait_event_interruptible(serio_wait,
kthread_should_stop() || !list_empty(&serio_event_list));
} while (!kthread_should_stop());
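
The serio hunks above replace the freezer-aware, one-event-per-wakeup loop with a plain interruptible wait that drains the whole event list each time the thread wakes. A minimal sketch of that kthread shape follows, with a hypothetical event list, wait queue and helpers standing in for serio_event_list, serio_wait and serio_get_event().

#include <linux/kthread.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/wait.h>

struct my_event {
	struct list_head node;
	int type;
};

static LIST_HEAD(my_event_list);
static DEFINE_SPINLOCK(my_event_lock);
static DECLARE_WAIT_QUEUE_HEAD(my_event_wait);

/* Pop one queued event, or return NULL if the list is empty. */
static struct my_event *my_get_event(void)
{
	struct my_event *event = NULL;
	unsigned long flags;

	spin_lock_irqsave(&my_event_lock, flags);
	if (!list_empty(&my_event_list)) {
		event = list_first_entry(&my_event_list,
					 struct my_event, node);
		list_del_init(&event->node);
	}
	spin_unlock_irqrestore(&my_event_lock, flags);

	return event;
}

static int my_event_thread(void *unused)
{
	struct my_event *event;

	do {
		/* Drain everything queued before going back to sleep,
		 * instead of handling one event per wakeup. */
		while ((event = my_get_event())) {
			/* real dispatch would go here */
			kfree(event);
		}

		wait_event_interruptible(my_event_wait,
				kthread_should_stop() ||
				!list_empty(&my_event_list));
	} while (!kthread_should_stop());

	return 0;
}
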
if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
return;
if (!mddev->raid_disks && list_empty(&mddev->disks) &&
- !mddev->hold_active) {
+ mddev->ctime == 0 && !mddev->hold_active) {
+ /* Array is not configured at all, and not held active,
+ * so destroy it */
list_del(&mddev->all_mddevs);
if (mddev->gendisk) {
/* we did a probe so need to clean up.
mddev->barriers_work = 1;
mddev->ok_start_degraded = start_dirty_degraded;
- if (start_readonly)
+ if (start_readonly && mddev->ro == 0)
mddev->ro = 2; /* read-only, but switch on first write */
err = mddev->pers->run(mddev);
set_capacity(disk, mddev->array_sectors);
- /* If there is a partially-recovered drive we need to
- * start recovery here. If we leave it to md_check_recovery,
- * it will remove the drives and not do the right thing
- */
- if (mddev->degraded && !mddev->sync_thread) {
- int spares = 0;
- list_for_each_entry(rdev, &mddev->disks, same_set)
- if (rdev->raid_disk >= 0 &&
- !test_bit(In_sync, &rdev->flags) &&
- !test_bit(Faulty, &rdev->flags))
- /* complete an interrupted recovery */
- spares++;
- if (spares && mddev->pers->sync_request) {
- mddev->recovery = 0;
- set_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
- mddev->sync_thread = md_register_thread(md_do_sync,
- mddev,
- "resync");
- if (!mddev->sync_thread) {
- printk(KERN_ERR "%s: could not start resync"
- " thread...\n",
- mdname(mddev));
- /* leave the spares where they are, it shouldn't hurt */
- mddev->recovery = 0;
- }
- }
- }
md_wakeup_thread(mddev->thread);
md_wakeup_thread(mddev->sync_thread); /* possibly kick off a reshape */
mddev->minor_version = info->minor_version;
mddev->patch_version = info->patch_version;
mddev->persistent = !info->not_persistent;
+ /* ensure mddev_put doesn't delete this now that there
+ * is some minimal configuration.
+ */
+ mddev->ctime = get_seconds();
return 0;
}
mddev->major_version = MD_MAJOR_VERSION;
mddev->curr_resync = 2;
try_again:
- if (kthread_should_stop()) {
+ if (kthread_should_stop())
set_bit(MD_RECOVERY_INTR, &mddev->recovery);
+
+ if (test_bit(MD_RECOVERY_INTR, &mddev->recovery))
goto skip;
- }
for_each_mddev(mddev2, tmp) {
if (mddev2 == mddev)
continue;
unsigned long flags;
int su;
- if ((tcode != TCODE_WRITE_QUADLET_REQUEST &&
- tcode != TCODE_WRITE_BLOCK_REQUEST) ||
- offset != CSR_REGISTER_BASE + CSR_FCP_RESPONSE ||
- length == 0 ||
- (((u8 *)payload)[0] & 0xf0) != 0) {
- fw_send_response(card, request, RCODE_TYPE_ERROR);
+ if (length < 2 || (((u8 *)payload)[0] & 0xf0) != 0)
return;
- }
su = ((u8 *)payload)[1] & 0x7;
}
spin_unlock_irqrestore(&node_list_lock, flags);
- if (fdtv) {
+ if (fdtv)
avc_recv(fdtv, payload, length);
- fw_send_response(card, request, RCODE_COMPLETE);
- }
}
static struct fw_address_handler fcp_handler = {
bytes[byte] &= ~(0x03 << off);
- spin_lock_irq(&d->lock);
+ raw_spin_lock_irq(&d->lock);
if (d->status & IRQ_TYPE_EDGE_RISING)
bytes[byte] |= BIT(off + 1);
if (d->status & IRQ_TYPE_EDGE_FALLING)
bytes[byte] |= BIT(off + 0);
- spin_unlock_irq(&d->lock);
+ raw_spin_unlock_irq(&d->lock);
edge_change &= ~BIT(i);
}
#include <linux/errno.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
+#include <linux/if_ether.h>
#include <linux/skbuff.h>
#include <linux/slab.h>
#include <linux/init.h>
memcpy_toio(lp->base, init_words + 5, sizeof(init_words) - 10);
/* Fill in the station address. */
- memcpy_toio(lp->base+SA_OFFSET, dev->dev_addr,
- sizeof(dev->dev_addr));
+ memcpy_toio(lp->base+SA_OFFSET, dev->dev_addr, ETH_ALEN);
/* The Tx-block list is written as needed. We just set up the values. */
lp->tx_cmd_link = IDLELOOP + 4;
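
The hunk above (and the similar ones later in this series) replaces sizeof(dev->dev_addr) with ETH_ALEN. Whether dev_addr is a pointer or a MAX_ADDR_LEN-sized array in a given kernel, sizeof() on it does not yield the 6-byte MAC length, so an explicit constant is the safe copy length. A small user-space illustration of the pitfall follows; fake_netdev is made up purely for the example.

#include <stdio.h>
#include <string.h>

#define ETH_ALEN 6	/* MAC address length, as in <linux/if_ether.h> */

struct fake_netdev {
	unsigned char *dev_addr;	/* a pointer, so sizeof() is 4 or 8 */
};

int main(void)
{
	unsigned char mac[ETH_ALEN] = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 };
	unsigned char copy[ETH_ALEN];
	struct fake_netdev dev = { .dev_addr = mac };

	/* sizeof(dev.dev_addr) is the size of the pointer, not the 6-byte
	 * address it points at, so it is the wrong copy length. */
	printf("sizeof(dev.dev_addr) = %zu, ETH_ALEN = %d\n",
	       sizeof(dev.dev_addr), ETH_ALEN);

	memcpy(copy, dev.dev_addr, ETH_ALEN);	/* explicit, correct length */

	return 0;
}
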
config GELIC_WIRELESS
bool "PS3 Wireless support"
+ depends on WLAN
depends on GELIC_NET
select WIRELESS_EXT
help
config GELIC_WIRELESS_OLD_PSK_INTERFACE
bool "PS3 Wireless private PSK interface (OBSOLETE)"
depends on GELIC_WIRELESS
+ select WEXT_PRIV
help
This option retains the obsolete private interface to pass
the PSK from user space programs to the driver. The PSK
u32 tx_fc; /* Tx flow control */
int link_speed;
u8 port_type;
+ u8 transceiver;
};
extern const struct ethtool_ops be_ethtool_ops;
return status;
}
+int be_cmd_set_loopback(struct be_adapter *adapter, u8 port_num,
+ u8 loopback_type, u8 enable)
+{
+ struct be_mcc_wrb *wrb;
+ struct be_cmd_req_set_lmode *req;
+ int status;
+
+ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+ status = -EBUSY;
+ goto err;
+ }
+
+ req = embedded_payload(wrb);
+
+ be_wrb_hdr_prepare(wrb, sizeof(*req), true, 0,
+ OPCODE_LOWLEVEL_SET_LOOPBACK_MODE);
+
+ be_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_LOWLEVEL,
+ OPCODE_LOWLEVEL_SET_LOOPBACK_MODE,
+ sizeof(*req));
+
+ req->src_port = port_num;
+ req->dest_port = port_num;
+ req->loopback_type = loopback_type;
+ req->loopback_state = enable;
+
+ status = be_mcc_notify_wait(adapter);
+err:
+ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+}
+
int be_cmd_loopback_test(struct be_adapter *adapter, u32 port_num,
u32 loopback_type, u32 pkt_size, u32 num_pkts, u64 pattern)
{
be_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_LOWLEVEL,
OPCODE_LOWLEVEL_LOOPBACK_TEST, sizeof(*req));
+ req->hdr.timeout = 4;
req->pattern = cpu_to_le64(pattern);
req->src_port = cpu_to_le32(port_num);
#define OPCODE_LOWLEVEL_HOST_DDR_DMA 17
#define OPCODE_LOWLEVEL_LOOPBACK_TEST 18
+#define OPCODE_LOWLEVEL_SET_LOOPBACK_MODE 19
struct be_cmd_req_hdr {
u8 opcode; /* dword 0 */
u32 ticks_compl;
};
+struct be_cmd_req_set_lmode {
+ struct be_cmd_req_hdr hdr;
+ u8 src_port;
+ u8 dest_port;
+ u8 loopback_type;
+ u8 loopback_state;
+};
+
+struct be_cmd_resp_set_lmode {
+ struct be_cmd_resp_hdr resp_hdr;
+ u8 rsvd0[4];
+};
+
/********************** DDR DMA test *********************/
struct be_cmd_req_ddrdma_test {
struct be_cmd_req_hdr hdr;
u32 num_pkts, u64 pattern);
extern int be_cmd_ddr_dma_test(struct be_adapter *adapter, u64 pattern,
u32 byte_cnt, struct be_dma_mem *cmd);
+extern int be_cmd_set_loopback(struct be_adapter *adapter, u8 port_num,
+ u8 loopback_type, u8 enable);
#define BE_MAC_LOOPBACK 0x0
#define BE_PHY_LOOPBACK 0x1
#define BE_ONE_PORT_EXT_LOOPBACK 0x2
+#define BE_NO_LOOPBACK 0xff
static void
be_get_drvinfo(struct net_device *netdev, struct ethtool_drvinfo *drvinfo)
status = be_cmd_read_port_type(adapter, adapter->port_num,
&connector);
- switch (connector) {
- case 7:
- ecmd->port = PORT_FIBRE;
- break;
- default:
- ecmd->port = PORT_TP;
- break;
+ if (!status) {
+ switch (connector) {
+ case 7:
+ ecmd->port = PORT_FIBRE;
+ ecmd->transceiver = XCVR_EXTERNAL;
+ break;
+ case 0:
+ ecmd->port = PORT_TP;
+ ecmd->transceiver = XCVR_EXTERNAL;
+ break;
+ default:
+ ecmd->port = PORT_TP;
+ ecmd->transceiver = XCVR_INTERNAL;
+ break;
+ }
+ } else {
+ ecmd->port = PORT_AUI;
+ ecmd->transceiver = XCVR_INTERNAL;
}
/* Save for future use */
adapter->link_speed = ecmd->speed;
adapter->port_type = ecmd->port;
+ adapter->transceiver = ecmd->transceiver;
} else {
ecmd->speed = adapter->link_speed;
ecmd->port = adapter->port_type;
+ ecmd->transceiver = adapter->transceiver;
}
ecmd->duplex = DUPLEX_FULL;
ecmd->autoneg = AUTONEG_DISABLE;
- ecmd->supported = (SUPPORTED_10000baseT_Full | SUPPORTED_TP);
ecmd->phy_address = adapter->port_num;
- ecmd->transceiver = XCVR_INTERNAL;
+ switch (ecmd->port) {
+ case PORT_FIBRE:
+ ecmd->supported = (SUPPORTED_10000baseT_Full | SUPPORTED_FIBRE);
+ break;
+ case PORT_TP:
+ ecmd->supported = (SUPPORTED_10000baseT_Full | SUPPORTED_TP);
+ break;
+ case PORT_AUI:
+ ecmd->supported = (SUPPORTED_10000baseT_Full | SUPPORTED_AUI);
+ break;
+ }
return 0;
}
return ret;
}
+static u64 be_loopback_test(struct be_adapter *adapter, u8 loopback_type,
+ u64 *status)
+{
+ be_cmd_set_loopback(adapter, adapter->port_num,
+ loopback_type, 1);
+ *status = be_cmd_loopback_test(adapter, adapter->port_num,
+ loopback_type, 1500,
+ 2, 0xabc);
+ be_cmd_set_loopback(adapter, adapter->port_num,
+ BE_NO_LOOPBACK, 1);
+ return *status;
+}
+
static void
be_self_test(struct net_device *netdev, struct ethtool_test *test, u64 *data)
{
memset(data, 0, sizeof(u64) * ETHTOOL_TESTS_NUM);
if (test->flags & ETH_TEST_FL_OFFLINE) {
- data[0] = be_cmd_loopback_test(adapter, adapter->port_num,
- BE_MAC_LOOPBACK, 1500,
- 2, 0xabc);
- if (data[0] != 0)
+ if (be_loopback_test(adapter, BE_MAC_LOOPBACK,
+ &data[0]) != 0) {
test->flags |= ETH_TEST_FL_FAILED;
-
- data[1] = be_cmd_loopback_test(adapter, adapter->port_num,
- BE_PHY_LOOPBACK, 1500,
- 2, 0xabc);
- if (data[1] != 0)
+ }
+ if (be_loopback_test(adapter, BE_PHY_LOOPBACK,
+ &data[1]) != 0) {
test->flags |= ETH_TEST_FL_FAILED;
-
- data[2] = be_cmd_loopback_test(adapter, adapter->port_num,
- BE_ONE_PORT_EXT_LOOPBACK,
- 1500, 2, 0xabc);
- if (data[2] != 0)
+ }
+ if (be_loopback_test(adapter, BE_ONE_PORT_EXT_LOOPBACK,
+ &data[2]) != 0) {
test->flags |= ETH_TEST_FL_FAILED;
+ }
data[3] = be_test_ddr_dma(adapter);
if (data[3] != 0)
if (bp->cnic_eth_dev.drv_state & CNIC_DRV_STATE_REGD) {
bnx2x_set_iscsi_eth_mac_addr(bp, 1);
bp->cnic_flags |= BNX2X_CNIC_FLAG_MAC_SET;
+ bnx2x_init_sb(bp, bp->cnic_sb, bp->cnic_sb_mapping,
+ CNIC_SB_ID(bp));
}
mutex_unlock(&bp->cnic_mutex);
#endif
// check if any partner replies
if (best->is_individual) {
pr_warning("%s: Warning: No 802.3ad response from the link partner for any adapters in the bond\n",
- best->slave->dev->master->name);
+ best->slave ? best->slave->dev->master->name : "NULL");
}
best->is_active = 1;
static void gfar_clear_exact_match(struct net_device *dev);
static void gfar_set_mac_for_addr(struct net_device *dev, int num, u8 *addr);
static int gfar_ioctl(struct net_device *dev, struct ifreq *rq, int cmd);
-u16 gfar_select_queue(struct net_device *dev, struct sk_buff *skb);
MODULE_AUTHOR("Freescale Semiconductor, Inc");
MODULE_DESCRIPTION("Gianfar Ethernet Driver");
.ndo_set_multicast_list = gfar_set_multi,
.ndo_tx_timeout = gfar_timeout,
.ndo_do_ioctl = gfar_ioctl,
- .ndo_select_queue = gfar_select_queue,
.ndo_get_stats = gfar_get_stats,
.ndo_vlan_rx_register = gfar_vlan_rx_register,
.ndo_set_mac_address = eth_mac_addr,
return priv->vlgrp || priv->rx_csum_enable;
}
-u16 gfar_select_queue(struct net_device *dev, struct sk_buff *skb)
-{
- return skb_get_queue_mapping(skb);
-}
static void free_tx_pointers(struct gfar_private *priv)
{
int i = 0;
fcb = (struct rxfcb *)skb->data;
/* Remove the FCB from the skb */
- skb_set_queue_mapping(skb, fcb->rq);
/* Remove the padded bytes, if there are any */
- if (amount_pull)
+ if (amount_pull) {
+ skb_record_rx_queue(skb, fcb->rq);
skb_pull(skb, amount_pull);
+ }
if (priv->rx_csum_enable)
gfar_rx_checksum(skb, fcb);
/* Remove the FCS from the packet length */
skb_put(skb, pkt_len);
rx_queue->stats.rx_bytes += pkt_len;
-
+ skb_record_rx_queue(skb, rx_queue->qindex);
gfar_process_frame(dev, skb, amount_pull);
} else {
#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
+#include <linux/if_ether.h>
#include <linux/skbuff.h>
#include <linux/bitops.h>
/* copy out MAC address */
- for (z = 0; z < sizeof(dev->dev_addr); z++)
+ for (z = 0; z < ETH_ALEN; z++)
dev->dev_addr[z] = inb(dev->base_addr + MACADDRPROM + z);
/* print config */
hw_dbg("Configuring Autoneg:PCS_LCTL=0x%08X\n", reg);
} else {
/* Set PCS register for forced link */
- reg |= E1000_PCS_LCTL_FSD | /* Force Speed */
- E1000_PCS_LCTL_FORCE_LINK | /* Force Link */
- E1000_PCS_LCTL_FLV_LINK_UP; /* Force link value up */
+ reg |= E1000_PCS_LCTL_FSD; /* Force Speed */
hw_dbg("Configuring Forced Link:PCS_LCTL=0x%08X\n", reg);
}
phy_data |= I82580_CFG_ENABLE_DOWNSHIFT;
ret_val = phy->ops.write_reg(hw, I82580_CFG_REG, phy_data);
- if (ret_val)
- goto out;
-
- /* Set number of link attempts before downshift */
- ret_val = phy->ops.read_reg(hw, I82580_CTRL_REG, &phy_data);
- if (ret_val)
- goto out;
- phy_data &= ~I82580_CTRL_DOWNSHIFT_MASK;
- ret_val = phy->ops.write_reg(hw, I82580_CTRL_REG, phy_data);
out:
return ret_val;
/* dual port cards only support WoL on port A from now on
* unless it was enabled in the eeprom for port B
* so exclude FUNC_1 ports from having WoL enabled */
- if (rd32(E1000_STATUS) & E1000_STATUS_FUNC_1 &&
+ if ((rd32(E1000_STATUS) & E1000_STATUS_FUNC_MASK) &&
!adapter->eeprom_wol) {
wol->supported = 0;
break;
hwm = min(((pba << 10) * 9 / 10),
((pba << 10) - 2 * adapter->max_frame_size));
- if (mac->type < e1000_82576) {
- fc->high_water = hwm & 0xFFF8; /* 8-byte granularity */
- fc->low_water = fc->high_water - 8;
- } else {
- fc->high_water = hwm & 0xFFF0; /* 16-byte granularity */
- fc->low_water = fc->high_water - 16;
- }
+ fc->high_water = hwm & 0xFFF0; /* 16-byte granularity */
+ fc->low_water = fc->high_water - 16;
fc->pause_time = 0xFFFF;
fc->send_xon = 1;
fc->current_mode = fc->requested_mode;
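
The hunk above drops the 8-byte-granularity branch and always rounds the high-water mark down to 16-byte granularity, with the low-water mark 16 bytes below it. A worked example of that arithmetic follows, using hypothetical numbers (a 40 KB packet buffer and a 1518-byte frame) purely to show the rounding.

#include <stdio.h>

int main(void)
{
	unsigned int pba = 40;			/* hypothetical packet buffer size, in KB */
	unsigned int max_frame_size = 1518;	/* hypothetical max frame */
	unsigned int a, b, hwm, high_water, low_water;

	a = (pba << 10) * 9 / 10;		/* 90% of the buffer: 36864 */
	b = (pba << 10) - 2 * max_frame_size;	/* buffer minus two frames: 37924 */
	hwm = a < b ? a : b;			/* min(): 36864 */

	high_water = hwm & 0xFFF0;		/* 16-byte granularity: 36864 (0x9000) */
	low_water = high_water - 16;		/* 36848 */

	printf("high_water=%u low_water=%u\n", high_water, low_water);
	return 0;
}
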
err = hw->mac.ops.reset_hw(hw);
if (err) {
dev_info(&pdev->dev,
- "PF still in reset state, assigning new address\n");
+ "PF still in reset state, assigning new address."
+ " Is the PF interface up?\n");
random_ether_addr(hw->mac.addr);
} else {
err = hw->mac.ops.read_mac_addr(hw);
pci_set_power_state(pdev, PCI_D0);
pci_restore_state(pdev);
+ /*
+ * pci_restore_state clears dev->state_saved so call
+ * pci_save_state to restore it.
+ */
+ pci_save_state(pdev);
err = pci_enable_device_mem(pdev);
if (err) {
#include <linux/crc32.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
+#include <linux/if_ether.h>
#include <linux/skbuff.h>
#include <linux/spinlock.h>
#include <linux/moduleparam.h>
/* if the ethernet address is not valid, force to 00:00:00:00:00:00 */
if (!is_valid_ether_addr(dev->perm_addr))
- memset(dev->dev_addr, 0, sizeof(dev->dev_addr));
+ memset(dev->dev_addr, 0, ETH_ALEN);
if (pcnet32_debug & NETIF_MSG_PROBE) {
printk(" %pM", dev->dev_addr);
EFX_LOG(efx, "create port\n");
+ if (phy_flash_cfg)
+ efx->phy_mode = PHY_MODE_SPECIAL;
+
/* Connect up MAC/PHY operations table */
rc = efx->type->probe_port(efx);
if (rc)
goto err;
- if (phy_flash_cfg)
- efx->phy_mode = PHY_MODE_SPECIAL;
-
/* Sanity check MAC address */
if (is_valid_ether_addr(efx->mac_address)) {
memcpy(efx->net_dev->dev_addr, efx->mac_address, ETH_ALEN);
static void falcon_remove_port(struct efx_nic *efx)
{
+ efx->phy_op->remove(efx);
efx_nic_free_buffer(efx, &efx->stats_buffer);
}
efx_writeo(efx, &reg, FR_AB_XM_MGT_INT_MASK);
}
-/* Get status of XAUI link */
-static bool falcon_xaui_link_ok(struct efx_nic *efx)
+static bool falcon_xgxs_link_ok(struct efx_nic *efx)
{
efx_oword_t reg;
bool align_done, link_ok = false;
int sync_status;
- if (LOOPBACK_INTERNAL(efx))
- return true;
-
/* Read link status */
efx_reado(efx, &reg, FR_AB_XX_CORE_STAT);
EFX_SET_OWORD_FIELD(reg, FRF_AB_XX_DISPERR, FFE_AB_XX_STAT_ALL_LANES);
efx_writeo(efx, &reg, FR_AB_XX_CORE_STAT);
- /* If the link is up, then check the phy side of the xaui link */
- if (efx->link_state.up && link_ok)
- if (efx->mdio.mmds & (1 << MDIO_MMD_PHYXS))
- link_ok = efx_mdio_phyxgxs_lane_sync(efx);
-
return link_ok;
}
+static bool falcon_xmac_link_ok(struct efx_nic *efx)
+{
+ /*
+ * Check MAC's XGXS link status except when using XGMII loopback
+ * which bypasses the XGXS block.
+ * If possible, check PHY's XGXS link status except when using
+ * MAC loopback.
+ */
+ return (efx->loopback_mode == LOOPBACK_XGMII ||
+ falcon_xgxs_link_ok(efx)) &&
+ (!(efx->mdio.mmds & (1 << MDIO_MMD_PHYXS)) ||
+ LOOPBACK_INTERNAL(efx) ||
+ efx_mdio_phyxgxs_lane_sync(efx));
+}
+
void falcon_reconfigure_xmac_core(struct efx_nic *efx)
{
unsigned int max_frame_len;
/* Try to bring up the Falcon side of the Falcon-Phy XAUI link */
-static bool falcon_check_xaui_link_up(struct efx_nic *efx, int tries)
+static bool falcon_xmac_link_ok_retry(struct efx_nic *efx, int tries)
{
- bool mac_up = falcon_xaui_link_ok(efx);
+ bool mac_up = falcon_xmac_link_ok(efx);
if (LOOPBACK_MASK(efx) & LOOPBACKS_EXTERNAL(efx) & LOOPBACKS_WS ||
efx_phy_mode_disabled(efx->phy_mode))
falcon_reset_xaui(efx);
udelay(200);
- mac_up = falcon_xaui_link_ok(efx);
+ mac_up = falcon_xmac_link_ok(efx);
--tries;
}
static bool falcon_xmac_check_fault(struct efx_nic *efx)
{
- return !falcon_check_xaui_link_up(efx, 5);
+ return !falcon_xmac_link_ok_retry(efx, 5);
}
static int falcon_reconfigure_xmac(struct efx_nic *efx)
falcon_reconfigure_mac_wrapper(efx);
- efx->xmac_poll_required = !falcon_check_xaui_link_up(efx, 5);
+ efx->xmac_poll_required = !falcon_xmac_link_ok_retry(efx, 5);
falcon_mask_status_intr(efx, true);
return 0;
return;
falcon_mask_status_intr(efx, false);
- efx->xmac_poll_required = !falcon_check_xaui_link_up(efx, 1);
+ efx->xmac_poll_required = !falcon_xmac_link_ok_retry(efx, 1);
falcon_mask_status_intr(efx, true);
}
static int efx_mcdi_phy_probe(struct efx_nic *efx)
{
- struct efx_mcdi_phy_cfg *phy_cfg;
+ struct efx_mcdi_phy_cfg *phy_data;
+ u8 outbuf[MC_CMD_GET_LINK_OUT_LEN];
+ u32 caps;
int rc;
- /* TODO: Move phy_data initialisation to
- * phy_op->probe/remove, rather than init/fini */
- phy_cfg = kzalloc(sizeof(*phy_cfg), GFP_KERNEL);
- if (phy_cfg == NULL) {
- rc = -ENOMEM;
- goto fail_alloc;
- }
- rc = efx_mcdi_get_phy_cfg(efx, phy_cfg);
+ /* Initialise and populate phy_data */
+ phy_data = kzalloc(sizeof(*phy_data), GFP_KERNEL);
+ if (phy_data == NULL)
+ return -ENOMEM;
+
+ rc = efx_mcdi_get_phy_cfg(efx, phy_data);
if (rc != 0)
goto fail;
- efx->phy_type = phy_cfg->type;
+ /* Read initial link advertisement */
+ BUILD_BUG_ON(MC_CMD_GET_LINK_IN_LEN != 0);
+ rc = efx_mcdi_rpc(efx, MC_CMD_GET_LINK, NULL, 0,
+ outbuf, sizeof(outbuf), NULL);
+ if (rc)
+ goto fail;
+
+ /* Fill out nic state */
+ efx->phy_data = phy_data;
+ efx->phy_type = phy_data->type;
- efx->mdio_bus = phy_cfg->channel;
- efx->mdio.prtad = phy_cfg->port;
- efx->mdio.mmds = phy_cfg->mmd_mask & ~(1 << MC_CMD_MMD_CLAUSE22);
+ efx->mdio_bus = phy_data->channel;
+ efx->mdio.prtad = phy_data->port;
+ efx->mdio.mmds = phy_data->mmd_mask & ~(1 << MC_CMD_MMD_CLAUSE22);
efx->mdio.mode_support = 0;
- if (phy_cfg->mmd_mask & (1 << MC_CMD_MMD_CLAUSE22))
+ if (phy_data->mmd_mask & (1 << MC_CMD_MMD_CLAUSE22))
efx->mdio.mode_support |= MDIO_SUPPORTS_C22;
- if (phy_cfg->mmd_mask & ~(1 << MC_CMD_MMD_CLAUSE22))
+ if (phy_data->mmd_mask & ~(1 << MC_CMD_MMD_CLAUSE22))
efx->mdio.mode_support |= MDIO_SUPPORTS_C45 | MDIO_EMULATE_C22;
+ caps = MCDI_DWORD(outbuf, GET_LINK_OUT_CAP);
+ if (caps & (1 << MC_CMD_PHY_CAP_AN_LBN))
+ efx->link_advertising =
+ mcdi_to_ethtool_cap(phy_data->media, caps);
+ else
+ phy_data->forced_cap = caps;
+
/* Assert that we can map efx -> mcdi loopback modes */
BUILD_BUG_ON(LOOPBACK_NONE != MC_CMD_LOOPBACK_NONE);
BUILD_BUG_ON(LOOPBACK_DATA != MC_CMD_LOOPBACK_DATA);
* but by convention we don't */
efx->loopback_modes &= ~(1 << LOOPBACK_NONE);
- kfree(phy_cfg);
-
- return 0;
-
-fail:
- kfree(phy_cfg);
-fail_alloc:
- return rc;
-}
-
-static int efx_mcdi_phy_init(struct efx_nic *efx)
-{
- struct efx_mcdi_phy_cfg *phy_data;
- u8 outbuf[MC_CMD_GET_LINK_OUT_LEN];
- u32 caps;
- int rc;
-
- phy_data = kzalloc(sizeof(*phy_data), GFP_KERNEL);
- if (phy_data == NULL)
- return -ENOMEM;
-
- rc = efx_mcdi_get_phy_cfg(efx, phy_data);
- if (rc != 0)
- goto fail;
-
- efx->phy_data = phy_data;
-
- BUILD_BUG_ON(MC_CMD_GET_LINK_IN_LEN != 0);
- rc = efx_mcdi_rpc(efx, MC_CMD_GET_LINK, NULL, 0,
- outbuf, sizeof(outbuf), NULL);
- if (rc)
- goto fail;
-
- caps = MCDI_DWORD(outbuf, GET_LINK_OUT_CAP);
- if (caps & (1 << MC_CMD_PHY_CAP_AN_LBN))
- efx->link_advertising =
- mcdi_to_ethtool_cap(phy_data->media, caps);
- else
- phy_data->forced_cap = caps;
-
return 0;
fail:
return !efx_link_state_equal(&efx->link_state, &old_state);
}
-static void efx_mcdi_phy_fini(struct efx_nic *efx)
+static void efx_mcdi_phy_remove(struct efx_nic *efx)
{
struct efx_mcdi_phy_data *phy_data = efx->phy_data;
struct efx_phy_operations efx_mcdi_phy_ops = {
.probe = efx_mcdi_phy_probe,
- .init = efx_mcdi_phy_init,
+ .init = efx_port_dummy_op_int,
.reconfigure = efx_mcdi_phy_reconfigure,
.poll = efx_mcdi_phy_poll,
- .fini = efx_mcdi_phy_fini,
+ .fini = efx_port_dummy_op_void,
+ .remove = efx_mcdi_phy_remove,
.get_settings = efx_mcdi_phy_get_settings,
.set_settings = efx_mcdi_phy_set_settings,
.run_tests = NULL,
int (*probe) (struct efx_nic *efx);
int (*init) (struct efx_nic *efx);
void (*fini) (struct efx_nic *efx);
+ void (*remove) (struct efx_nic *efx);
int (*reconfigure) (struct efx_nic *efx);
bool (*poll) (struct efx_nic *efx);
void (*get_settings) (struct efx_nic *efx,
EFX_SET_OWORD_FIELD(temp, FRF_AZ_TX_SOFT_EVT_EN, 1);
/* Prefetch threshold 2 => fetch when descriptor cache half empty */
EFX_SET_OWORD_FIELD(temp, FRF_AZ_TX_PREF_THRESHOLD, 2);
+ /* Disable hardware watchdog which can misfire */
+ EFX_SET_OWORD_FIELD(temp, FRF_AZ_TX_PREF_WD_TMR, 0x3fffff);
/* Squash TX of packets of 16 bytes or less */
if (efx_nic_rev(efx) >= EFX_REV_FALCON_B0)
EFX_SET_OWORD_FIELD(temp, FRF_BZ_TX_FLUSH_MIN_LEN_EN, 1);
#define PCS_FW_HEARTBEAT_REG 0xd7ee
#define PCS_FW_HEARTB_LBN 0
#define PCS_FW_HEARTB_WIDTH 8
+#define PCS_FW_PRODUCT_CODE_1 0xd7f0
+#define PCS_FW_VERSION_1 0xd7f3
+#define PCS_FW_BUILD_1 0xd7f6
#define PCS_UC8051_STATUS_REG 0xd7fd
#define PCS_UC_STATUS_LBN 0
#define PCS_UC_STATUS_WIDTH 8
struct qt202x_phy_data {
enum efx_phy_mode phy_mode;
+ bool bug17190_in_bad_state;
+ unsigned long bug17190_timer;
+ u32 firmware_ver;
};
#define QT2022C2_MAX_RESET_TIME 500
#define QT2022C2_RESET_WAIT 10
-static int qt2025c_wait_reset(struct efx_nic *efx)
+#define QT2025C_MAX_HEARTB_TIME (5 * HZ)
+#define QT2025C_HEARTB_WAIT 100
+#define QT2025C_MAX_FWSTART_TIME (25 * HZ / 10)
+#define QT2025C_FWSTART_WAIT 100
+
+#define BUG17190_INTERVAL (2 * HZ)
+
+static int qt2025c_wait_heartbeat(struct efx_nic *efx)
{
- unsigned long timeout = jiffies + 10 * HZ;
+ unsigned long timeout = jiffies + QT2025C_MAX_HEARTB_TIME;
int reg, old_counter = 0;
/* Wait for firmware heartbeat to start */
old_counter = counter;
else if (counter != old_counter)
break;
- if (time_after(jiffies, timeout))
+ if (time_after(jiffies, timeout)) {
+ /* Some cables have EEPROMs that conflict with the
+ * PHY's on-board EEPROM so it cannot load firmware */
+ EFX_ERR(efx, "If an SFP+ direct attach cable is"
+ " connected, please check that it complies"
+ " with the SFP+ specification\n");
return -ETIMEDOUT;
- msleep(10);
+ }
+ msleep(QT2025C_HEARTB_WAIT);
}
+ return 0;
+}
+
+static int qt2025c_wait_fw_status_good(struct efx_nic *efx)
+{
+ unsigned long timeout = jiffies + QT2025C_MAX_FWSTART_TIME;
+ int reg;
+
/* Wait for firmware status to look good */
for (;;) {
reg = efx_mdio_read(efx, MDIO_MMD_PCS, PCS_UC8051_STATUS_REG);
break;
if (time_after(jiffies, timeout))
return -ETIMEDOUT;
+ msleep(QT2025C_FWSTART_WAIT);
+ }
+
+ return 0;
+}
+
+static void qt2025c_restart_firmware(struct efx_nic *efx)
+{
+ /* Restart microcontroller execution of firmware from RAM */
+ efx_mdio_write(efx, 3, 0xe854, 0x00c0);
+ efx_mdio_write(efx, 3, 0xe854, 0x0040);
+ msleep(50);
+}
+
+static int qt2025c_wait_reset(struct efx_nic *efx)
+{
+ int rc;
+
+ rc = qt2025c_wait_heartbeat(efx);
+ if (rc != 0)
+ return rc;
+
+ rc = qt2025c_wait_fw_status_good(efx);
+ if (rc == -ETIMEDOUT) {
+ /* Bug 17689: occasionally heartbeat starts but firmware status
+ * code never progresses beyond 0x00. Try again, once, after
+ * restarting execution of the firmware image. */
+ EFX_LOG(efx, "bashing QT2025C microcontroller\n");
+ qt2025c_restart_firmware(efx);
+ rc = qt2025c_wait_heartbeat(efx);
+ if (rc != 0)
+ return rc;
+ rc = qt2025c_wait_fw_status_good(efx);
+ }
+
+ return rc;
+}
+
+static void qt2025c_firmware_id(struct efx_nic *efx)
+{
+ struct qt202x_phy_data *phy_data = efx->phy_data;
+ u8 firmware_id[9];
+ size_t i;
+
+ for (i = 0; i < sizeof(firmware_id); i++)
+ firmware_id[i] = efx_mdio_read(efx, MDIO_MMD_PCS,
+ PCS_FW_PRODUCT_CODE_1 + i);
+ EFX_INFO(efx, "QT2025C firmware %xr%d v%d.%d.%d.%d [20%02d-%02d-%02d]\n",
+ (firmware_id[0] << 8) | firmware_id[1], firmware_id[2],
+ firmware_id[3] >> 4, firmware_id[3] & 0xf,
+ firmware_id[4], firmware_id[5],
+ firmware_id[6], firmware_id[7], firmware_id[8]);
+ phy_data->firmware_ver = ((firmware_id[3] & 0xf0) << 20) |
+ ((firmware_id[3] & 0x0f) << 16) |
+ (firmware_id[4] << 8) | firmware_id[5];
+}
+
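
qt2025c_firmware_id() above packs the four version bytes into a single u32 so that the later check against 0x02000100 compares numerically with firmware 2.0.1.0. A worked example of the packing for that exact version follows; the byte values are assumed from the version string, as the major and minor numbers share one byte (high and low nibble).

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint8_t id3 = 0x20;	/* major = 2, minor = 0 */
	uint8_t id4 = 0x01;	/* third field */
	uint8_t id5 = 0x00;	/* fourth field */

	/* Same packing as qt2025c_firmware_id() in the patch above. */
	uint32_t ver = ((uint32_t)(id3 & 0xf0) << 20) |
		       ((uint32_t)(id3 & 0x0f) << 16) |
		       ((uint32_t)id4 << 8) | id5;

	printf("0x%08x\n", (unsigned int)ver);	/* prints 0x02000100 */
	return 0;
}
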
+static void qt2025c_bug17190_workaround(struct efx_nic *efx)
+{
+ struct qt202x_phy_data *phy_data = efx->phy_data;
+
+ /* The PHY can get stuck in a state where it reports PHY_XS and PMA/PMD
+ * layers up, but PCS down (no block_lock). If we notice this state
+ * persisting for a couple of seconds, we switch PMA/PMD loopback
+ * briefly on and then off again, which is normally sufficient to
+ * recover it.
+ */
+ if (efx->link_state.up ||
+ !efx_mdio_links_ok(efx, MDIO_DEVS_PMAPMD | MDIO_DEVS_PHYXS)) {
+ phy_data->bug17190_in_bad_state = false;
+ return;
+ }
+
+ if (!phy_data->bug17190_in_bad_state) {
+ phy_data->bug17190_in_bad_state = true;
+ phy_data->bug17190_timer = jiffies + BUG17190_INTERVAL;
+ return;
+ }
+
+ if (time_after_eq(jiffies, phy_data->bug17190_timer)) {
+ EFX_LOG(efx, "bashing QT2025C PMA/PMD\n");
+ efx_mdio_set_flag(efx, MDIO_MMD_PMAPMD, MDIO_CTRL1,
+ MDIO_PMA_CTRL1_LOOPBACK, true);
msleep(100);
+ efx_mdio_set_flag(efx, MDIO_MMD_PMAPMD, MDIO_CTRL1,
+ MDIO_PMA_CTRL1_LOOPBACK, false);
+ phy_data->bug17190_timer = jiffies + BUG17190_INTERVAL;
+ }
+}
+
+static int qt2025c_select_phy_mode(struct efx_nic *efx)
+{
+ struct qt202x_phy_data *phy_data = efx->phy_data;
+ struct falcon_board *board = falcon_board(efx);
+ int reg, rc, i;
+ uint16_t phy_op_mode;
+
+ /* Only 2.0.1.0+ PHY firmware supports the more optimal SFP+
+ * Self-Configure mode. Don't attempt any switching if we encounter
+ * older firmware. */
+ if (phy_data->firmware_ver < 0x02000100)
+ return 0;
+
+ /* In gen