The framework provides generic functions (mostly kernel API calls), such as:
- Transport Stream URB handling in conjunction with dvb-demux-feed-control
- (bulk and isoc (TODO) are supported)
+ (bulk and isoc are supported)
- registering the device for the DVB-API
- registering an I2C-adapter if applicable
- remote-control/input-device handling
- firmware requesting and loading (currently just for the Cypress USB
- controller)
+ controllers)
- other functions/methods which can be shared by several drivers (such as
functions for bulk-control-commands)
+- TODO: an I2C-chunker. It creates device-specific chunks of register-accesses
+  depending on the length of a register and the number of values that can be
+  multi-written and multi-read.
The source code of the particular DVB USB devices does just the communication
with the device via the bus. The connection between the DVB-API-functionality
TODO: dynamic enabling and disabling of the pid-filter in regard to number of
feeds requested.
-Supported devices USB1.1
+Supported devices
========================
-Produced and reselled by Twinhan:
----------------------------------
-- TwinhanDTV USB-Ter DVB-T Device (VP7041)
- http://www.twinhan.com/product_terrestrial_3.asp
+See the LinuxTV DVB Wiki at www.linuxtv.org for a complete list of
+cards/drivers/firmware files:
-- TwinhanDTV Magic Box (VP7041e)
- http://www.twinhan.com/product_terrestrial_4.asp
-
-- HAMA DVB-T USB device
- http://www.hama.de/portal/articleId*110620/action*2598
-
-- CTS Portable (Chinese Television System) (2)
- http://www.2cts.tv/ctsportable/
-
-- Unknown USB DVB-T device with vendor ID Hyper-Paltek
-
-
-Produced and reselled by KWorld:
---------------------------------
-- KWorld V-Stream XPERT DTV DVB-T USB
- http://www.kworld.com.tw/en/product/DVBT-USB/DVBT-USB.html
-
-- JetWay DTV DVB-T USB
- http://www.jetway.com.tw/evisn/product/lcd-tv/DVT-USB/dtv-usb.htm
-
-- ADSTech Instant TV DVB-T USB
- http://www.adstech.com/products/PTV-333/intro/PTV-333_intro.asp?pid=PTV-333
-
-
-Others:
--------
-- Ultima Electronic/Artec T1 USB TVBOX (AN2135, AN2235, AN2235 with Panasonic Tuner)
- http://82.161.246.249/products-tvbox.html
-
-- Compro Videomate DVB-U2000 - DVB-T USB (2)
- http://www.comprousa.com/products/vmu2000.htm
-
-- Grandtec USB DVB-T
- http://www.grand.com.tw/
-
-- AVerMedia AverTV DVBT USB
- http://www.avermedia.com/
-
-- DiBcom USB DVB-T reference device (non-public)
-
-
-Supported devices USB2.0-only
-=============================
-- Twinhan MagicBox II
- http://www.twinhan.com/product_terrestrial_7.asp
-
-- TwinhanDTV Alpha
- http://www.twinhan.com/product_terrestrial_8.asp
-
-- DigitalNow TinyUSB 2 DVB-t Receiver
- http://www.digitalnow.com.au/DigitalNow%20tinyUSB2%20Specifications.html
-
-- Hanftek UMT-010
- http://www.globalsources.com/si/6008819757082/ProductDetail/Digital-TV/product_id-100046529
-
-
-Supported devices USB2.0 and USB1.1
-=============================
-- Typhoon/Yakumo/HAMA/Yuan DVB-T mobile USB2.0
- http://www.yakumo.de/produkte/index.php?pid=1&ag=DVB-T
- http://www.yuan.com.tw/en/products/vdo_ub300.html
- http://www.hama.de/portal/articleId*114663/action*2563
- http://www.anubisline.com/english/articlec.asp?id=50502&catid=002
-
-- Artec T1 USB TVBOX (FX2) (2)
-
-- Hauppauge WinTV NOVA-T USB2
- http://www.hauppauge.com/
-
-- KWorld/ADSTech Instant DVB-T USB2.0 (DiB3000M-B)
-
-- DiBcom USB2.0 DVB-T reference device (non-public)
-
-- AVerMedia AverTV A800 DVB-T USB2.0
-
-1) It is working almost - work-in-progress.
-2) No test reports received yet.
+http://www.linuxtv.org/wiki/index.php/DVB_USB
0. History & News:
+ 2005-06-30 - added support for WideView WT-220U (Thanks to Steve Chang)
+ 2005-05-30 - added basic isochronous support to the dvb-usb-framework
+ added support for Conexant Hybrid reference design and Nebula DigiTV USB
2005-04-17 - all dibusb devices ported to make use of the dvb-usb-framework
2005-04-02 - re-enabled and improved remote control code.
2005-03-31 - ported the Yakumo/Hama/Typhoon DVB-T USB2.0 device to dvb-usb.
2005-01-31 - distorted streaming is gone for USB1.1 devices
2005-01-13 - moved the mirrored pid_filter_table back to dvb-dibusb
- first almost working version for HanfTek UMT-010
- - found out, that Yakumo/HAMA/Typhoon are predessors of the HanfTek UMT-010
+ - found out that Yakumo/HAMA/Typhoon are predecessors of the HanfTek UMT-010
2005-01-10 - refactoring completed, now everything is very delightful
- tuner quirks for some weird devices (Artec T1 AN2235 device has sometimes a
Panasonic Tuner assembled). Tunerprobing implemented. Thanks a lot to Gunnar Wittich.
1. How to use?
1.1. Firmware
-Most of the USB drivers need to download a firmware to start working.
-
-for USB1.1 (AN2135) you need: dvb-usb-dibusb-5.0.0.11.fw
-for USB2.0 HanfTek: dvb-usb-umt-010-02.fw
-for USB2.0 DiBcom: dvb-usb-dibusb-6.0.0.8.fw
-for USB2.0 AVerMedia AverTV DVB-T USB2: dvb-usb-avertv-a800-01.fw
-for USB2.0 TwinhanDTV Alpha/MagicBox II: dvb-usb-vp7045-01.fw
-
-The files can be found on http://www.linuxtv.org/download/firmware/ .
+Most of the USB drivers need to download a firmware to the device before they
+can start working.
-We do not have the permission (yet) to publish the following firmware-files.
-You'll need to extract them from the windows drivers.
+Have a look at the Wiki page for the DVB-USB drivers to find out which firmware
+you need for your device:
-You should be able to use "get_dvb_firmware dvb-usb" to get the firmware:
-
-for USB1.1 (AN2235) (a few Artec T1 devices): dvb-usb-dibusb-an2235-01.fw
-for USB2.0 Hauppauge: dvb-usb-nova-t-usb2-01.fw
-for USB2.0 ADSTech/Kworld USB2.0: dvb-usb-adstech-usb2-01.fw
-for USB2.0 Yakumo/Typhoon/Hama: dvb-usb-dtt200u-01.fw
+http://www.linuxtv.org/wiki/index.php/DVB_USB
1.2. Compiling
Gunnar Wittich and Joachim von Caron for their trust for providing
root-shells on their machines to implement support for new devices.
+ Allan Third and Michael Hutchinson for their help with writing the Nebula
+ digitv-driver.
+
Glen Harris for bringing up, that there is a new dibusb-device and Jiun-Kuei
Jung from AVerMedia who kindly provided a special firmware to get the device
up and running in Linux.
Jennifer Chen, Jeff and Jack from Twinhan for kindly supporting by
writing the vp7045-driver.
- Some guys on the linux-dvb mailing list for encouraging me
+ Steve Chang from WideView for providing information for new devices and
+ firmware files.
+
+ Michael Paxton for submitting remote control keymaps.
+
+ Some guys on the linux-dvb mailing list for encouraging me.
Peter Schildmann >peter.schildmann-nospam-at-web.de< for his
user-level firmware loader, which saves a lot of time
Ulf Hermenau for helping me out with traditional chinese.
André Smoktun and Christian Frömmel for supporting me with
- hardware and listening to my problems very patient.
+ hardware and listening to my problems very patiently.
-How to get the Nebula, PCTV and Twinhan DST cards working
-=========================================================
+How to get the Nebula Electronics DigiTV, Pinnacle PCTV Sat, Twinhan DST + clones working
+=========================================================================================
-This class of cards has a bt878a as the PCI interface, and
-require the bttv driver.
+1) General information
+======================
-Please pay close attention to the warning about the bttv module
-options below for the DST card.
+This class of cards has a bt878a chip as the PCI interface.
+The different card drivers require the bttv driver to provide the means
+to access the i2c bus and the gpio pins of the bt8xx chipset.
-1) General informations
-=======================
+2) Compilation rules for Kernel >= 2.6.12
+=========================================
-These drivers require the bttv driver to provide the means to access
-the i2c bus and the gpio pins of the bt8xx chipset.
+Enable the following options:
-Because of this, you need to enable
"Device drivers" => "Multimedia devices"
- => "Video For Linux" => "BT848 Video For Linux"
-
-Furthermore you need to enable
+ => "Video For Linux" => "BT848 Video For Linux"
"Device drivers" => "Multimedia devices" => "Digital Video Broadcasting Devices"
- => "DVB for Linux" "DVB Core Support" "Nebula/Pinnacle PCTV/TwinHan PCI Cards"
+ => "DVB for Linux" "DVB Core Support" "Nebula/Pinnacle PCTV/TwinHan PCI Cards"
-2) Loading Modules
-==================
+3) Loading Modules, described by two approaches
+===============================================
In general you need to load the bttv driver, which will handle the gpio and
-i2c communication for us, plus the common dvb-bt8xx device driver.
-The frontends for Nebula (nxt6000), Pinnacle PCTV (cx24110) and
-TwinHan (dst) are loaded automatically by the dvb-bt8xx device driver.
+i2c communication for us, plus the common dvb-bt8xx device driver,
+which is called the backend.
+The frontends for Nebula DigiTV (nxt6000), Pinnacle PCTV Sat (cx24110),
+TwinHan DST + clones (dst and dst-ca) are loaded automatically by the backend.
+For further details about TwinHan DST + clones see /Documentation/dvb/ci.txt.
-3a) Nebula / Pinnacle PCTV
---------------------------
+3a) The manual approach
+-----------------------
- $ modprobe bttv (normally bttv is being loaded automatically by kmod)
- $ modprobe dvb-bt8xx (or just place dvb-bt8xx in /etc/modules for automatic loading)
+Loading modules:
+modprobe bttv
+modprobe dvb-bt8xx
+Unloading modules:
+modprobe -r dvb-bt8xx
+modprobe -r bttv
-3b) TwinHan and Clones
+3b) The automatic approach
--------------------------
- $ modprobe bttv i2c_hw=1 card=0x71
- $ modprobe dvb-bt8xx
- $ modprobe dst
-
-The value 0x71 will override the PCI type detection for dvb-bt8xx,
-which is necessary for TwinHan cards.
-
-If you're having an older card (blue color circuit) and card=0x71 locks
-your machine, try using 0x68, too. If that does not work, ask on the
-mailing list.
-
-The DST module takes a couple of useful parameters:
+If not already done by installation, place a line either in
+/etc/modules.conf or in /etc/modprobe.conf containing this text:
+alias char-major-81 bttv
-a. verbose takes values 0 to 5. These values control the verbosity level.
-b. debug takes values 0 and 1. You can either disable or enable debugging.
-c. dst_addons takes values 0 and 0x20:
-- A value of 0 means it is a FTA card.
-- A value of 0x20 means it has a Conditional Access slot.
+Then place a line in /etc/modules containing this text:
+dvb-bt8xx
-The autodetected values are determined by the "response string"
-of the card, which you can see in your logs:
-e.g.: dst_get_device_id: Recognize [DSTMCI]
+Reboot your system and have fun!
--
Authors: Richard Walker, Jamie Honan, Michael Hunold, Manu Abraham, Uwe Bugla
will be available until removal of old names.
Who: Grant Coady <gcoady@gmail.com>
+---------------------------
+
+What: PCMCIA control ioctl (needed for pcmcia-cs [cardmgr, cardctl])
+When: November 2005
+Files: drivers/pcmcia/: pcmcia_ioctl.c
+Why: With the 16-bit PCMCIA subsystem now behaving (almost) like a
+ normal hotpluggable bus, and with it using the default kernel
+ infrastructure (hotplug, driver core, sysfs), keeping the PCMCIA
+ control ioctl needed by cardmgr and cardctl from pcmcia-cs is
+ unnecessary and makes further cleanups and integration of the
+ PCMCIA subsystem into the Linux kernel device driver model more
+ difficult. The features provided by cardmgr and cardctl are either
+ handled by the kernel itself now or are available in the new
+ pcmciautils package available at
+ http://kernel.org/pub/linux/utils/kernel/pcmcia/
+Who: Dominik Brodowski <linux@brodo.de>
--- /dev/null
+USERSPACE VERBS ACCESS
+
+ The ib_uverbs module, built by enabling CONFIG_INFINIBAND_USER_VERBS,
+ enables direct userspace access to IB hardware via "verbs," as
+ described in chapter 11 of the InfiniBand Architecture Specification.
+
+ To use the verbs, the libibverbs library, available from
+ <http://openib.org/>, is required. libibverbs contains a
+ device-independent API for using the ib_uverbs interface.
+ libibverbs also requires appropriate device-dependent kernel and
+ userspace drivers for your InfiniBand hardware.  For example, to use
+ a Mellanox HCA, you will need the ib_mthca kernel module and the
+ libmthca userspace driver to be installed.
+
+User-kernel communication
+
+ Userspace communicates with the kernel for slow path, resource
+ management operations via the /dev/infiniband/uverbsN character
+ devices. Fast path operations are typically performed by writing
+ directly to hardware registers mmap()ed into userspace, with no
+ system call or context switch into the kernel.
+
+ Commands are sent to the kernel via write()s on these device files.
+ The ABI is defined in drivers/infiniband/include/ib_user_verbs.h.
+ The structs for commands that require a response from the kernel
+ contain a 64-bit field used to pass a pointer to an output buffer.
+ Status is returned to userspace as the return value of the write()
+ system call.
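+
+  As an illustration only (the struct layouts and names below are simplified
+  and hypothetical; the authoritative definitions are in
+  drivers/infiniband/include/ib_user_verbs.h), a command that expects a
+  response is sent roughly like this:
+
+      struct cmd_hdr   { __u32 command; __u16 in_words; __u16 out_words; };
+      struct some_cmd  { struct cmd_hdr hdr; __u64 response; };
+      struct some_resp { __u32 value; };         /* written by the kernel */
+
+      int uverbs_fd = open("/dev/infiniband/uverbs0", O_RDWR);
+      struct some_resp resp;
+      struct some_cmd cmd;
+
+      cmd.hdr.command   = SOME_COMMAND;          /* hypothetical number */
+      cmd.hdr.in_words  = sizeof cmd / 4;        /* sizes in 32-bit words */
+      cmd.hdr.out_words = sizeof resp / 4;
+      cmd.response      = (unsigned long) &resp; /* 64-bit user pointer */
+
+      ssize_t status = write(uverbs_fd, &cmd, sizeof cmd);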
+
+Resource management
+
+ Since creation and destruction of all IB resources is done by
+ commands passed through a file descriptor, the kernel can keep track
+ of which resources are attached to a given userspace context. The
+ ib_uverbs module maintains idr tables that are used to translate
+ between kernel pointers and opaque userspace handles, so that kernel
+ pointers are never exposed to userspace and userspace cannot trick
+ the kernel into following a bogus pointer.
+
+ This also allows the kernel to clean up when a process exits and
+ prevent one process from touching another process's resources.
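+
+  Conceptually (the names here are illustrative, not the actual ib_uverbs
+  code), the handle translation uses the kernel idr API roughly as follows:
+
+      /* on resource creation: hand an opaque integer handle to userspace */
+      if (idr_pre_get(&some_idr, GFP_KERNEL) == 0)
+              return -ENOMEM;
+      ret = idr_get_new(&some_idr, kernel_object, &handle);
+
+      /* on later commands: translate the handle back to a kernel pointer */
+      kernel_object = idr_find(&some_idr, handle);
+
+      /* on destruction or process exit */
+      idr_remove(&some_idr, handle);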
+
+Memory pinning
+
+ Direct userspace I/O requires that memory regions that are potential
+ I/O targets be kept resident at the same physical address. The
+ ib_uverbs module manages pinning and unpinning memory regions via
+ get_user_pages() and put_page() calls. It also accounts for the
+ amount of memory pinned in the process's locked_vm, and checks that
+ unprivileged processes do not exceed their RLIMIT_MEMLOCK limit.
+
+ Pages that are pinned multiple times are counted each time they are
+ pinned, so the value of locked_vm may be an overestimate of the
+ number of pages pinned by a process.
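+
+  A sketch of the accounting idea (simplified, not the actual ib_uverbs
+  code) looks like this:
+
+      npages     = PAGE_ALIGN(size + (addr & ~PAGE_MASK)) >> PAGE_SHIFT;
+      lock_limit = current->signal->rlim[RLIMIT_MEMLOCK].rlim_cur >> PAGE_SHIFT;
+
+      down_write(&current->mm->mmap_sem);
+      if (current->mm->locked_vm + npages > lock_limit &&
+          !capable(CAP_IPC_LOCK)) {
+              up_write(&current->mm->mmap_sem);
+              return -ENOMEM;
+      }
+      ret = get_user_pages(current, current->mm, addr & PAGE_MASK, npages,
+                           1 /* write */, 0 /* force */, page_list, NULL);
+      if (ret == npages)
+              current->mm->locked_vm += npages;
+      up_write(&current->mm->mmap_sem);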
+
+/dev files
+
+ To create the appropriate character device files automatically with
+ udev, a rule like
+
+ KERNEL="uverbs*", NAME="infiniband/%k"
+
+ can be used. This will create device nodes named
+
+ /dev/infiniband/uverbs0
+
+ and so on. Since the InfiniBand userspace verbs should be safe for
+ use by non-privileged processes, it may be useful to add an
+ appropriate MODE or GROUP to the udev rule.
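+
+  For example (assuming a group named "rdma" exists on the system), the
+  rule above could be extended to
+
+    KERNEL="uverbs*", NAME="infiniband/%k", MODE="0660", GROUP="rdma"
+
+  so that only members of that group can open the devices.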
Medion MD4220 ??? (*)
Samsung P35 vbetool needed (6)
Sharp PC-AR10 (ATI rage) none (1)
+Sony Vaio PCG-C1VRX/K s3_bios (2)
Sony Vaio PCG-F403 ??? (*)
Sony Vaio PCG-N505SN ??? (*)
Sony Vaio vgn-s260 X or boot-radeon can init it (5)
W: http://atmelwlandriver.sourceforge.net/
S: Maintained
+AUDIT SUBSYSTEM
+L: linux-audit@redhat.com (subscribers-only)
+S: Maintained
+
AX.25 NETWORK LAYER
P: Ralf Baechle
M: ralf@linux-mips.org
S: Maintained
PCMCIA SUBSYSTEM
+P: Linux PCMCIA Team
L: http://lists.infradead.org/mailman/listinfo/linux-pcmcia
-S: Unmaintained
+S: Maintained
PCNET32 NETWORK DRIVER
P: Thomas Bogendörfer
$(Q)$(MAKE) $(build)=$(@D) $@
%.o: %.c scripts FORCE
$(Q)$(MAKE) $(build)=$(@D) $@
+%.ko: scripts FORCE
+ $(Q)$(MAKE) KBUILD_MODULES=$(if $(CONFIG_MODULES),1) $(build)=$(@D) $(@:.ko=.o)
+ $(Q)$(MAKE) -rR -f $(srctree)/scripts/Makefile.modpost
%/: scripts prepare FORCE
$(Q)$(MAKE) KBUILD_MODULES=$(if $(CONFIG_MODULES),1) $(build)=$(@D)
%.lst: %.c scripts FORCE
@echo ' modules_install - Install all modules'
@echo ' dir/ - Build all files in dir and below'
@echo ' dir/file.[ois] - Build specified target only'
+ @echo ' dir/file.ko - Build module including final link'
@echo ' rpm - Build a kernel as an RPM package'
@echo ' tags/TAGS - Generate tags file for editors'
@echo ' cscope - Generate cscope index'
#(which is the most common case IMHO) to avoid unneeded clutter in the big tags file.
#Adding $(srctree) adds about 20M on i386 to the size of the output file!
-ifeq ($(KBUILD_OUTPUT),)
+ifeq ($(src),$(obj))
__srctree =
else
__srctree = $(srctree)/
--- /dev/null
+#
+# Automatically generated make config: don't edit
+# Linux kernel version: 2.6.11.8
+# Fri May 13 17:16:03 2005
+#
+CONFIG_FRV=y
+CONFIG_UID16=y
+CONFIG_RWSEM_GENERIC_SPINLOCK=y
+CONFIG_GENERIC_FIND_NEXT_BIT=y
+# CONFIG_GENERIC_CALIBRATE_DELAY is not set
+# CONFIG_GENERIC_HARDIRQS is not set
+
+#
+# Code maturity level options
+#
+CONFIG_EXPERIMENTAL=y
+CONFIG_CLEAN_COMPILE=y
+CONFIG_BROKEN_ON_SMP=y
+CONFIG_INIT_ENV_ARG_LIMIT=32
+
+#
+# General setup
+#
+CONFIG_LOCALVERSION=""
+CONFIG_SWAP=y
+CONFIG_SYSVIPC=y
+CONFIG_POSIX_MQUEUE=y
+# CONFIG_BSD_PROCESS_ACCT is not set
+CONFIG_SYSCTL=y
+# CONFIG_AUDIT is not set
+# CONFIG_HOTPLUG is not set
+# CONFIG_KOBJECT_UEVENT is not set
+# CONFIG_IKCONFIG is not set
+CONFIG_EMBEDDED=y
+CONFIG_KALLSYMS=y
+# CONFIG_KALLSYMS_ALL is not set
+# CONFIG_KALLSYMS_EXTRA_PASS is not set
+CONFIG_PRINTK=y
+CONFIG_BUG=y
+CONFIG_BASE_FULL=y
+CONFIG_FUTEX=y
+CONFIG_EPOLL=y
+# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
+CONFIG_SHMEM=y
+CONFIG_CC_ALIGN_FUNCTIONS=0
+CONFIG_CC_ALIGN_LABELS=0
+CONFIG_CC_ALIGN_LOOPS=0
+CONFIG_CC_ALIGN_JUMPS=0
+# CONFIG_TINY_SHMEM is not set
+CONFIG_BASE_SMALL=0
+
+#
+# Loadable module support
+#
+# CONFIG_MODULES is not set
+
+#
+# Fujitsu FR-V system setup
+#
+CONFIG_MMU=y
+CONFIG_FRV_OUTOFLINE_ATOMIC_OPS=y
+CONFIG_HIGHMEM=y
+CONFIG_HIGHPTE=y
+CONFIG_SELECT_MEMORY_MODEL=y
+CONFIG_FLATMEM_MANUAL=y
+# CONFIG_DISCONTIGMEM_MANUAL is not set
+# CONFIG_SPARSEMEM_MANUAL is not set
+CONFIG_FLATMEM=y
+CONFIG_FLAT_NODE_MEM_MAP=y
+# CONFIG_FRV_DEFL_CACHE_WBACK is not set
+# CONFIG_FRV_DEFL_CACHE_WBEHIND is not set
+CONFIG_FRV_DEFL_CACHE_WTHRU=y
+# CONFIG_FRV_DEFL_CACHE_DISABLED is not set
+
+#
+# CPU core support
+#
+CONFIG_CPU_FR451=y
+CONFIG_CPU_FR451_COMPILE=y
+CONFIG_FRV_L1_CACHE_SHIFT=5
+CONFIG_MB93091_VDK=y
+# CONFIG_MB93093_PDK is not set
+CONFIG_MB93090_MB00=y
+# CONFIG_MB93091_NO_MB is not set
+# CONFIG_GPREL_DATA_8 is not set
+CONFIG_GPREL_DATA_4=y
+# CONFIG_GPREL_DATA_NONE is not set
+CONFIG_PCI=y
+# CONFIG_PCI_LEGACY_PROC is not set
+# CONFIG_PCI_NAMES is not set
+# CONFIG_PCI_DEBUG is not set
+# CONFIG_PCMCIA is not set
+
+#
+# Power management options
+#
+# CONFIG_PM is not set
+
+#
+# Executable formats
+#
+# CONFIG_BINFMT_ELF is not set
+CONFIG_BINFMT_ELF_FDPIC=y
+# CONFIG_BINFMT_MISC is not set
+
+#
+# Device Drivers
+#
+
+#
+# Generic Driver Options
+#
+# CONFIG_STANDALONE is not set
+# CONFIG_PREVENT_FIRMWARE_BUILD is not set
+# CONFIG_FW_LOADER is not set
+# CONFIG_DEBUG_DRIVER is not set
+
+#
+# Connector - unified userspace <-> kernelspace linker
+#
+# CONFIG_CONNECTOR is not set
+# CONFIG_FORK_CONNECTOR is not set
+
+#
+# Memory Technology Devices (MTD)
+#
+# CONFIG_MTD is not set
+
+#
+# Parallel port support
+#
+# CONFIG_PARPORT is not set
+
+#
+# Plug and Play support
+#
+
+#
+# Block devices
+#
+# CONFIG_BLK_DEV_FD is not set
+# CONFIG_BLK_CPQ_DA is not set
+# CONFIG_BLK_CPQ_CISS_DA is not set
+# CONFIG_BLK_DEV_DAC960 is not set
+# CONFIG_BLK_DEV_UMEM is not set
+# CONFIG_BLK_DEV_COW_COMMON is not set
+# CONFIG_BLK_DEV_LOOP is not set
+# CONFIG_BLK_DEV_NBD is not set
+# CONFIG_BLK_DEV_SX8 is not set
+# CONFIG_BLK_DEV_RAM is not set
+CONFIG_BLK_DEV_RAM_COUNT=16
+CONFIG_INITRAMFS_SOURCE=""
+# CONFIG_CDROM_PKTCDVD is not set
+
+#
+# IO Schedulers
+#
+CONFIG_IOSCHED_NOOP=y
+CONFIG_IOSCHED_AS=y
+CONFIG_IOSCHED_DEADLINE=y
+CONFIG_IOSCHED_CFQ=y
+# CONFIG_ATA_OVER_ETH is not set
+
+#
+# ATA/ATAPI/MFM/RLL support
+#
+# CONFIG_IDE is not set
+
+#
+# SCSI device support
+#
+# CONFIG_SCSI is not set
+
+#
+# Multi-device support (RAID and LVM)
+#
+# CONFIG_MD is not set
+
+#
+# Fusion MPT device support
+#
+# CONFIG_FUSION is not set
+
+#
+# IEEE 1394 (FireWire) support
+#
+# CONFIG_IEEE1394 is not set
+
+#
+# I2O device support
+#
+# CONFIG_I2O is not set
+
+#
+# Networking support
+#
+CONFIG_NET=y
+
+#
+# Networking options
+#
+CONFIG_PACKET=y
+# CONFIG_PACKET_MMAP is not set
+CONFIG_UNIX=y
+# CONFIG_NET_KEY is not set
+CONFIG_INET=y
+# CONFIG_IP_MULTICAST is not set
+# CONFIG_IP_ADVANCED_ROUTER is not set
+CONFIG_IP_PNP=y
+# CONFIG_IP_PNP_DHCP is not set
+# CONFIG_IP_PNP_BOOTP is not set
+# CONFIG_IP_PNP_RARP is not set
+# CONFIG_NET_IPIP is not set
+# CONFIG_NET_IPGRE is not set
+# CONFIG_ARPD is not set
+# CONFIG_SYN_COOKIES is not set
+# CONFIG_INET_AH is not set
+# CONFIG_INET_ESP is not set
+# CONFIG_INET_IPCOMP is not set
+# CONFIG_INET_TUNNEL is not set
+# CONFIG_IP_TCPDIAG is not set
+# CONFIG_IP_TCPDIAG_IPV6 is not set
+# CONFIG_IPV6 is not set
+# CONFIG_NETFILTER is not set
+
+#
+# SCTP Configuration (EXPERIMENTAL)
+#
+# CONFIG_IP_SCTP is not set
+# CONFIG_ATM is not set
+# CONFIG_BRIDGE is not set
+# CONFIG_VLAN_8021Q is not set
+# CONFIG_DECNET is not set
+# CONFIG_LLC2 is not set
+# CONFIG_IPX is not set
+# CONFIG_ATALK is not set
+# CONFIG_X25 is not set
+# CONFIG_LAPB is not set
+# CONFIG_NET_DIVERT is not set
+# CONFIG_ECONET is not set
+# CONFIG_WAN_ROUTER is not set
+
+#
+# QoS and/or fair queueing
+#
+# CONFIG_NET_SCHED is not set
+# CONFIG_NET_CLS_ROUTE is not set
+
+#
+# Network testing
+#
+# CONFIG_NET_PKTGEN is not set
+# CONFIG_KGDBOE is not set
+# CONFIG_NETPOLL is not set
+# CONFIG_NETPOLL_RX is not set
+# CONFIG_NETPOLL_TRAP is not set
+# CONFIG_NET_POLL_CONTROLLER is not set
+# CONFIG_HAMRADIO is not set
+# CONFIG_IRDA is not set
+# CONFIG_BT is not set
+# CONFIG_IEEE80211 is not set
+CONFIG_NETDEVICES=y
+# CONFIG_DUMMY is not set
+# CONFIG_BONDING is not set
+# CONFIG_EQUALIZER is not set
+# CONFIG_TUN is not set
+
+#
+# ARCnet devices
+#
+# CONFIG_ARCNET is not set
+
+#
+# Ethernet (10 or 100Mbit)
+#
+CONFIG_NET_ETHERNET=y
+CONFIG_MII=y
+# CONFIG_HAPPYMEAL is not set
+# CONFIG_SUNGEM is not set
+# CONFIG_NET_VENDOR_3COM is not set
+
+#
+# Tulip family network device support
+#
+# CONFIG_NET_TULIP is not set
+# CONFIG_HP100 is not set
+CONFIG_NET_PCI=y
+# CONFIG_PCNET32 is not set
+# CONFIG_AMD8111_ETH is not set
+# CONFIG_ADAPTEC_STARFIRE is not set
+# CONFIG_B44 is not set
+# CONFIG_FORCEDETH is not set
+# CONFIG_DGRS is not set
+# CONFIG_EEPRO100 is not set
+# CONFIG_E100 is not set
+# CONFIG_FEALNX is not set
+# CONFIG_NATSEMI is not set
+CONFIG_NE2K_PCI=y
+# CONFIG_8139CP is not set
+# CONFIG_8139TOO is not set
+# CONFIG_SIS900 is not set
+# CONFIG_EPIC100 is not set
+# CONFIG_SUNDANCE is not set
+# CONFIG_TLAN is not set
+# CONFIG_VIA_RHINE is not set
+
+#
+# Ethernet (1000 Mbit)
+#
+# CONFIG_ACENIC is not set
+# CONFIG_DL2K is not set
+# CONFIG_E1000 is not set
+# CONFIG_NS83820 is not set
+# CONFIG_HAMACHI is not set
+# CONFIG_YELLOWFIN is not set
+# CONFIG_R8169 is not set
+# CONFIG_SKGE is not set
+# CONFIG_SK98LIN is not set
+# CONFIG_VIA_VELOCITY is not set
+# CONFIG_TIGON3 is not set
+
+#
+# Ethernet (10000 Mbit)
+#
+# CONFIG_CHELSIO_T1 is not set
+# CONFIG_IXGB is not set
+# CONFIG_S2IO is not set
+
+#
+# Token Ring devices
+#
+# CONFIG_TR is not set
+
+#
+# Wireless LAN (non-hamradio)
+#
+# CONFIG_NET_RADIO is not set
+
+#
+# Wan interfaces
+#
+# CONFIG_WAN is not set
+# CONFIG_FDDI is not set
+# CONFIG_HIPPI is not set
+# CONFIG_PPP is not set
+# CONFIG_SLIP is not set
+# CONFIG_SHAPER is not set
+# CONFIG_NETCONSOLE is not set
+
+#
+# ISDN subsystem
+#
+# CONFIG_ISDN is not set
+
+#
+# Telephony Support
+#
+# CONFIG_PHONE is not set
+
+#
+# Input device support
+#
+# CONFIG_INPUT is not set
+
+#
+# Hardware I/O ports
+#
+# CONFIG_SERIO is not set
+# CONFIG_GAMEPORT is not set
+
+#
+# Character devices
+#
+# CONFIG_VT is not set
+# CONFIG_SERIAL_NONSTANDARD is not set
+
+#
+# Serial drivers
+#
+CONFIG_SERIAL_8250=y
+CONFIG_SERIAL_8250_CONSOLE=y
+CONFIG_SERIAL_8250_NR_UARTS=1
+CONFIG_SERIAL_8250_EXTENDED=y
+# CONFIG_SERIAL_8250_MANY_PORTS is not set
+CONFIG_SERIAL_8250_SHARE_IRQ=y
+# CONFIG_SERIAL_8250_DETECT_IRQ is not set
+# CONFIG_SERIAL_8250_MULTIPORT is not set
+# CONFIG_SERIAL_8250_RSA is not set
+
+#
+# Non-8250 serial port support
+#
+CONFIG_SERIAL_CORE=y
+CONFIG_SERIAL_CORE_CONSOLE=y
+# CONFIG_SERIAL_JSM is not set
+CONFIG_UNIX98_PTYS=y
+# CONFIG_LEGACY_PTYS is not set
+
+#
+# IPMI
+#
+# CONFIG_IPMI_HANDLER is not set
+
+#
+# Watchdog Cards
+#
+# CONFIG_WATCHDOG is not set
+# CONFIG_RTC is not set
+# CONFIG_GEN_RTC is not set
+# CONFIG_DTLK is not set
+# CONFIG_R3964 is not set
+# CONFIG_APPLICOM is not set
+
+#
+# Ftape, the floppy tape device driver
+#
+# CONFIG_DRM is not set
+# CONFIG_RAW_DRIVER is not set
+
+#
+# TPM devices
+#
+# CONFIG_TCG_TPM is not set
+
+#
+# I2C support
+#
+# CONFIG_I2C is not set
+
+#
+# Dallas's 1-wire bus
+#
+# CONFIG_W1 is not set
+
+#
+# Misc devices
+#
+
+#
+# Multimedia devices
+#
+# CONFIG_VIDEO_DEV is not set
+
+#
+# Digital Video Broadcasting Devices
+#
+# CONFIG_DVB is not set
+
+#
+# Graphics support
+#
+# CONFIG_FB is not set
+
+#
+# Sound
+#
+# CONFIG_SOUND is not set
+
+#
+# USB support
+#
+CONFIG_USB_ARCH_HAS_HCD=y
+CONFIG_USB_ARCH_HAS_OHCI=y
+# CONFIG_USB is not set
+
+#
+# USB Gadget Support
+#
+# CONFIG_USB_GADGET is not set
+
+#
+# MMC/SD Card support
+#
+# CONFIG_MMC is not set
+
+#
+# InfiniBand support
+#
+# CONFIG_INFINIBAND is not set
+
+#
+# File systems
+#
+# CONFIG_EXT2_FS is not set
+# CONFIG_EXT3_FS is not set
+# CONFIG_JBD is not set
+# CONFIG_REISER4_FS is not set
+# CONFIG_REISERFS_FS is not set
+# CONFIG_JFS_FS is not set
+
+#
+# XFS support
+#
+# CONFIG_XFS_FS is not set
+# CONFIG_MINIX_FS is not set
+# CONFIG_ROMFS_FS is not set
+CONFIG_INOTIFY=y
+# CONFIG_QUOTA is not set
+CONFIG_DNOTIFY=y
+# CONFIG_AUTOFS_FS is not set
+# CONFIG_AUTOFS4_FS is not set
+
+#
+# Caches
+#
+# CONFIG_FSCACHE is not set
+# CONFIG_FUSE_FS is not set
+
+#
+# CD-ROM/DVD Filesystems
+#
+# CONFIG_ISO9660_FS is not set
+# CONFIG_UDF_FS is not set
+
+#
+# DOS/FAT/NT Filesystems
+#
+# CONFIG_MSDOS_FS is not set
+# CONFIG_VFAT_FS is not set
+# CONFIG_NTFS_FS is not set
+
+#
+# Pseudo filesystems
+#
+CONFIG_PROC_FS=y
+# CONFIG_PROC_KCORE is not set
+CONFIG_SYSFS=y
+# CONFIG_DEVFS_FS is not set
+# CONFIG_DEVPTS_FS_XATTR is not set
+CONFIG_TMPFS=y
+# CONFIG_TMPFS_XATTR is not set
+# CONFIG_HUGETLB_PAGE is not set
+CONFIG_RAMFS=y
+# CONFIG_RELAYFS_FS is not set
+
+#
+# Miscellaneous filesystems
+#
+# CONFIG_ADFS_FS is not set
+# CONFIG_AFFS_FS is not set
+# CONFIG_HFS_FS is not set
+# CONFIG_HFSPLUS_FS is not set
+# CONFIG_BEFS_FS is not set
+# CONFIG_BFS_FS is not set
+# CONFIG_EFS_FS is not set
+# CONFIG_CRAMFS is not set
+# CONFIG_VXFS_FS is not set
+# CONFIG_HPFS_FS is not set
+# CONFIG_QNX4FS_FS is not set
+# CONFIG_SYSV_FS is not set
+# CONFIG_UFS_FS is not set
+
+#
+# Network File Systems
+#
+CONFIG_NFS_FS=y
+# CONFIG_NFS_V3 is not set
+# CONFIG_NFS_V4 is not set
+# CONFIG_NFS_DIRECTIO is not set
+# CONFIG_NFSD is not set
+CONFIG_ROOT_NFS=y
+CONFIG_LOCKD=y
+CONFIG_NFS_COMMON=y
+CONFIG_SUNRPC=y
+# CONFIG_RPCSEC_GSS_KRB5 is not set
+# CONFIG_RPCSEC_GSS_SPKM3 is not set
+# CONFIG_SMB_FS is not set
+# CONFIG_CIFS is not set
+# CONFIG_NCP_FS is not set
+# CONFIG_CODA_FS is not set
+# CONFIG_AFS_FS is not set
+
+#
+# Partition Types
+#
+# CONFIG_PARTITION_ADVANCED is not set
+CONFIG_MSDOS_PARTITION=y
+
+#
+# Native Language Support
+#
+# CONFIG_NLS is not set
+
+#
+# Kernel hacking
+#
+# CONFIG_PRINTK_TIME is not set
+CONFIG_DEBUG_KERNEL=y
+# CONFIG_MAGIC_SYSRQ is not set
+CONFIG_LOG_BUF_SHIFT=14
+CONFIG_DETECT_SOFTLOCKUP=y
+# CONFIG_SCHEDSTATS is not set
+# CONFIG_DEBUG_SLAB is not set
+# CONFIG_DEBUG_SPINLOCK is not set
+# CONFIG_DEBUG_SPINLOCK_SLEEP is not set
+# CONFIG_DEBUG_KOBJECT is not set
+# CONFIG_DEBUG_HIGHMEM is not set
+# CONFIG_DEBUG_BUGVERBOSE is not set
+# CONFIG_DEBUG_INFO is not set
+# CONFIG_DEBUG_FS is not set
+# CONFIG_FRAME_POINTER is not set
+# CONFIG_EARLY_PRINTK is not set
+CONFIG_DEBUG_STACKOVERFLOW=y
+# CONFIG_DEBUG_PAGEALLOC is not set
+# CONFIG_GDBSTUB is not set
+
+#
+# Security options
+#
+# CONFIG_KEYS is not set
+# CONFIG_SECURITY is not set
+
+#
+# Cryptographic options
+#
+# CONFIG_CRYPTO is not set
+
+#
+# Hardware crypto devices
+#
+
+#
+# Library routines
+#
+# CONFIG_CRC_CCITT is not set
+CONFIG_CRC32=y
+# CONFIG_LIBCRC32C is not set
if (c == &boot_cpu_data)
sysenter_setup();
enable_sep_cpu();
+
+ if (c == &boot_cpu_data)
+ mtrr_bp_init();
+ else
+ mtrr_ap_init();
}
#ifdef CONFIG_X86_HT
/*
* Alignment at which movsl is preferred for bulk memory copies.
*/
-struct movsl_mask movsl_mask;
+struct movsl_mask movsl_mask __read_mostly;
#endif
void __devinit early_intel_workaround(struct cpuinfo_x86 *c)
mtrr_state.enabled = (lo & 0xc00) >> 10;
}
-/* Free resources associated with a struct mtrr_state */
-void __init finalize_mtrr_state(void)
-{
- kfree(mtrr_state.var_ranges);
- mtrr_state.var_ranges = NULL;
-}
-
/* Some BIOS's are fucked and don't set all MTRRs the same! */
void __init mtrr_state_warn(void)
{
*/
{
unsigned long flags;
+ struct mtrr_var_range *vr;
+
+ vr = &mtrr_state.var_ranges[reg];
local_irq_save(flags);
prepare_set();
/* The invalid bit is kept in the mask, so we simply clear the
relevant mask register to disable a range. */
mtrr_wrmsr(MTRRphysMask_MSR(reg), 0, 0);
+ memset(vr, 0, sizeof(struct mtrr_var_range));
} else {
- mtrr_wrmsr(MTRRphysBase_MSR(reg), base << PAGE_SHIFT | type,
- (base & size_and_mask) >> (32 - PAGE_SHIFT));
- mtrr_wrmsr(MTRRphysMask_MSR(reg), -size << PAGE_SHIFT | 0x800,
- (-size & size_and_mask) >> (32 - PAGE_SHIFT));
+ vr->base_lo = base << PAGE_SHIFT | type;
+ vr->base_hi = (base & size_and_mask) >> (32 - PAGE_SHIFT);
+ vr->mask_lo = -size << PAGE_SHIFT | 0x800;
+ vr->mask_hi = (-size & size_and_mask) >> (32 - PAGE_SHIFT);
+
+ mtrr_wrmsr(MTRRphysBase_MSR(reg), vr->base_lo, vr->base_hi);
+ mtrr_wrmsr(MTRRphysMask_MSR(reg), vr->mask_lo, vr->mask_hi);
}
post_set();
error = -EINVAL;
+ /* No CPU hotplug when we change MTRR entries */
+ lock_cpu_hotplug();
/* Search for existing MTRR */
down(&main_lock);
for (i = 0; i < num_var_ranges; ++i) {
error = i;
out:
up(&main_lock);
+ unlock_cpu_hotplug();
return error;
}
return -ENXIO;
max = num_var_ranges;
+ /* No CPU hotplug when we change MTRR entries */
+ lock_cpu_hotplug();
down(&main_lock);
if (reg < 0) {
/* Search for existing MTRR */
error = reg;
out:
up(&main_lock);
+ unlock_cpu_hotplug();
return error;
}
/**
centaur_init_mtrr();
}
-static void __init init_other_cpus(void)
-{
- if (use_intel())
- get_mtrr_state();
-
- /* bring up the other processors */
- set_mtrr(~0U,0,0,0);
-
- if (use_intel()) {
- finalize_mtrr_state();
- mtrr_state_warn();
- }
-}
-
-
+/* The suspend/resume methods are only for CPUs without MTRR. CPUs using the
+ * generic MTRR driver do not require this.
+ */
struct mtrr_value {
mtrr_type ltype;
unsigned long lbase;
/**
- * mtrr_init - initialize mtrrs on the boot CPU
+ * mtrr_bp_init - initialize mtrrs on the boot CPU
*
* This needs to be called early; before any of the other CPUs are
* initialized (i.e. before smp_init()).
*
*/
-static int __init mtrr_init(void)
+void __init mtrr_bp_init(void)
{
init_ifs();
if (mtrr_if) {
set_num_var_ranges();
init_table();
- init_other_cpus();
-
- return sysdev_driver_register(&cpu_sysdev_class,
- &mtrr_sysdev_driver);
+ if (use_intel())
+ get_mtrr_state();
}
- return -ENXIO;
}
-subsys_initcall(mtrr_init);
+void mtrr_ap_init(void)
+{
+ unsigned long flags;
+
+ if (!mtrr_if || !use_intel())
+ return;
+ /*
+ * Ideally we should hold main_lock here to avoid MTRR entries being
+ * changed, but this routine is called at CPU boot time and holding the
+ * lock there breaks it. This routine is called in two cases:
+ * 1. very early during software resume, when there are absolutely no
+ *    MTRR entry changes;
+ * 2. at CPU hot-add time. We let mtrr_add/del_page hold the cpu hotplug
+ *    lock to prevent MTRR entry changes.
+ */
+ local_irq_save(flags);
+
+ mtrr_if->set_all();
+
+ local_irq_restore(flags);
+}
+
+static int __init mtrr_init_finalize(void)
+{
+ if (!mtrr_if)
+ return 0;
+ if (use_intel())
+ mtrr_state_warn();
+ else {
+ /* These CPUs have no MTRRs and seem not to support SMP. They have
+ * specific drivers; we use a tricky method to support
+ * suspend/resume for them.
+ * TBD: is there any system with such a CPU that supports
+ * suspend/resume? If not, we should remove this code.
+ */
+ sysdev_driver_register(&cpu_sysdev_class,
+ &mtrr_sysdev_driver);
+ }
+ return 0;
+}
+subsys_initcall(mtrr_init_finalize);
extern unsigned int num_var_ranges;
-void finalize_mtrr_state(void);
void mtrr_state_warn(void);
char *mtrr_attrib_to_str(int x);
void mtrr_wrmsr(unsigned, unsigned, unsigned);
#endif
/* Package ID of each logical CPU */
-int phys_proc_id[NR_CPUS] = {[0 ... NR_CPUS-1] = BAD_APICID};
+int phys_proc_id[NR_CPUS] __read_mostly = {[0 ... NR_CPUS-1] = BAD_APICID};
EXPORT_SYMBOL(phys_proc_id);
/* Core ID of each logical CPU */
-int cpu_core_id[NR_CPUS] = {[0 ... NR_CPUS-1] = BAD_APICID};
+int cpu_core_id[NR_CPUS] __read_mostly = {[0 ... NR_CPUS-1] = BAD_APICID};
EXPORT_SYMBOL(cpu_core_id);
-cpumask_t cpu_sibling_map[NR_CPUS];
+cpumask_t cpu_sibling_map[NR_CPUS] __read_mostly;
EXPORT_SYMBOL(cpu_sibling_map);
-cpumask_t cpu_core_map[NR_CPUS];
+cpumask_t cpu_core_map[NR_CPUS] __read_mostly;
EXPORT_SYMBOL(cpu_core_map);
/* bitmap of online cpus */
-cpumask_t cpu_online_map;
+cpumask_t cpu_online_map __read_mostly;
EXPORT_SYMBOL(cpu_online_map);
cpumask_t cpu_callin_map;
struct cpuinfo_x86 cpu_data[NR_CPUS] __cacheline_aligned;
EXPORT_SYMBOL(cpu_data);
-u8 x86_cpu_to_apicid[NR_CPUS] =
+u8 x86_cpu_to_apicid[NR_CPUS] __read_mostly =
{ [0 ... NR_CPUS-1] = 0xff };
EXPORT_SYMBOL(x86_cpu_to_apicid);
#ifdef CONFIG_NUMA
/* which logical CPUs are on which nodes */
-cpumask_t node_2_cpu_mask[MAX_NUMNODES] =
+cpumask_t node_2_cpu_mask[MAX_NUMNODES] __read_mostly =
{ [0 ... MAX_NUMNODES-1] = CPU_MASK_NONE };
/* which node each logical CPU is on */
-int cpu_2_node[NR_CPUS] = { [0 ... NR_CPUS-1] = 0 };
+int cpu_2_node[NR_CPUS] __read_mostly = { [0 ... NR_CPUS-1] = 0 };
EXPORT_SYMBOL(cpu_2_node);
/* set up a mapping between cpu and node. */
#endif /* CONFIG_NUMA */
-u8 cpu_2_logical_apicid[NR_CPUS] = { [0 ... NR_CPUS-1] = BAD_APICID };
+u8 cpu_2_logical_apicid[NR_CPUS] __read_mostly = { [0 ... NR_CPUS-1] = BAD_APICID };
static void map_cpu_to_logical_apicid(void)
{
DEFINE_SPINLOCK(i8253_lock);
EXPORT_SYMBOL(i8253_lock);
-struct timer_opts *cur_timer = &timer_none;
+struct timer_opts *cur_timer __read_mostly = &timer_none;
/*
* This is a special lock that is owned by the CPU and holds the index
#include "mach_timer.h"
#include <asm/hpet.h>
-static unsigned long hpet_usec_quotient; /* convert hpet clks to usec */
+static unsigned long __read_mostly hpet_usec_quotient; /* convert hpet clks to usec */
static unsigned long tsc_hpet_quotient; /* convert tsc to hpet clks */
static unsigned long hpet_last; /* hpet counter value at last tick*/
static unsigned long last_tsc_low; /* lsb 32 bits of Time Stamp Counter */
/************************************************************/
/* tsc timer_opts struct */
-static struct timer_opts timer_hpet = {
+static struct timer_opts timer_hpet __read_mostly = {
.name = "hpet",
.mark_offset = mark_offset_hpet,
.get_offset = get_offset_hpet,
*(.data.cacheline_aligned)
}
+ /* rarely changed data like cpu maps */
+ . = ALIGN(32);
+ .data.read_mostly : AT(ADDR(.data.read_mostly) - LOAD_OFFSET) { *(.data.read_mostly) }
_edata = .; /* End of data section */
. = ALIGN(THREAD_SIZE); /* init_task */
void iounmap(volatile void __iomem *addr)
{
struct vm_struct *p;
- if ((void __force *) addr <= high_memory)
+
+ if ((void __force *)addr <= high_memory)
return;
/*
return;
write_lock(&vmlist_lock);
- p = __remove_vm_area((void *) (PAGE_MASK & (unsigned long __force) addr));
+ p = __remove_vm_area((void *)(PAGE_MASK & (unsigned long __force)addr));
if (!p) {
printk(KERN_WARNING "iounmap: bad address %p\n", addr);
+ dump_stack();
goto out_unlock;
}
fix_processor_context();
do_fpu_end();
+ mtrr_ap_init();
}
void restore_processor_state(void)
obj-$(CONFIG_IOSAPIC) += iosapic.o
obj-$(CONFIG_MODULES) += module.o
obj-$(CONFIG_SMP) += smp.o smpboot.o domain.o
+obj-$(CONFIG_NUMA) += numa.o
obj-$(CONFIG_PERFMON) += perfmon_default_smpl.o
obj-$(CONFIG_IA64_CYCLONE) += cyclone.o
obj-$(CONFIG_IA64_MCA_RECOVERY) += mca_recovery.o
if (smp_boot_data.cpu_phys_id[cpu] != hard_smp_processor_id())
node_cpuid[i++].phys_id = smp_boot_data.cpu_phys_id[cpu];
}
- build_cpu_to_node_map();
# endif
+#endif
+#ifdef CONFIG_ACPI_NUMA
+ build_cpu_to_node_map();
#endif
/* Make boot-up look pretty */
printk(KERN_INFO "%d CPUs available, %d CPUs total\n", available_cpus, total_cpus);
--- /dev/null
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * ia64 kernel NUMA specific stuff
+ *
+ * Copyright (C) 2002 Erich Focht <efocht@ess.nec.de>
+ * Copyright (C) 2004 Silicon Graphics, Inc.
+ * Jesse Barnes <jbarnes@sgi.com>
+ */
+#include <linux/config.h>
+#include <linux/topology.h>
+#include <linux/module.h>
+#include <asm/processor.h>
+#include <asm/smp.h>
+
+u8 cpu_to_node_map[NR_CPUS] __cacheline_aligned;
+EXPORT_SYMBOL(cpu_to_node_map);
+
+cpumask_t node_to_cpu_mask[MAX_NUMNODES] __cacheline_aligned;
+
+/**
+ * build_cpu_to_node_map - setup cpu to node and node to cpumask arrays
+ *
+ * Build cpu to node mapping and initialize the per node cpu masks using
+ * info from the node_cpuid array handed to us by ACPI.
+ */
+void __init build_cpu_to_node_map(void)
+{
+ int cpu, i, node;
+
+ for(node=0; node < MAX_NUMNODES; node++)
+ cpus_clear(node_to_cpu_mask[node]);
+
+ for(cpu = 0; cpu < NR_CPUS; ++cpu) {
+ node = -1;
+ for (i = 0; i < NR_CPUS; ++i)
+ if (cpu_physical_id(cpu) == node_cpuid[i].phys_id) {
+ node = node_cpuid[i].nid;
+ break;
+ }
+ cpu_to_node_map[cpu] = (node >= 0) ? node : 0;
+ if (node >= 0)
+ cpu_set(cpu, node_to_cpu_mask[node]);
+ }
+}
__copy_from_user(current->thread.fph, &sc->sc_fr[32], 96*16);
psr->mfh = 0; /* drop signal handler's fph contents... */
+ preempt_disable();
if (psr->dfh)
ia64_drop_fpu(current);
else {
__ia64_load_fpu(current->thread.fph);
ia64_set_local_fpu_owner(current);
}
+ preempt_enable();
}
return err;
}
}
}
-#ifdef CONFIG_NUMA
-
-/* on which node is each logical CPU (one cacheline even for 64 CPUs) */
-u8 cpu_to_node_map[NR_CPUS] __cacheline_aligned;
-EXPORT_SYMBOL(cpu_to_node_map);
-/* which logical CPUs are on which nodes */
-cpumask_t node_to_cpu_mask[MAX_NUMNODES] __cacheline_aligned;
-
-/*
- * Build cpu to node mapping and initialize the per node cpu masks.
- */
-void __init
-build_cpu_to_node_map (void)
-{
- int cpu, i, node;
-
- for(node=0; node<MAX_NUMNODES; node++)
- cpus_clear(node_to_cpu_mask[node]);
- for(cpu = 0; cpu < NR_CPUS; ++cpu) {
- /*
- * All Itanium NUMA platforms I know use ACPI, so maybe we
- * can drop this ifdef completely. [EF]
- */
-#ifdef CONFIG_ACPI_NUMA
- node = -1;
- for (i = 0; i < NR_CPUS; ++i)
- if (cpu_physical_id(cpu) == node_cpuid[i].phys_id) {
- node = node_cpuid[i].nid;
- break;
- }
-#else
-# error Fixme: Dunno how to build CPU-to-node map.
-#endif
- cpu_to_node_map[cpu] = (node >= 0) ? node : 0;
- if (node >= 0)
- cpu_set(cpu, node_to_cpu_mask[node]);
- }
-}
-
-#endif /* CONFIG_NUMA */
-
/*
* Cycle through the APs sending Wakeup IPIs to boot each.
*/
.lock_owner_depth = 0
};
static int die_counter;
+ int cpu = get_cpu();
- if (die.lock_owner != smp_processor_id()) {
+ if (die.lock_owner != cpu) {
console_verbose();
spin_lock_irq(&die.lock);
- die.lock_owner = smp_processor_id();
+ die.lock_owner = cpu;
die.lock_owner_depth = 0;
bust_spinlocks(1);
}
+ put_cpu();
if (++die.lock_owner_depth < 3) {
printk("%s[%d]: %s %ld [%d]\n",
};
static struct early_node_data mem_data[MAX_NUMNODES] __initdata;
-
-/**
- * reassign_cpu_only_nodes - called from find_memory to move CPU-only nodes to a memory node
- *
- * This function will move nodes with only CPUs (no memory)
- * to a node with memory which is at the minimum numa_slit distance.
- * Any reassigments will result in the compression of the nodes
- * and renumbering the nid values where appropriate.
- * The static declarations below are to avoid large stack size which
- * makes the code not re-entrant.
- */
-static void __init reassign_cpu_only_nodes(void)
-{
- struct node_memblk_s *p;
- int i, j, k, nnode, nid, cpu, cpunid, pxm;
- u8 cslit, slit;
- static DECLARE_BITMAP(nodes_with_mem, MAX_NUMNODES) __initdata;
- static u8 numa_slit_fix[MAX_NUMNODES * MAX_NUMNODES] __initdata;
- static int node_flip[MAX_NUMNODES] __initdata;
- static int old_nid_map[NR_CPUS] __initdata;
-
- for (nnode = 0, p = &node_memblk[0]; p < &node_memblk[num_node_memblks]; p++)
- if (!test_bit(p->nid, (void *) nodes_with_mem)) {
- set_bit(p->nid, (void *) nodes_with_mem);
- nnode++;
- }
-
- /*
- * All nids with memory.
- */
- if (nnode == num_online_nodes())
- return;
-
- /*
- * Change nids and attempt to migrate CPU-only nodes
- * to the best numa_slit (closest neighbor) possible.
- * For reassigned CPU nodes a nid can't be arrived at
- * until after this loop because the target nid's new
- * identity might not have been established yet. So
- * new nid values are fabricated above num_online_nodes() and
- * mapped back later to their true value.
- */
- /* MCD - This code is a bit complicated, but may be unnecessary now.
- * We can now handle much more interesting node-numbering.
- * The old requirement that 0 <= nid <= numnodes <= MAX_NUMNODES
- * and that there be no holes in the numbering 0..numnodes
- * has become simply 0 <= nid <= MAX_NUMNODES.
- */
- nid = 0;
- for_each_online_node(i) {
- if (test_bit(i, (void *) nodes_with_mem)) {
- /*
- * Save original nid value for numa_slit
- * fixup and node_cpuid reassignments.
- */
- node_flip[nid] = i;
-
- if (i == nid) {
- nid++;
- continue;
- }
-
- for (p = &node_memblk[0]; p < &node_memblk[num_node_memblks]; p++)
- if (p->nid == i)
- p->nid = nid;
-
- cpunid = nid;
- nid++;
- } else
- cpunid = MAX_NUMNODES;
-
- for (cpu = 0; cpu < NR_CPUS; cpu++)
- if (node_cpuid[cpu].nid == i) {
- /*
- * For nodes not being reassigned just
- * fix the cpu's nid and reverse pxm map
- */
- if (cpunid < MAX_NUMNODES) {
- pxm = nid_to_pxm_map[i];
- pxm_to_nid_map[pxm] =
- node_cpuid[cpu].nid = cpunid;
- continue;
- }
-
- /*
- * For nodes being reassigned, find best node by
- * numa_slit information and then make a temporary
- * nid value based on current nid and num_online_nodes().
- */
- slit = 0xff;
- k = 2*num_online_nodes();
- for_each_online_node(j) {
- if (i == j)
- continue;
- else if (test_bit(j, (void *) nodes_with_mem)) {
- cslit = numa_slit[i * num_online_nodes() + j];
- if (cslit < slit) {
- k = num_online_nodes() + j;
- slit = cslit;
- }
- }
- }
-
- /* save old nid map so we can update the pxm */
- old_nid_map[cpu] = node_cpuid[cpu].nid;
- node_cpuid[cpu].nid = k;
- }
- }
-
- /*
- * Fixup temporary nid values for CPU-only nodes.
- */
- for (cpu = 0; cpu < NR_CPUS; cpu++)
- if (node_cpuid[cpu].nid == (2*num_online_nodes())) {
- pxm = nid_to_pxm_map[old_nid_map[cpu]];
- pxm_to_nid_map[pxm] = node_cpuid[cpu].nid = nnode - 1;
- } else {
- for (i = 0; i < nnode; i++) {
- if (node_flip[i] != (node_cpuid[cpu].nid - num_online_nodes()))
- continue;
-
- pxm = nid_to_pxm_map[old_nid_map[cpu]];
- pxm_to_nid_map[pxm] = node_cpuid[cpu].nid = i;
- break;
- }
- }
-
- /*
- * Fix numa_slit by compressing from larger
- * nid array to reduced nid array.
- */
- for (i = 0; i < nnode; i++)
- for (j = 0; j < nnode; j++)
- numa_slit_fix[i * nnode + j] =
- numa_slit[node_flip[i] * num_online_nodes() + node_flip[j]];
-
- memcpy(numa_slit, numa_slit_fix, sizeof (numa_slit));
-
- nodes_clear(node_online_map);
- for (i = 0; i < nnode; i++)
- node_set_online(i);
-
- return;
-}
+static nodemask_t memory_less_mask __initdata;
/*
* To prevent cache aliasing effects, align per-node structures so that they
}
/**
- * early_nr_phys_cpus_node - return number of physical cpus on a given node
+ * early_nr_cpus_node - return number of cpus on a given node
* @node: node to check
*
- * Count the number of physical cpus on @node. These are cpus that actually
- * exist. We can't use nr_cpus_node() yet because
+ * Count the number of cpus on @node. We can't use nr_cpus_node() yet because
* acpi_boot_init() (which builds the node_to_cpu_mask array) hasn't been
- * called yet.
+ * called yet. Note that node 0 will also count all non-existent cpus.
*/
-static int early_nr_phys_cpus_node(int node)
+static int __init early_nr_cpus_node(int node)
{
int cpu, n = 0;
for (cpu = 0; cpu < NR_CPUS; cpu++)
if (node == node_cpuid[cpu].nid)
- if ((cpu == 0) || node_cpuid[cpu].phys_id)
- n++;
+ n++;
return n;
}
+/**
+ * compute_pernodesize - compute size of pernode data
+ * @node: the node id.
+ */
+static unsigned long __init compute_pernodesize(int node)
+{
+ unsigned long pernodesize = 0, cpus;
+
+ cpus = early_nr_cpus_node(node);
+ pernodesize += PERCPU_PAGE_SIZE * cpus;
+ pernodesize += node * L1_CACHE_BYTES;
+ pernodesize += L1_CACHE_ALIGN(sizeof(pg_data_t));
+ pernodesize += L1_CACHE_ALIGN(sizeof(struct ia64_node_data));
+ pernodesize = PAGE_ALIGN(pernodesize);
+ return pernodesize;
+}
/**
- * early_nr_cpus_node - return number of cpus on a given node
- * @node: node to check
+ * per_cpu_node_setup - setup per-cpu areas on each node
+ * @cpu_data: per-cpu area on this node
+ * @node: node to setup
*
- * Count the number of cpus on @node. We can't use nr_cpus_node() yet because
- * acpi_boot_init() (which builds the node_to_cpu_mask array) hasn't been
- * called yet. Note that node 0 will also count all non-existent cpus.
+ * Copy the static per-cpu data into the region we just set aside and then
+ * setup __per_cpu_offset for each CPU on this node. Return a pointer to
+ * the end of the area.
*/
-static int early_nr_cpus_node(int node)
+static void *per_cpu_node_setup(void *cpu_data, int node)
{
- int cpu, n = 0;
+#ifdef CONFIG_SMP
+ int cpu;
- for (cpu = 0; cpu < NR_CPUS; cpu++)
- if (node == node_cpuid[cpu].nid)
- n++;
+ for (cpu = 0; cpu < NR_CPUS; cpu++) {
+ if (node == node_cpuid[cpu].nid) {
+ memcpy(__va(cpu_data), __phys_per_cpu_start,
+ __per_cpu_end - __per_cpu_start);
+ __per_cpu_offset[cpu] = (char*)__va(cpu_data) -
+ __per_cpu_start;
+ cpu_data += PERCPU_PAGE_SIZE;
+ }
+ }
+#endif
+ return cpu_data;
+}
- return n;
+/**
+ * fill_pernode - initialize pernode data.
+ * @node: the node id.
+ * @pernode: physical address of pernode data
+ * @pernodesize: size of the pernode data
+ */
+static void __init fill_pernode(int node, unsigned long pernode,
+ unsigned long pernodesize)
+{
+ void *cpu_data;
+ int cpus = early_nr_cpus_node(node);
+ struct bootmem_data *bdp = &mem_data[node].bootmem_data;
+
+ mem_data[node].pernode_addr = pernode;
+ mem_data[node].pernode_size = pernodesize;
+ memset(__va(pernode), 0, pernodesize);
+
+ cpu_data = (void *)pernode;
+ pernode += PERCPU_PAGE_SIZE * cpus;
+ pernode += node * L1_CACHE_BYTES;
+
+ mem_data[node].pgdat = __va(pernode);
+ pernode += L1_CACHE_ALIGN(sizeof(pg_data_t));
+
+ mem_data[node].node_data = __va(pernode);
+ pernode += L1_CACHE_ALIGN(sizeof(struct ia64_node_data));
+
+ mem_data[node].pgdat->bdata = bdp;
+ pernode += L1_CACHE_ALIGN(sizeof(pg_data_t));
+
+ cpu_data = per_cpu_node_setup(cpu_data, node);
+
+ return;
}
/**
static int __init find_pernode_space(unsigned long start, unsigned long len,
int node)
{
- unsigned long epfn, cpu, cpus, phys_cpus;
+ unsigned long epfn;
unsigned long pernodesize = 0, pernode, pages, mapsize;
- void *cpu_data;
struct bootmem_data *bdp = &mem_data[node].bootmem_data;
epfn = (start + len) >> PAGE_SHIFT;
* Calculate total size needed, incl. what's necessary
* for good alignment and alias prevention.
*/
- cpus = early_nr_cpus_node(node);
- phys_cpus = early_nr_phys_cpus_node(node);
- pernodesize += PERCPU_PAGE_SIZE * cpus;
- pernodesize += node * L1_CACHE_BYTES;
- pernodesize += L1_CACHE_ALIGN(sizeof(pg_data_t));
- pernodesize += L1_CACHE_ALIGN(sizeof(struct ia64_node_data));
- pernodesize = PAGE_ALIGN(pernodesize);
+ pernodesize = compute_pernodesize(node);
pernode = NODEDATA_ALIGN(start, node);
/* Is this range big enough for what we want to store here? */
- if (start + len > (pernode + pernodesize + mapsize)) {
- mem_data[node].pernode_addr = pernode;
- mem_data[node].pernode_size = pernodesize;
- memset(__va(pernode), 0, pernodesize);
-
- cpu_data = (void *)pernode;
- pernode += PERCPU_PAGE_SIZE * cpus;
- pernode += node * L1_CACHE_BYTES;
-
- mem_data[node].pgdat = __va(pernode);
- pernode += L1_CACHE_ALIGN(sizeof(pg_data_t));
-
- mem_data[node].node_data = __va(pernode);
- pernode += L1_CACHE_ALIGN(sizeof(struct ia64_node_data));
-
- mem_data[node].pgdat->bdata = bdp;
- pernode += L1_CACHE_ALIGN(sizeof(pg_data_t));
-
- /*
- * Copy the static per-cpu data into the region we
- * just set aside and then setup __per_cpu_offset
- * for each CPU on this node.
- */
- for (cpu = 0; cpu < NR_CPUS; cpu++) {
- if (node == node_cpuid[cpu].nid) {
- memcpy(__va(cpu_data), __phys_per_cpu_start,
- __per_cpu_end - __per_cpu_start);
- __per_cpu_offset[cpu] = (char*)__va(cpu_data) -
- __per_cpu_start;
- cpu_data += PERCPU_PAGE_SIZE;
- }
- }
- }
+ if (start + len > (pernode + pernodesize + mapsize))
+ fill_pernode(node, pernode, pernodesize);
return 0;
}
for_each_online_node(node) {
pg_data_t *pdp = mem_data[node].pgdat;
+ if (node_isset(node, memory_less_mask))
+ continue;
+
bdp = pdp->bdata;
/* First the bootmem_map itself */
*/
static void __init initialize_pernode_data(void)
{
- int cpu, node;
pg_data_t *pgdat_list[MAX_NUMNODES];
+ int cpu, node;
for_each_online_node(node)
pgdat_list[node] = mem_data[node].pgdat;
memcpy(mem_data[node].node_data->pg_data_ptrs, pgdat_list,
sizeof(pgdat_list));
}
-
+#ifdef CONFIG_SMP
/* Set the node_data pointer for each per-cpu struct */
for (cpu = 0; cpu < NR_CPUS; cpu++) {
node = node_cpuid[cpu].nid;
per_cpu(cpu_info, cpu).node_data = mem_data[node].node_data;
}
+#else
+ {
+ struct cpuinfo_ia64 *cpu0_cpu_info;
+ cpu = 0;
+ node = node_cpuid[cpu].nid;
+ cpu0_cpu_info = (struct cpuinfo_ia64 *)(__phys_per_cpu_start +
+ ((char *)&per_cpu__cpu_info - __per_cpu_start));
+ cpu0_cpu_info->node_data = mem_data[node].node_data;
+ }
+#endif /* CONFIG_SMP */
+}
+
+/**
+ * memory_less_node_alloc - attempt to allocate memory on the best NUMA slit
+ * node, but fall back to any other node when __alloc_bootmem_node fails
+ * for the best node.
+ * @nid: node id
+ * @pernodesize: size of this node's pernode data
+ * @align: alignment to use for this node's pernode data
+ */
+static void __init *memory_less_node_alloc(int nid, unsigned long pernodesize,
+ unsigned long align)
+{
+ void *ptr = NULL;
+ u8 best = 0xff;
+ int bestnode = -1, node;
+
+ for_each_online_node(node) {
+ if (node_isset(node, memory_less_mask))
+ continue;
+ else if (node_distance(nid, node) < best) {
+ best = node_distance(nid, node);
+ bestnode = node;
+ }
+ }
+
+ ptr = __alloc_bootmem_node(mem_data[bestnode].pgdat,
+ pernodesize, align, __pa(MAX_DMA_ADDRESS));
+
+ if (!ptr)
+ panic("NO memory for memory less node\n");
+ return ptr;
+}
+
+/**
+ * pgdat_insert - insert the pgdat into global pgdat_list
+ * @pgdat: the pgdat for a node.
+ */
+static void __init pgdat_insert(pg_data_t *pgdat)
+{
+ pg_data_t *prev = NULL, *next;
+
+ for_each_pgdat(next)
+ if (pgdat->node_id < next->node_id)
+ break;
+ else
+ prev = next;
+
+ if (prev) {
+ prev->pgdat_next = pgdat;
+ pgdat->pgdat_next = next;
+ } else {
+ pgdat->pgdat_next = pgdat_list;
+ pgdat_list = pgdat;
+ }
+
+ return;
+}
+
+/**
+ * memory_less_nodes - allocate and initialize pernode data for CPU-only
+ * (memory-less) nodes.
+ */
+static void __init memory_less_nodes(void)
+{
+ unsigned long pernodesize;
+ void *pernode;
+ int node;
+
+ for_each_node_mask(node, memory_less_mask) {
+ pernodesize = compute_pernodesize(node);
+ pernode = memory_less_node_alloc(node, pernodesize,
+ (node) ? (node * PERCPU_PAGE_SIZE) : (1024*1024));
+ fill_pernode(node, __pa(pernode), pernodesize);
+ }
+
+ return;
}
/**
node_set_online(0);
}
+ nodes_or(memory_less_mask, memory_less_mask, node_online_map);
min_low_pfn = -1;
max_low_pfn = 0;
- if (num_online_nodes() > 1)
- reassign_cpu_only_nodes();
-
/* These actually end up getting called by call_pernode_memory() */
efi_memmap_walk(filter_rsvd_memory, build_node_maps);
efi_memmap_walk(filter_rsvd_memory, find_pernode_space);
+ for_each_online_node(node)
+ if (mem_data[node].bootmem_data.node_low_pfn) {
+ node_clear(node, memory_less_mask);
+ mem_data[node].min_pfn = ~0UL;
+ }
/*
* Initialize the boot memory maps in reverse order since that's
* what the bootmem allocator expects
if (!node_online(node))
continue;
+ else if (node_isset(node, memory_less_mask))
+ continue;
bdp = &mem_data[node].bootmem_data;
pernode = mem_data[node].pernode_addr;
pernodesize = mem_data[node].pernode_size;
map = pernode + pernodesize;
- /* Sanity check... */
- if (!pernode)
- panic("pernode space for node %d "
- "could not be allocated!", node);
-
init_bootmem_node(mem_data[node].pgdat,
map>>PAGE_SHIFT,
bdp->node_boot_start>>PAGE_SHIFT,
efi_memmap_walk(filter_rsvd_memory, free_node_bootmem);
reserve_pernode_space();
+ memory_less_nodes();
initialize_pernode_data();
max_pfn = max_low_pfn;
find_initrd();
}
+#ifdef CONFIG_SMP
/**
* per_cpu_init - setup per-cpu variables
*
{
int cpu;
- if (smp_processor_id() == 0) {
- for (cpu = 0; cpu < NR_CPUS; cpu++) {
- per_cpu(local_per_cpu_offset, cpu) =
- __per_cpu_offset[cpu];
- }
- }
+ if (smp_processor_id() != 0)
+ return __per_cpu_start + __per_cpu_offset[smp_processor_id()];
+
+ for (cpu = 0; cpu < NR_CPUS; cpu++)
+ per_cpu(local_per_cpu_offset, cpu) = __per_cpu_offset[cpu];
return __per_cpu_start + __per_cpu_offset[smp_processor_id()];
}
+#endif /* CONFIG_SMP */
/**
* show_mem - give short summary of memory stats
max_dma = virt_to_phys((void *) MAX_DMA_ADDRESS) >> PAGE_SHIFT;
- /* so min() will work in count_node_pages */
- for_each_online_node(node)
- mem_data[node].min_pfn = ~0UL;
-
efi_memmap_walk(filter_rsvd_memory, count_node_pages);
+ vmalloc_end -= PAGE_ALIGN(max_low_pfn * sizeof(struct page));
+ vmem_map = (struct page *) vmalloc_end;
+ efi_memmap_walk(create_mem_map_page_table, NULL);
+ printk("Virtual mem_map starts at 0x%p\n", vmem_map);
+
for_each_online_node(node) {
memset(zones_size, 0, sizeof(zones_size));
memset(zholes_size, 0, sizeof(zholes_size));
mem_data[node].num_dma_physpages);
}
- if (node == 0) {
- vmalloc_end -=
- PAGE_ALIGN(max_low_pfn * sizeof(struct page));
- vmem_map = (struct page *) vmalloc_end;
-
- efi_memmap_walk(create_mem_map_page_table, NULL);
- printk("Virtual mem_map starts at 0x%p\n", vmem_map);
- }
-
pfn_offset = mem_data[node].min_pfn;
NODE_DATA(node)->node_mem_map = vmem_map + pfn_offset;
pfn_offset, zholes_size);
}
+ /*
+ * Add the pgdats of the memory-less nodes to the global pgdat list.
+ */
+ for_each_node_mask(node, memory_less_mask)
+ pgdat_insert(mem_data[node].pgdat);
+
zero_page_memmap_ptr = virt_to_page(ia64_imva(empty_zero_page));
}
kclist_add(&kcore_kernel, _stext, _end - _stext);
for_each_pgdat(pgdat)
- totalram_pages += free_all_bootmem_node(pgdat);
+ if (pgdat->bdata->node_bootmem_map)
+ totalram_pages += free_all_bootmem_node(pgdat);
reserved_pages = 0;
efi_memmap_walk(count_reserved_pages, &reserved_pages);
#ifndef _ASM_IA64_SN_XTALK_HUBDEV_H
#define _ASM_IA64_SN_XTALK_HUBDEV_H
+#include "xtalk/xwidgetdev.h"
+
#define HUB_WIDGET_ID_MAX 0xf
#define DEV_PER_WIDGET (2*2*8)
#define IIO_ITTE_WIDGET_BITS 4 /* size of widget field */
#include <linux/bootmem.h>
#include <linux/nodemask.h>
#include <asm/sn/types.h>
-#include <asm/sn/sn_sal.h>
#include <asm/sn/addrs.h>
-#include <asm/sn/pcibus_provider_defs.h>
-#include <asm/sn/pcidev.h>
-#include "pci/pcibr_provider.h"
-#include "xtalk/xwidgetdev.h"
#include <asm/sn/geo.h>
-#include "xtalk/hubdev.h"
#include <asm/sn/io.h>
+#include <asm/sn/pcibr_provider.h>
+#include <asm/sn/pcibus_provider_defs.h>
+#include <asm/sn/pcidev.h>
#include <asm/sn/simulator.h>
+#include <asm/sn/sn_sal.h>
#include <asm/sn/tioca_provider.h>
+#include "xtalk/hubdev.h"
+#include "xtalk/xwidgetdev.h"
-char master_baseio_wid;
nasid_t master_nasid = INVALID_NASID; /* Partition Master */
+static struct list_head sn_sysdata_list;
+
+/* sysdata list struct */
+struct sysdata_el {
+ struct list_head entry;
+ void *sysdata;
+};
+
struct slab_info {
struct hubdev_info hubdev;
};
return ret_stuff.v0;
}
-/*
- * sn_alloc_pci_sysdata() - This routine allocates a pci controller
- * which is expected as the pci_dev and pci_bus sysdata by the Linux
- * PCI infrastructure.
- */
-static inline struct pci_controller *sn_alloc_pci_sysdata(void)
-{
- struct pci_controller *pci_sysdata;
-
- pci_sysdata = kmalloc(sizeof(*pci_sysdata), GFP_KERNEL);
- if (!pci_sysdata)
- BUG();
-
- memset(pci_sysdata, 0, sizeof(*pci_sysdata));
- return pci_sysdata;
-}
-
/*
 * sn_fixup_ionodes() - This routine initializes the HUB data structure for
* each node in the system.
}
+void sn_pci_unfixup_slot(struct pci_dev *dev)
+{
+ struct pci_dev *host_pci_dev = SN_PCIDEV_INFO(dev)->host_pci_dev;
+
+ sn_irq_unfixup(dev);
+ pci_dev_put(host_pci_dev);
+ pci_dev_put(dev);
+}
+
/*
* sn_pci_fixup_slot() - This routine sets up a slot's resources
* consistent with the Linux PCI abstraction layer. Resources acquired
* from our PCI provider include PIO maps to BAR space and interrupt
* objects.
*/
-static void sn_pci_fixup_slot(struct pci_dev *dev)
+void sn_pci_fixup_slot(struct pci_dev *dev)
{
int idx;
int segment = 0;
- uint64_t size;
- struct sn_irq_info *sn_irq_info;
- struct pci_dev *host_pci_dev;
int status = 0;
struct pcibus_bussoft *bs;
+ struct pci_bus *host_pci_bus;
+ struct pci_dev *host_pci_dev;
+ struct sn_irq_info *sn_irq_info;
+ unsigned long size;
+ unsigned int bus_no, devfn;
+ pci_dev_get(dev); /* for the sysdata pointer */
dev->sysdata = kmalloc(sizeof(struct pcidev_info), GFP_KERNEL);
if (SN_PCIDEV_INFO(dev) <= 0)
BUG(); /* Cannot afford to run out of memory */
(u64) __pa(SN_PCIDEV_INFO(dev)),
(u64) __pa(sn_irq_info));
if (status)
- BUG(); /* Cannot get platform pci device information information */
+ BUG(); /* Cannot get platform pci device information */
/* Copy over PIO Mapped Addresses */
for (idx = 0; idx <= PCI_ROM_RESOURCE; idx++) {
dev->resource[idx].parent = &iomem_resource;
}
- /* set up host bus linkages */
- bs = SN_PCIBUS_BUSSOFT(dev->bus);
- host_pci_dev =
- pci_find_slot(SN_PCIDEV_INFO(dev)->pdi_slot_host_handle >> 32,
- SN_PCIDEV_INFO(dev)->
- pdi_slot_host_handle & 0xffffffff);
+ /*
+ * Using the PROM's values for the PCI host bus, get the Linux
+ * PCI host_pci_dev struct and set up host bus linkages
+ */
+
+ bus_no = SN_PCIDEV_INFO(dev)->pdi_slot_host_handle >> 32;
+ devfn = SN_PCIDEV_INFO(dev)->pdi_slot_host_handle & 0xffffffff;
+ host_pci_bus = pci_find_bus(pci_domain_nr(dev->bus), bus_no);
+ host_pci_dev = pci_get_slot(host_pci_bus, devfn);
+
+ SN_PCIDEV_INFO(dev)->host_pci_dev = host_pci_dev;
SN_PCIDEV_INFO(dev)->pdi_host_pcidev_info =
- SN_PCIDEV_INFO(host_pci_dev);
+ SN_PCIDEV_INFO(host_pci_dev);
SN_PCIDEV_INFO(dev)->pdi_linux_pcidev = dev;
+ bs = SN_PCIBUS_BUSSOFT(dev->bus);
SN_PCIDEV_INFO(dev)->pdi_pcibus_info = bs;
if (bs && bs->bs_asic_type < PCIIO_ASIC_MAX_TYPES) {
SN_PCIDEV_INFO(dev)->pdi_sn_irq_info = sn_irq_info;
dev->irq = SN_PCIDEV_INFO(dev)->pdi_sn_irq_info->irq_irq;
sn_irq_fixup(dev, sn_irq_info);
+ } else {
+ SN_PCIDEV_INFO(dev)->pdi_sn_irq_info = NULL;
+ kfree(sn_irq_info);
}
}
* sn_pci_controller_fixup() - This routine sets up a bus's resources
* consistent with the Linux PCI abstraction layer.
*/
-static void sn_pci_controller_fixup(int segment, int busnum)
+void sn_pci_controller_fixup(int segment, int busnum, struct pci_bus *bus)
{
int status = 0;
int nasid, cnode;
- struct pci_bus *bus;
struct pci_controller *controller;
struct pcibus_bussoft *prom_bussoft_ptr;
struct hubdev_info *hubdev_info;
void *provider_soft;
struct sn_pcibus_provider *provider;
- status =
- sal_get_pcibus_info((u64) segment, (u64) busnum,
- (u64) ia64_tpa(&prom_bussoft_ptr));
- if (status > 0) {
- return; /* bus # does not exist */
- }
-
+ status = sal_get_pcibus_info((u64) segment, (u64) busnum,
+ (u64) ia64_tpa(&prom_bussoft_ptr));
+ if (status > 0)
+ return; /* bus # does not exist */
prom_bussoft_ptr = __va(prom_bussoft_ptr);
- controller = sn_alloc_pci_sysdata();
- /* controller non-zero is BUG'd in sn_alloc_pci_sysdata */
- bus = pci_scan_bus(busnum, &pci_root_ops, controller);
+ controller = kcalloc(1, sizeof(struct pci_controller), GFP_KERNEL);
+ if (!controller)
+ BUG();
+
if (bus == NULL) {
- return; /* error, or bus already scanned */
+ bus = pci_scan_bus(busnum, &pci_root_ops, controller);
+ if (bus == NULL)
+ return; /* error, or bus already scanned */
+ bus->sysdata = NULL;
}
+ if (bus->sysdata)
+ goto error_return; /* sysdata already alloc'd */
+
/*
* Per-provider fixup. Copies the contents from prom to local
* area and links SN_PCIBUS_BUSSOFT().
*/
- if (prom_bussoft_ptr->bs_asic_type >= PCIIO_ASIC_MAX_TYPES) {
+ if (prom_bussoft_ptr->bs_asic_type >= PCIIO_ASIC_MAX_TYPES)
return; /* unsupported asic type */
- }
+
+ if (prom_bussoft_ptr->bs_asic_type == PCIIO_ASIC_TYPE_PPB)
+ goto error_return; /* no further fixup necessary */
provider = sn_pci_provider[prom_bussoft_ptr->bs_asic_type];
- if (provider == NULL) {
+ if (provider == NULL)
return; /* no provider registered for this asic */
- }
provider_soft = NULL;
- if (provider->bus_fixup) {
+ if (provider->bus_fixup)
provider_soft = (*provider->bus_fixup) (prom_bussoft_ptr);
- }
- if (provider_soft == NULL) {
+ if (provider_soft == NULL)
return; /* fixup failed or not applicable */
- }
/*
* Generic bus fixup goes here. Don't reference prom_bussoft_ptr
bus->sysdata = controller;
PCI_CONTROLLER(bus)->platform_data = provider_soft;
-
nasid = NASID_GET(SN_PCIBUS_BUSSOFT(bus)->bs_base);
cnode = nasid_to_cnodeid(nasid);
hubdev_info = (struct hubdev_info *)(NODEPDA(cnode)->pdinfo);
SN_PCIBUS_BUSSOFT(bus)->bs_xwidget_info =
&(hubdev_info->hdi_xwidget_info[SN_PCIBUS_BUSSOFT(bus)->bs_xid]);
+
+ return;
+
+error_return:
+
+ kfree(controller);
+ return;
+}
+
+void sn_bus_store_sysdata(struct pci_dev *dev)
+{
+ struct sysdata_el *element;
+
+ element = kcalloc(1, sizeof(struct sysdata_el), GFP_KERNEL);
+ if (!element) {
+ dev_dbg(dev, "%s: out of memory!\n", __FUNCTION__);
+ return;
+ }
+ element->sysdata = dev->sysdata;
+ list_add(&element->entry, &sn_sysdata_list);
+}
+
+void sn_bus_free_sysdata(void)
+{
+ struct sysdata_el *element;
+ struct list_head *list;
+
+sn_sysdata_free_start:
+ list_for_each(list, &sn_sysdata_list) {
+ element = list_entry(list, struct sysdata_el, entry);
+ list_del(&element->entry);
+ kfree(element->sysdata);
+ kfree(element);
+ goto sn_sysdata_free_start;
+ }
+ return;
}
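
The drain loop above deletes the head element and restarts from the top until the list is empty. The kernel's list_for_each_entry_safe() iterator achieves the same drain in a single pass by caching the next entry before the current one is freed; a minimal equivalent sketch (hypothetical function name, not part of the patch):

static void sn_bus_free_sysdata_onepass(void)
{
	struct sysdata_el *element, *next;

	/* 'next' holds the following entry, so the current one can be
	 * unlinked and freed inside the loop body without breaking the walk. */
	list_for_each_entry_safe(element, next, &sn_sysdata_list, entry) {
		list_del(&element->entry);
		kfree(element->sysdata);
		kfree(element);
	}
}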
/*
*/
ia64_max_iommu_merge_mask = ~PAGE_MASK;
sn_fixup_ionodes();
- sn_irq = kmalloc(sizeof(struct sn_irq_info *) * NR_IRQS, GFP_KERNEL);
- if (sn_irq <= 0)
- BUG(); /* Canno afford to run out of memory. */
- memset(sn_irq, 0, sizeof(struct sn_irq_info *) * NR_IRQS);
-
+ sn_irq_lh_init();
+ INIT_LIST_HEAD(&sn_sysdata_list);
sn_init_cpei_timer();
#ifdef CONFIG_PROC_FS
register_sn_procfs();
#endif
- for (i = 0; i < PCI_BUSES_TO_SCAN; i++) {
- sn_pci_controller_fixup(0, i);
- }
+ /* busses are not known yet ... */
+ for (i = 0; i < PCI_BUSES_TO_SCAN; i++)
+ sn_pci_controller_fixup(0, i, NULL);
/*
* Generic Linux PCI Layer has created the pci_bus and pci_dev
*/
while ((pci_dev =
- pci_find_device(PCI_ANY_ID, PCI_ANY_ID, pci_dev)) != NULL) {
+ pci_get_device(PCI_ANY_ID, PCI_ANY_ID, pci_dev)) != NULL)
sn_pci_fixup_slot(pci_dev);
- }
sn_ioif_inited = 1; /* sn I/O infrastructure now initialized */
}
subsys_initcall(sn_pci_init);
+EXPORT_SYMBOL(sn_pci_fixup_slot);
+EXPORT_SYMBOL(sn_pci_unfixup_slot);
+EXPORT_SYMBOL(sn_pci_controller_fixup);
+EXPORT_SYMBOL(sn_bus_store_sysdata);
+EXPORT_SYMBOL(sn_bus_free_sysdata);
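
These exports exist so that code outside this file (the SN hotplug support this patch prepares for) can drive the fixup/unfixup paths. A hypothetical consumer sketch, heavily simplified and not taken from this patch, showing how a slot-removal path might pair the helpers:

/* Hypothetical hot-remove outline; real hotplug code must also walk all
 * functions on the slot and handle bridges. */
static void example_slot_remove(struct pci_dev *dev)
{
	sn_bus_store_sysdata(dev);	/* remember sysdata so it can be freed later */
	sn_pci_unfixup_slot(dev);	/* drop the IRQ setup and host-bus references */
	pci_remove_bus_device(dev);	/* return the device to the PCI core */
}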
*/
#include <linux/irq.h>
-#include <asm/sn/intr.h>
+#include <linux/spinlock.h>
#include <asm/sn/addrs.h>
#include <asm/sn/arch.h>
-#include "xtalk/xwidgetdev.h"
+#include <asm/sn/intr.h>
+#include <asm/sn/pcibr_provider.h>
#include <asm/sn/pcibus_provider_defs.h>
#include <asm/sn/pcidev.h>
-#include "pci/pcibr_provider.h"
#include <asm/sn/shub_mmr.h>
#include <asm/sn/sn_sal.h>
extern int sn_force_interrupt_flag;
extern int sn_ioif_inited;
-struct sn_irq_info **sn_irq;
+static struct list_head **sn_irq_lh;
+static spinlock_t sn_irq_info_lock = SPIN_LOCK_UNLOCKED; /* non-IRQ lock */
static inline uint64_t sn_intr_alloc(nasid_t local_nasid, int local_widget,
u64 sn_irq_info,
nasid = get_nasid();
event_occurred = HUB_L((uint64_t *) GLOBAL_MMR_ADDR
(nasid, SH_EVENT_OCCURRED));
- /* If the UART bit is set here, we may have received an
+ /* If the UART bit is set here, we may have received an
* interrupt from the UART that the driver missed. To
* make sure, we IPI ourselves to force us to look again.
*/
force_interrupt(irq);
}
+static void sn_irq_info_free(struct rcu_head *head);
+
static void sn_set_affinity_irq(unsigned int irq, cpumask_t mask)
{
- struct sn_irq_info *sn_irq_info = sn_irq[irq];
- struct sn_irq_info *tmp_sn_irq_info;
+ struct sn_irq_info *sn_irq_info, *sn_irq_info_safe;
int cpuid, cpuphys;
- nasid_t t_nasid; /* nasid to target */
- int t_slice; /* slice to target */
-
- /* allocate a temp sn_irq_info struct to get new target info */
- tmp_sn_irq_info = kmalloc(sizeof(*tmp_sn_irq_info), GFP_KERNEL);
- if (!tmp_sn_irq_info)
- return;
cpuid = first_cpu(mask);
cpuphys = cpu_physical_id(cpuid);
- t_nasid = cpuid_to_nasid(cpuid);
- t_slice = cpuid_to_slice(cpuid);
- while (sn_irq_info) {
- int status;
- int local_widget;
- uint64_t bridge = (uint64_t) sn_irq_info->irq_bridge;
- nasid_t local_nasid = NASID_GET(bridge);
+ list_for_each_entry_safe(sn_irq_info, sn_irq_info_safe,
+ sn_irq_lh[irq], list) {
+ uint64_t bridge;
+ int local_widget, status;
+ nasid_t local_nasid;
+ struct sn_irq_info *new_irq_info;
+
+ new_irq_info = kmalloc(sizeof(struct sn_irq_info), GFP_ATOMIC);
+ if (new_irq_info == NULL)
+ break;
+ memcpy(new_irq_info, sn_irq_info, sizeof(struct sn_irq_info));
+
+ bridge = (uint64_t) new_irq_info->irq_bridge;
+ if (!bridge) {
+ kfree(new_irq_info);
+ break; /* irq is not a device interrupt */
+ }
- if (!bridge)
- break; /* irq is not a device interrupt */
+ local_nasid = NASID_GET(bridge);
if (local_nasid & 1)
local_widget = TIO_SWIN_WIDGETNUM(bridge);
else
local_widget = SWIN_WIDGETNUM(bridge);
- /* Free the old PROM sn_irq_info structure */
- sn_intr_free(local_nasid, local_widget, sn_irq_info);
+ /* Free the old PROM new_irq_info structure */
+ sn_intr_free(local_nasid, local_widget, new_irq_info);
+ /* Update kernel's new_irq_info with new target info */
+ unregister_intr_pda(new_irq_info);
- /* allocate a new PROM sn_irq_info struct */
+ /* allocate a new PROM new_irq_info struct */
status = sn_intr_alloc(local_nasid, local_widget,
- __pa(tmp_sn_irq_info), irq, t_nasid,
- t_slice);
-
- if (status == 0) {
- /* Update kernels sn_irq_info with new target info */
- unregister_intr_pda(sn_irq_info);
- sn_irq_info->irq_cpuid = cpuid;
- sn_irq_info->irq_nasid = t_nasid;
- sn_irq_info->irq_slice = t_slice;
- sn_irq_info->irq_xtalkaddr =
- tmp_sn_irq_info->irq_xtalkaddr;
- sn_irq_info->irq_cookie = tmp_sn_irq_info->irq_cookie;
- register_intr_pda(sn_irq_info);
-
- if (IS_PCI_BRIDGE_ASIC(sn_irq_info->irq_bridge_type)) {
- pcibr_change_devices_irq(sn_irq_info);
- }
+ __pa(new_irq_info), irq,
+ cpuid_to_nasid(cpuid),
+ cpuid_to_slice(cpuid));
+
+ /* SAL call failed */
+ if (status) {
+ kfree(new_irq_info);
+ break;
+ }
+
+ new_irq_info->irq_cpuid = cpuid;
+ register_intr_pda(new_irq_info);
+
+ if (IS_PCI_BRIDGE_ASIC(new_irq_info->irq_bridge_type))
+ pcibr_change_devices_irq(new_irq_info);
- sn_irq_info = sn_irq_info->irq_next;
+ spin_lock(&sn_irq_info_lock);
+ list_replace_rcu(&sn_irq_info->list, &new_irq_info->list);
+ spin_unlock(&sn_irq_info_lock);
+ call_rcu(&sn_irq_info->rcu, sn_irq_info_free);
#ifdef CONFIG_SMP
- set_irq_affinity_info((irq & 0xff), cpuphys, 0);
+ set_irq_affinity_info((irq & 0xff), cpuphys, 0);
#endif
- } else {
- break; /* snp_affinity failed the intr_alloc */
- }
}
- kfree(tmp_sn_irq_info);
}
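
The rewritten affinity path follows the usual RCU copy-update discipline: allocate a copy of the entry, publish it with list_replace_rcu() while holding the writer-side spinlock, and defer freeing the old entry through call_rcu() so that readers traversing the list under rcu_read_lock() never see it disappear underneath them. A stripped-down sketch of that pattern with hypothetical names (not SN-specific):

#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct item {
	struct list_head list;
	struct rcu_head rcu;
	int data;
};

static void item_free_rcu(struct rcu_head *head)
{
	kfree(container_of(head, struct item, rcu));
}

/* Replace 'old' with an updated copy; 'lock' serializes writers only. */
static int item_update(struct item *old, int new_data, spinlock_t *lock)
{
	struct item *new = kmalloc(sizeof(*new), GFP_ATOMIC);

	if (!new)
		return -ENOMEM;
	memcpy(new, old, sizeof(*new));
	new->data = new_data;

	spin_lock(lock);
	list_replace_rcu(&old->list, &new->list);	/* readers switch atomically */
	spin_unlock(lock);

	call_rcu(&old->rcu, item_free_rcu);	/* old entry freed after a grace period */
	return 0;
}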
struct hw_interrupt_type irq_type_sn = {
- "SN hub",
- sn_startup_irq,
- sn_shutdown_irq,
- sn_enable_irq,
- sn_disable_irq,
- sn_ack_irq,
- sn_end_irq,
- sn_set_affinity_irq
+ .typename = "SN hub",
+ .startup = sn_startup_irq,
+ .shutdown = sn_shutdown_irq,
+ .enable = sn_enable_irq,
+ .disable = sn_disable_irq,
+ .ack = sn_ack_irq,
+ .end = sn_end_irq,
+ .set_affinity = sn_set_affinity_irq
};
unsigned int sn_local_vector_to_irq(u8 vector)
struct sn_irq_info *tmp_irq_info;
int i, foundmatch;
+ rcu_read_lock();
if (pdacpu(cpu)->sn_last_irq == irq) {
foundmatch = 0;
- for (i = pdacpu(cpu)->sn_last_irq - 1; i; i--) {
- tmp_irq_info = sn_irq[i];
- while (tmp_irq_info) {
+ for (i = pdacpu(cpu)->sn_last_irq - 1;
+ i && !foundmatch; i--) {
+ list_for_each_entry_rcu(tmp_irq_info,
+ sn_irq_lh[i],
+ list) {
if (tmp_irq_info->irq_cpuid == cpu) {
- foundmatch++;
+ foundmatch = 1;
break;
}
- tmp_irq_info = tmp_irq_info->irq_next;
- }
- if (foundmatch) {
- break;
}
}
pdacpu(cpu)->sn_last_irq = i;
if (pdacpu(cpu)->sn_first_irq == irq) {
foundmatch = 0;
- for (i = pdacpu(cpu)->sn_first_irq + 1; i < NR_IRQS; i++) {
- tmp_irq_info = sn_irq[i];
- while (tmp_irq_info) {
+ for (i = pdacpu(cpu)->sn_first_irq + 1;
+ i < NR_IRQS && !foundmatch; i++) {
+ list_for_each_entry_rcu(tmp_irq_info,
+ sn_irq_lh[i],
+ list) {
if (tmp_irq_info->irq_cpuid == cpu) {
- foundmatch++;
+ foundmatch = 1;
break;
}
- tmp_irq_info = tmp_irq_info->irq_next;
- }
- if (foundmatch) {
- break;
}
}
pdacpu(cpu)->sn_first_irq = ((i == NR_IRQS) ? 0 : i);
}
+ rcu_read_unlock();
}
-struct sn_irq_info *sn_irq_alloc(nasid_t local_nasid, int local_widget, int irq,
- nasid_t nasid, int slice)
+static void sn_irq_info_free(struct rcu_head *head)
{
struct sn_irq_info *sn_irq_info;
- int status;
-
- sn_irq_info = kmalloc(sizeof(*sn_irq_info), GFP_KERNEL);
- if (sn_irq_info == NULL)
- return NULL;
-
- memset(sn_irq_info, 0x0, sizeof(*sn_irq_info));
-
- status =
- sn_intr_alloc(local_nasid, local_widget, __pa(sn_irq_info), irq,
- nasid, slice);
-
- if (status) {
- kfree(sn_irq_info);
- return NULL;
- } else {
- return sn_irq_info;
- }
-}
-
-void sn_irq_free(struct sn_irq_info *sn_irq_info)
-{
- uint64_t bridge = (uint64_t) sn_irq_info->irq_bridge;
- nasid_t local_nasid = NASID_GET(bridge);
- int local_widget;
-
- if (local_nasid & 1) /* tio check */
- local_widget = TIO_SWIN_WIDGETNUM(bridge);
- else
- local_widget = SWIN_WIDGETNUM(bridge);
-
- sn_intr_free(local_nasid, local_widget, sn_irq_info);
+ sn_irq_info = container_of(head, struct sn_irq_info, rcu);
kfree(sn_irq_info);
}
int slice = sn_irq_info->irq_slice;
int cpu = nasid_slice_to_cpuid(nasid, slice);
+ pci_dev_get(pci_dev);
sn_irq_info->irq_cpuid = cpu;
sn_irq_info->irq_pciioinfo = SN_PCIDEV_INFO(pci_dev);
/* link it into the sn_irq[irq] list */
- sn_irq_info->irq_next = sn_irq[sn_irq_info->irq_irq];
- sn_irq[sn_irq_info->irq_irq] = sn_irq_info;
+ spin_lock(&sn_irq_info_lock);
+ list_add_rcu(&sn_irq_info->list, sn_irq_lh[sn_irq_info->irq_irq]);
+ spin_unlock(&sn_irq_info_lock);
(void)register_intr_pda(sn_irq_info);
}
+void sn_irq_unfixup(struct pci_dev *pci_dev)
+{
+ struct sn_irq_info *sn_irq_info;
+
+ /* Only cleanup IRQ stuff if this device has a host bus context */
+ if (!SN_PCIDEV_BUSSOFT(pci_dev))
+ return;
+
+ sn_irq_info = SN_PCIDEV_INFO(pci_dev)->pdi_sn_irq_info;
+ if (!sn_irq_info || !sn_irq_info->irq_irq) {
+ kfree(sn_irq_info);
+ return;
+ }
+
+ unregister_intr_pda(sn_irq_info);
+ spin_lock(&sn_irq_info_lock);
+ list_del_rcu(&sn_irq_info->list);
+ spin_unlock(&sn_irq_info_lock);
+ call_rcu(&sn_irq_info->rcu, sn_irq_info_free);
+ pci_dev_put(pci_dev);
+}
+
static void force_interrupt(int irq)
{
struct sn_irq_info *sn_irq_info;
if (!sn_ioif_inited)
return;
- sn_irq_info = sn_irq[irq];
- while (sn_irq_info) {
+
+ rcu_read_lock();
+ list_for_each_entry_rcu(sn_irq_info, sn_irq_lh[irq], list) {
if (IS_PCI_BRIDGE_ASIC(sn_irq_info->irq_bridge_type) &&
- (sn_irq_info->irq_bridge != NULL)) {
+ (sn_irq_info->irq_bridge != NULL))
pcibr_force_interrupt(sn_irq_info);
- }
- sn_irq_info = sn_irq_info->irq_next;
}
+ rcu_read_unlock();
}
/*
void sn_lb_int_war_check(void)
{
+ struct sn_irq_info *sn_irq_info;
int i;
if (!sn_ioif_inited || pda->sn_first_irq == 0)
return;
+
+ rcu_read_lock();
for (i = pda->sn_first_irq; i <= pda->sn_last_irq; i++) {
- struct sn_irq_info *sn_irq_info = sn_irq[i];
- while (sn_irq_info) {
- /* Only call for PCI bridges that are fully initialized. */
+ list_for_each_entry_rcu(sn_irq_info, sn_irq_lh[i], list) {
+ /*
+ * Only call for PCI bridges that are fully
+ * initialized.
+ */
if (IS_PCI_BRIDGE_ASIC(sn_irq_info->irq_bridge_type) &&
- (sn_irq_info->irq_bridge != NULL)) {
+ (sn_irq_info->irq_bridge != NULL))
sn_check_intr(i, sn_irq_info);
- }
- sn_irq_info = sn_irq_info->irq_next;
}
}
+ rcu_read_unlock();
+}
+
+void sn_irq_lh_init(void)
+{
+ int i;
+
+ sn_irq_lh = kmalloc(sizeof(struct list_head *) * NR_IRQS, GFP_KERNEL);
+ if (!sn_irq_lh)
+ panic("SN PCI INIT: Failed to allocate memory for PCI init\n");
+
+ for (i = 0; i < NR_IRQS; i++) {
+ sn_irq_lh[i] = kmalloc(sizeof(struct list_head), GFP_KERNEL);
+ if (!sn_irq_lh[i])
+ panic("SN PCI INIT: Failed IRQ memory allocation\n");
+
+ INIT_LIST_HEAD(sn_irq_lh[i]);
+ }
+
}
{
long status, ticks_per_sec, drift;
int pxm;
- int major = sn_sal_rev_major(), minor = sn_sal_rev_minor();
+ u32 version = sn_sal_rev();
extern void sn_cpu_init(void);
ia64_sn_plat_set_error_handling_features();
* support here so we don't have to listen to failed keyboard probe
* messages.
*/
- if ((major < 2 || (major == 2 && minor <= 9)) &&
- acpi_kbd_controller_present) {
+ if (version <= 0x0209 && acpi_kbd_controller_present) {
printk(KERN_INFO "Disabling legacy keyboard support as prom "
"is too old and doesn't provide FADT\n");
acpi_kbd_controller_present = 0;
}
- printk("SGI SAL version %x.%02x\n", major, minor);
+ printk("SGI SAL version %x.%02x\n", version >> 8, version & 0x00FF);
/*
* Confirm the SAL we're running on is recent enough...
*/
- if ((major < SN_SAL_MIN_MAJOR) || (major == SN_SAL_MIN_MAJOR &&
- minor < SN_SAL_MIN_MINOR)) {
+ if (version < SN_SAL_MIN_VERSION) {
printk(KERN_ERR "This kernel needs SGI SAL version >= "
- "%x.%02x\n", SN_SAL_MIN_MAJOR, SN_SAL_MIN_MINOR);
+ "%x.%02x\n", SN_SAL_MIN_VERSION >> 8,
+ SN_SAL_MIN_VERSION & 0x00FF);
panic("PROM version too old\n");
}
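
Both version tests above rely on sn_sal_rev() packing the PROM revision into a single value, assumed to be (major << 8) | minor, so that an ordinary integer comparison orders revisions correctly and matches the old major/minor logic (2.09 packs to 0x0209, 4.06 to 0x0406). A small illustration of that assumption:

/* Illustration only: how the old (major, minor) checks map onto the packed form. */
static inline u32 pack_sal_rev(u8 major, u8 minor)
{
	return ((u32)major << 8) | minor;
}

/*
 * pack_sal_rev(2, 0x09) == 0x0209, so "version <= 0x0209" is "2.09 or older";
 * pack_sal_rev(4, 0x06) == 0x0406 matches the "prom rev 4.06 or greater" test below.
 */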
#include <linux/proc_fs.h>
#include <linux/device.h>
#include <linux/delay.h>
+#include <asm/system.h>
#include <asm/uaccess.h>
#include <asm/sn/sn_sal.h>
#include <asm/sn/addrs.h>
cnodeid_t cnodeid;
int found_tiocx_device = 0;
+ if (!ia64_platform_is("sn2"))
+ return -ENODEV;
+
bus_register(&tiocx_bus_type);
for (cnodeid = 0; cnodeid < MAX_COMPACT_NODES; cnodeid++) {
#include <linux/module.h>
#include <asm/dma.h>
-#include <asm/sn/sn_sal.h>
+#include <asm/sn/pcibr_provider.h>
#include <asm/sn/pcibus_provider_defs.h>
#include <asm/sn/pcidev.h>
+#include <asm/sn/sn_sal.h>
#define SG_ENT_VIRT_ADDRESS(sg) (page_address((sg)->page) + (sg)->offset)
#define SG_ENT_PHYS_ADDRESS(SG) virt_to_phys(SG_ENT_VIRT_ADDRESS(SG))
#include <linux/types.h>
#include <asm/sn/sn_sal.h>
+#include <asm/sn/pcibr_provider.h>
#include <asm/sn/pcibus_provider_defs.h>
#include <asm/sn/pcidev.h>
-#include "pci/pcibr_provider.h"
int pcibr_invalidate_ate = 0; /* by default don't invalidate ATE on free */
#include <linux/types.h>
#include <linux/pci.h>
-#include <asm/sn/sn_sal.h>
+#include <asm/sn/addrs.h>
#include <asm/sn/geo.h>
-#include "xtalk/xwidgetdev.h"
-#include "xtalk/hubdev.h"
+#include <asm/sn/pcibr_provider.h>
#include <asm/sn/pcibus_provider_defs.h>
#include <asm/sn/pcidev.h>
-#include "pci/tiocp.h"
-#include "pci/pic.h"
-#include "pci/pcibr_provider.h"
-#include "pci/tiocp.h"
+#include <asm/sn/pic.h>
+#include <asm/sn/sn_sal.h>
+#include <asm/sn/tiocp.h>
#include "tio.h"
-#include <asm/sn/addrs.h>
+#include "xtalk/xwidgetdev.h"
+#include "xtalk/hubdev.h"
extern int sn_ioif_inited;
* Copyright (C) 2001-2004 Silicon Graphics, Inc. All rights reserved.
*/
-#include <linux/types.h>
#include <linux/interrupt.h>
+#include <linux/types.h>
#include <linux/pci.h>
-#include <asm/sn/sn_sal.h>
-#include "xtalk/xwidgetdev.h"
+#include <asm/sn/addrs.h>
#include <asm/sn/geo.h>
-#include "xtalk/hubdev.h"
+#include <asm/sn/pcibr_provider.h>
#include <asm/sn/pcibus_provider_defs.h>
#include <asm/sn/pcidev.h>
-#include "pci/pcibr_provider.h"
-#include <asm/sn/addrs.h>
+#include <asm/sn/sn_sal.h>
+#include "xtalk/xwidgetdev.h"
+#include "xtalk/hubdev.h"
+
+int
+sal_pcibr_slot_enable(struct pcibus_info *soft, int device, void *resp)
+{
+ struct ia64_sal_retval ret_stuff;
+ uint64_t busnum;
+
+ ret_stuff.status = 0;
+ ret_stuff.v0 = 0;
+ busnum = soft->pbi_buscommon.bs_persist_busnum;
+ SAL_CALL_NOLOCK(ret_stuff, (u64) SN_SAL_IOIF_SLOT_ENABLE, (u64) busnum,
+ (u64) device, (u64) resp, 0, 0, 0, 0);
+
+ return (int)ret_stuff.v0;
+}
+
+int
+sal_pcibr_slot_disable(struct pcibus_info *soft, int device, int action,
+ void *resp)
+{
+ struct ia64_sal_retval ret_stuff;
+ uint64_t busnum;
+
+ ret_stuff.status = 0;
+ ret_stuff.v0 = 0;
+
+ busnum = soft->pbi_buscommon.bs_persist_busnum;
+ SAL_CALL_NOLOCK(ret_stuff, (u64) SN_SAL_IOIF_SLOT_DISABLE,
+ (u64) busnum, (u64) device, (u64) action,
+ (u64) resp, 0, 0, 0);
+
+ return (int)ret_stuff.v0;
+}
static int sal_pcibr_error_interrupt(struct pcibus_info *soft)
{
return 0;
}
+
+EXPORT_SYMBOL_GPL(sal_pcibr_slot_enable);
+EXPORT_SYMBOL_GPL(sal_pcibr_slot_disable);
* Copyright (C) 2004 Silicon Graphics, Inc. All rights reserved.
*/
-#include <linux/types.h>
#include <linux/interrupt.h>
+#include <linux/types.h>
+#include <asm/sn/pcibr_provider.h>
#include <asm/sn/pcibus_provider_defs.h>
#include <asm/sn/pcidev.h>
-#include "pci/tiocp.h"
-#include "pci/pic.h"
-#include "pci/pcibr_provider.h"
+#include <asm/sn/pic.h>
+#include <asm/sn/tiocp.h>
union br_ptr {
struct tiocp tio;
/* sanity check prom rev */
- if (sn_sal_rev_major() < 4 ||
- (sn_sal_rev_major() == 4 && sn_sal_rev_minor() < 6)) {
+ if (sn_sal_rev() < 0x0406) {
printk
(KERN_ERR "%s: SGI prom rev 4.06 or greater required "
"for tioca support\n", __FUNCTION__);
*
* Setup routines for Renesas M32700UT Board
*
- * Copyright (c) 2002 Hiroyuki Kondo, Hirokazu Takata,
- * Hitoshi Yamamoto, Takeo Takahashi
+ * Copyright (c) 2002-2005 Hiroyuki Kondo, Hirokazu Takata,
+ * Hitoshi Yamamoto, Takeo Takahashi
*
* This file is subject to the terms and conditions of the GNU General
* Public License. See the file "COPYING" in the main directory of this
icu_data[M32R_IRQ_INT2].icucr = M32R_ICUCR_IEN|M32R_ICUCR_ISMOD01;
enable_m32700ut_irq(M32R_IRQ_INT2);
-//#if defined(CONFIG_VIDEO_M32R_AR)
+#if defined(CONFIG_VIDEO_M32R_AR)
/*
* INT3# is used for AR
*/
irq_desc[M32R_IRQ_INT3].depth = 1;
icu_data[M32R_IRQ_INT3].icucr = M32R_ICUCR_IEN|M32R_ICUCR_ISMOD10;
disable_m32700ut_irq(M32R_IRQ_INT3);
-//#endif /* CONFIG_VIDEO_M32R_AR */
+#endif /* CONFIG_VIDEO_M32R_AR */
}
+#if defined(CONFIG_SMC91X)
+
#define LAN_IOSTART 0x300
#define LAN_IOEND 0x320
static struct resource smc91x_resources[] = {
.num_resources = ARRAY_SIZE(smc91x_resources),
.resource = smc91x_resources,
};
+#endif
+
+#if defined(CONFIG_FB_S1D13XXX)
+
+#include <video/s1d13xxxfb.h>
+#include <asm/s1d13806.h>
+
+static struct s1d13xxxfb_pdata s1d13xxxfb_data = {
+ .initregs = s1d13xxxfb_initregs,
+ .initregssize = ARRAY_SIZE(s1d13xxxfb_initregs),
+ .platform_init_video = NULL,
+#ifdef CONFIG_PM
+ .platform_suspend_video = NULL,
+ .platform_resume_video = NULL,
+#endif
+};
+
+static struct resource s1d13xxxfb_resources[] = {
+ [0] = {
+ .start = 0x10600000UL,
+ .end = 0x1073FFFFUL,
+ .flags = IORESOURCE_MEM,
+ },
+ [1] = {
+ .start = 0x10400000UL,
+ .end = 0x104001FFUL,
+ .flags = IORESOURCE_MEM,
+ }
+};
+
+static struct platform_device s1d13xxxfb_device = {
+ .name = S1D_DEVICENAME,
+ .id = 0,
+ .dev = {
+ .platform_data = &s1d13xxxfb_data,
+ },
+ .num_resources = ARRAY_SIZE(s1d13xxxfb_resources),
+ .resource = s1d13xxxfb_resources,
+};
+#endif
static int __init platform_init(void)
{
+#if defined(CONFIG_SMC91X)
platform_device_register(&smc91x_device);
+#endif
+#if defined(CONFIG_FB_S1D13XXX)
+ platform_device_register(&s1d13xxxfb_device);
+#endif
return 0;
}
arch_initcall(platform_init);
*
* Setup routines for Renesas MAPPI Board
*
- * Copyright (c) 2001, 2002 Hiroyuki Kondo, Hirokazu Takata,
- * Hitoshi Yamamoto
+ * Copyright (c) 2001-2005 Hiroyuki Kondo, Hirokazu Takata,
+ * Hitoshi Yamamoto
*/
#include <linux/config.h>
#include <linux/irq.h>
#include <linux/kernel.h>
#include <linux/init.h>
+#include <linux/device.h>
#include <asm/system.h>
#include <asm/m32r.h>
disable_mappi_irq(M32R_IRQ_INT2);
#endif /* CONFIG_M32RPCC */
}
+
+#if defined(CONFIG_FB_S1D13XXX)
+
+#include <video/s1d13xxxfb.h>
+#include <asm/s1d13806.h>
+
+static struct s1d13xxxfb_pdata s1d13xxxfb_data = {
+ .initregs = s1d13xxxfb_initregs,
+ .initregssize = ARRAY_SIZE(s1d13xxxfb_initregs),
+ .platform_init_video = NULL,
+#ifdef CONFIG_PM
+ .platform_suspend_video = NULL,
+ .platform_resume_video = NULL,
+#endif
+};
+
+static struct resource s1d13xxxfb_resources[] = {
+ [0] = {
+ .start = 0x10200000UL,
+ .end = 0x1033FFFFUL,
+ .flags = IORESOURCE_MEM,
+ },
+ [1] = {
+ .start = 0x10000000UL,
+ .end = 0x100001FFUL,
+ .flags = IORESOURCE_MEM,
+ }
+};
+
+static struct platform_device s1d13xxxfb_device = {
+ .name = S1D_DEVICENAME,
+ .id = 0,
+ .dev = {
+ .platform_data = &s1d13xxxfb_data,
+ },
+ .num_resources = ARRAY_SIZE(s1d13xxxfb_resources),
+ .resource = s1d13xxxfb_resources,
+};
+
+static int __init platform_init(void)
+{
+ platform_device_register(&s1d13xxxfb_device);
+ return 0;
+}
+arch_initcall(platform_init);
+#endif
*
* Setup routines for Renesas MAPPI-II(M3A-ZA36) Board
*
- * Copyright (c) 2001, 2002 Hiroyuki Kondo, Hirokazu Takata,
- * Hitoshi Yamamoto, Mamoru Sakugawa
+ * Copyright (c) 2001-2005 Hiroyuki Kondo, Hirokazu Takata,
+ * Hitoshi Yamamoto, Mamoru Sakugawa
*/
#include <linux/config.h>
*
* Setup routines for Renesas MAPPI-III(M3A-2170) Board
*
- * Copyright (c) 2001-2005 Hiroyuki Kondo, Hirokazu Takata,
- * Hitoshi Yamamoto, Mamoru Sakugawa
+ * Copyright (c) 2001-2005 Hiroyuki Kondo, Hirokazu Takata,
+ * Hitoshi Yamamoto, Mamoru Sakugawa
*/
#include <linux/config.h>
#endif /* CONFIG_M32R_CFC */
}
+#if defined(CONFIG_SMC91X)
+
#define LAN_IOSTART 0x300
#define LAN_IOEND 0x320
static struct resource smc91x_resources[] = {
.resource = smc91x_resources,
};
+#endif
+
+#if defined(CONFIG_FB_S1D13XXX)
+
+#include <video/s1d13xxxfb.h>
+#include <asm/s1d13806.h>
+
+static struct s1d13xxxfb_pdata s1d13xxxfb_data = {
+ .initregs = s1d13xxxfb_initregs,
+ .initregssize = ARRAY_SIZE(s1d13xxxfb_initregs),
+ .platform_init_video = NULL,
+#ifdef CONFIG_PM
+ .platform_suspend_video = NULL,
+ .platform_resume_video = NULL,
+#endif
+};
+
+static struct resource s1d13xxxfb_resources[] = {
+ [0] = {
+ .start = 0x1d600000UL,
+ .end = 0x1d73FFFFUL,
+ .flags = IORESOURCE_MEM,
+ },
+ [1] = {
+ .start = 0x1d400000UL,
+ .end = 0x1d4001FFUL,
+ .flags = IORESOURCE_MEM,
+ }
+};
+
+static struct platform_device s1d13xxxfb_device = {
+ .name = S1D_DEVICENAME,
+ .id = 0,
+ .dev = {
+ .platform_data = &s1d13xxxfb_data,
+ },
+ .num_resources = ARRAY_SIZE(s1d13xxxfb_resources),
+ .resource = s1d13xxxfb_resources,
+};
+#endif
+
static int __init platform_init(void)
{
+#if defined(CONFIG_SMC91X)
platform_device_register(&smc91x_device);
+#endif
+#if defined(CONFIG_FB_S1D13XXX)
+ platform_device_register(&s1d13xxxfb_device);
+#endif
return 0;
}
arch_initcall(platform_init);
*
* Setup routines for OAKS32R Board
*
- * Copyright (c) 2002-2004 Hiroyuki Kondo, Hirokazu Takata,
- * Hitoshi Yamamoto, Mamoru Sakugawa
+ * Copyright (c) 2002-2005 Hiroyuki Kondo, Hirokazu Takata,
+ * Hitoshi Yamamoto, Mamoru Sakugawa
*/
#include <linux/config.h>
icu_data[M32R_IRQ_SIO1_S].icucr = 0;
disable_oaks32r_irq(M32R_IRQ_SIO1_S);
#endif /* CONFIG_SERIAL_M32R_SIO */
-
}
*
* Setup routines for Renesas OPSPUT Board
*
- * Copyright (c) 2002-2004
+ * Copyright (c) 2002-2005
* Hiroyuki Kondo, Hirokazu Takata,
* Hitoshi Yamamoto, Takeo Takahashi, Mamoru Sakugawa
*
icu_data[M32R_IRQ_INT2].icucr = M32R_ICUCR_IEN|M32R_ICUCR_ISMOD01;
enable_opsput_irq(M32R_IRQ_INT2);
-//#if defined(CONFIG_VIDEO_M32R_AR)
+#if defined(CONFIG_VIDEO_M32R_AR)
/*
* INT3# is used for AR
*/
irq_desc[M32R_IRQ_INT3].depth = 1;
icu_data[M32R_IRQ_INT3].icucr = M32R_ICUCR_IEN|M32R_ICUCR_ISMOD10;
disable_opsput_irq(M32R_IRQ_INT3);
-//#endif /* CONFIG_VIDEO_M32R_AR */
+#endif /* CONFIG_VIDEO_M32R_AR */
}
+#if defined(CONFIG_SMC91X)
+
#define LAN_IOSTART 0x300
#define LAN_IOEND 0x320
static struct resource smc91x_resources[] = {
.num_resources = ARRAY_SIZE(smc91x_resources),
.resource = smc91x_resources,
};
+#endif
+
+#if defined(CONFIG_FB_S1D13XXX)
+
+#include <video/s1d13xxxfb.h>
+#include <asm/s1d13806.h>
+
+static struct s1d13xxxfb_pdata s1d13xxxfb_data = {
+ .initregs = s1d13xxxfb_initregs,
+ .initregssize = ARRAY_SIZE(s1d13xxxfb_initregs),
+ .platform_init_video = NULL,
+#ifdef CONFIG_PM
+ .platform_suspend_video = NULL,
+ .platform_resume_video = NULL,
+#endif
+};
+
+static struct resource s1d13xxxfb_resources[] = {
+ [0] = {
+ .start = 0x10600000UL,
+ .end = 0x1073FFFFUL,
+ .flags = IORESOURCE_MEM,
+ },
+ [1] = {
+ .start = 0x10400000UL,
+ .end = 0x104001FFUL,
+ .flags = IORESOURCE_MEM,
+ }
+};
+
+static struct platform_device s1d13xxxfb_device = {
+ .name = S1D_DEVICENAME,
+ .id = 0,
+ .dev = {
+ .platform_data = &s1d13xxxfb_data,
+ },
+ .num_resources = ARRAY_SIZE(s1d13xxxfb_resources),
+ .resource = s1d13xxxfb_resources,
+};
+#endif
static int __init platform_init(void)
{
+#if defined(CONFIG_SMC91X)
platform_device_register(&smc91x_device);
+#endif
+#if defined(CONFIG_FB_S1D13XXX)
+ platform_device_register(&s1d13xxxfb_device);
+#endif
return 0;
}
arch_initcall(platform_init);
return offset;
}
-static int __pmac pmac_cpufreq_suspend(struct cpufreq_policy *policy, u32 state)
+static int __pmac pmac_cpufreq_suspend(struct cpufreq_policy *policy, pm_message_t pmsg)
{
/* Ok, this could be made a bit smarter, but let's be robust for now. We
* always force a speed change to high speed before sleep, to make sure
#endif
struct cpu_spec cpu_specs[] = {
- { /* Power3 */
- 0xffff0000, 0x00400000, "POWER3 (630)",
- CPU_FTR_SPLIT_ID_CACHE | CPU_FTR_USE_TB | CPU_FTR_HPTE_TABLE |
- CPU_FTR_IABR | CPU_FTR_PMC8,
- COMMON_USER_PPC64,
- 128, 128,
- __setup_cpu_power3,
- COMMON_PPC64_FW
- },
- { /* Power3+ */
- 0xffff0000, 0x00410000, "POWER3 (630+)",
- CPU_FTR_SPLIT_ID_CACHE | CPU_FTR_USE_TB | CPU_FTR_HPTE_TABLE |
- CPU_FTR_IABR | CPU_FTR_PMC8,
- COMMON_USER_PPC64,
- 128, 128,
- __setup_cpu_power3,
- COMMON_PPC64_FW
- },
- { /* Northstar */
- 0xffff0000, 0x00330000, "RS64-II (northstar)",
- CPU_FTR_SPLIT_ID_CACHE | CPU_FTR_USE_TB | CPU_FTR_HPTE_TABLE |
- CPU_FTR_IABR | CPU_FTR_PMC8 | CPU_FTR_MMCRA,
- COMMON_USER_PPC64,
- 128, 128,
- __setup_cpu_power3,
- COMMON_PPC64_FW
- },
- { /* Pulsar */
- 0xffff0000, 0x00340000, "RS64-III (pulsar)",
- CPU_FTR_SPLIT_ID_CACHE | CPU_FTR_USE_TB | CPU_FTR_HPTE_TABLE |
- CPU_FTR_IABR | CPU_FTR_PMC8 | CPU_FTR_MMCRA,
- COMMON_USER_PPC64,
- 128, 128,
- __setup_cpu_power3,
- COMMON_PPC64_FW
- },
- { /* I-star */
- 0xffff0000, 0x00360000, "RS64-III (icestar)",
- CPU_FTR_SPLIT_ID_CACHE | CPU_FTR_USE_TB | CPU_FTR_HPTE_TABLE |
- CPU_FTR_IABR | CPU_FTR_PMC8 | CPU_FTR_MMCRA,
- COMMON_USER_PPC64,
- 128, 128,
- __setup_cpu_power3,
- COMMON_PPC64_FW
- },
- { /* S-star */
- 0xffff0000, 0x00370000, "RS64-IV (sstar)",
- CPU_FTR_SPLIT_ID_CACHE | CPU_FTR_USE_TB | CPU_FTR_HPTE_TABLE |
- CPU_FTR_IABR | CPU_FTR_PMC8 | CPU_FTR_MMCRA,
- COMMON_USER_PPC64,
- 128, 128,
- __setup_cpu_power3,
- COMMON_PPC64_FW
- },
- { /* Power4 */
- 0xffff0000, 0x00350000, "POWER4 (gp)",
- CPU_FTR_SPLIT_ID_CACHE | CPU_FTR_USE_TB | CPU_FTR_HPTE_TABLE |
- CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_PMC8 | CPU_FTR_MMCRA,
- COMMON_USER_PPC64,
- 128, 128,
- __setup_cpu_power4,
- COMMON_PPC64_FW
- },
- { /* Power4+ */
- 0xffff0000, 0x00380000, "POWER4+ (gq)",
- CPU_FTR_SPLIT_ID_CACHE | CPU_FTR_USE_TB | CPU_FTR_HPTE_TABLE |
- CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_PMC8 | CPU_FTR_MMCRA,
- COMMON_USER_PPC64,
- 128, 128,
- __setup_cpu_power4,
- COMMON_PPC64_FW
- },
- { /* PPC970 */
- 0xffff0000, 0x00390000, "PPC970",
- CPU_FTR_SPLIT_ID_CACHE | CPU_FTR_USE_TB | CPU_FTR_HPTE_TABLE |
- CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_ALTIVEC_COMP |
- CPU_FTR_CAN_NAP | CPU_FTR_PMC8 | CPU_FTR_MMCRA,
- COMMON_USER_PPC64 | PPC_FEATURE_HAS_ALTIVEC_COMP,
- 128, 128,
- __setup_cpu_ppc970,
- COMMON_PPC64_FW
- },
- { /* PPC970FX */
- 0xffff0000, 0x003c0000, "PPC970FX",
- CPU_FTR_SPLIT_ID_CACHE | CPU_FTR_USE_TB | CPU_FTR_HPTE_TABLE |
- CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_ALTIVEC_COMP |
- CPU_FTR_CAN_NAP | CPU_FTR_PMC8 | CPU_FTR_MMCRA,
- COMMON_USER_PPC64 | PPC_FEATURE_HAS_ALTIVEC_COMP,
- 128, 128,
- __setup_cpu_ppc970,
- COMMON_PPC64_FW
- },
- { /* Power5 */
- 0xffff0000, 0x003a0000, "POWER5 (gr)",
- CPU_FTR_SPLIT_ID_CACHE | CPU_FTR_USE_TB | CPU_FTR_HPTE_TABLE |
- CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_MMCRA | CPU_FTR_SMT |
- CPU_FTR_COHERENT_ICACHE | CPU_FTR_LOCKLESS_TLBIE |
- CPU_FTR_MMCRA_SIHV,
- COMMON_USER_PPC64,
- 128, 128,
- __setup_cpu_power4,
- COMMON_PPC64_FW
- },
- { /* Power5 */
- 0xffff0000, 0x003b0000, "POWER5 (gs)",
- CPU_FTR_SPLIT_ID_CACHE | CPU_FTR_USE_TB | CPU_FTR_HPTE_TABLE |
- CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_MMCRA | CPU_FTR_SMT |
- CPU_FTR_COHERENT_ICACHE | CPU_FTR_LOCKLESS_TLBIE |
- CPU_FTR_MMCRA_SIHV,
- COMMON_USER_PPC64,
- 128, 128,
- __setup_cpu_power4,
- COMMON_PPC64_FW
- },
- { /* BE DD1.x */
- 0xffff0000, 0x00700000, "Broadband Engine",
- CPU_FTR_SPLIT_ID_CACHE | CPU_FTR_USE_TB | CPU_FTR_HPTE_TABLE |
- CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_ALTIVEC_COMP |
- CPU_FTR_SMT,
- COMMON_USER_PPC64 | PPC_FEATURE_HAS_ALTIVEC_COMP,
- 128, 128,
- __setup_cpu_be,
- COMMON_PPC64_FW
- },
- { /* default match */
- 0x00000000, 0x00000000, "POWER4 (compatible)",
- CPU_FTR_SPLIT_ID_CACHE | CPU_FTR_USE_TB | CPU_FTR_HPTE_TABLE |
- CPU_FTR_PPCAS_ARCH_V2,
- COMMON_USER_PPC64,
- 128, 128,
- __setup_cpu_power4,
- COMMON_PPC64_FW
- }
+ { /* Power3 */
+ .pvr_mask = 0xffff0000,
+ .pvr_value = 0x00400000,
+ .cpu_name = "POWER3 (630)",
+ .cpu_features = CPU_FTR_SPLIT_ID_CACHE |
+ CPU_FTR_USE_TB | CPU_FTR_HPTE_TABLE | CPU_FTR_IABR |
+ CPU_FTR_PMC8,
+ .cpu_user_features = COMMON_USER_PPC64,
+ .icache_bsize = 128,
+ .dcache_bsize = 128,
+ .cpu_setup = __setup_cpu_power3,
+ .firmware_features = COMMON_PPC64_FW,
+ },
+ { /* Power3+ */
+ .pvr_mask = 0xffff0000,
+ .pvr_value = 0x00410000,
+ .cpu_name = "POWER3 (630+)",
+ .cpu_features = CPU_FTR_SPLIT_ID_CACHE |
+ CPU_FTR_USE_TB | CPU_FTR_HPTE_TABLE | CPU_FTR_IABR |
+ CPU_FTR_PMC8,
+ .cpu_user_features = COMMON_USER_PPC64,
+ .icache_bsize = 128,
+ .dcache_bsize = 128,
+ .cpu_setup = __setup_cpu_power3,
+ .firmware_features = COMMON_PPC64_FW,
+ },
+ { /* Northstar */
+ .pvr_mask = 0xffff0000,
+ .pvr_value = 0x00330000,
+ .cpu_name = "RS64-II (northstar)",
+ .cpu_features = CPU_FTR_SPLIT_ID_CACHE |
+ CPU_FTR_USE_TB | CPU_FTR_HPTE_TABLE | CPU_FTR_IABR |
+ CPU_FTR_PMC8 | CPU_FTR_MMCRA | CPU_FTR_CTRL,
+ .cpu_user_features = COMMON_USER_PPC64,
+ .icache_bsize = 128,
+ .dcache_bsize = 128,
+ .cpu_setup = __setup_cpu_power3,
+ .firmware_features = COMMON_PPC64_FW,
+ },
+ { /* Pulsar */
+ .pvr_mask = 0xffff0000,
+ .pvr_value = 0x00340000,
+ .cpu_name = "RS64-III (pulsar)",
+ .cpu_features = CPU_FTR_SPLIT_ID_CACHE |
+ CPU_FTR_USE_TB | CPU_FTR_HPTE_TABLE | CPU_FTR_IABR |
+ CPU_FTR_PMC8 | CPU_FTR_MMCRA | CPU_FTR_CTRL,
+ .cpu_user_features = COMMON_USER_PPC64,
+ .icache_bsize = 128,
+ .dcache_bsize = 128,
+ .cpu_setup = __setup_cpu_power3,
+ .firmware_features = COMMON_PPC64_FW,
+ },
+ { /* I-star */
+ .pvr_mask = 0xffff0000,
+ .pvr_value = 0x00360000,
+ .cpu_name = "RS64-III (icestar)",
+ .cpu_features = CPU_FTR_SPLIT_ID_CACHE |
+ CPU_FTR_USE_TB | CPU_FTR_HPTE_TABLE | CPU_FTR_IABR |
+ CPU_FTR_PMC8 | CPU_FTR_MMCRA | CPU_FTR_CTRL,
+ .cpu_user_features = COMMON_USER_PPC64,
+ .icache_bsize = 128,
+ .dcache_bsize = 128,
+ .cpu_setup = __setup_cpu_power3,
+ .firmware_features = COMMON_PPC64_FW,
+ },
+ { /* S-star */
+ .pvr_mask = 0xffff0000,
+ .pvr_value = 0x00370000,
+ .cpu_name = "RS64-IV (sstar)",
+ .cpu_features = CPU_FTR_SPLIT_ID_CACHE |
+ CPU_FTR_USE_TB | CPU_FTR_HPTE_TABLE | CPU_FTR_IABR |
+ CPU_FTR_PMC8 | CPU_FTR_MMCRA | CPU_FTR_CTRL,
+ .cpu_user_features = COMMON_USER_PPC64,
+ .icache_bsize = 128,
+ .dcache_bsize = 128,
+ .cpu_setup = __setup_cpu_power3,
+ .firmware_features = COMMON_PPC64_FW,
+ },
+ { /* Power4 */
+ .pvr_mask = 0xffff0000,
+ .pvr_value = 0x00350000,
+ .cpu_name = "POWER4 (gp)",
+ .cpu_features = CPU_FTR_SPLIT_ID_CACHE |
+ CPU_FTR_USE_TB | CPU_FTR_HPTE_TABLE |
+ CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_PMC8 | CPU_FTR_MMCRA,
+ .cpu_user_features = COMMON_USER_PPC64,
+ .icache_bsize = 128,
+ .dcache_bsize = 128,
+ .cpu_setup = __setup_cpu_power4,
+ .firmware_features = COMMON_PPC64_FW,
+ },
+ { /* Power4+ */
+ .pvr_mask = 0xffff0000,
+ .pvr_value = 0x00380000,
+ .cpu_name = "POWER4+ (gq)",
+ .cpu_features = CPU_FTR_SPLIT_ID_CACHE |
+ CPU_FTR_USE_TB | CPU_FTR_HPTE_TABLE |
+ CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_PMC8 | CPU_FTR_MMCRA,
+ .cpu_user_features = COMMON_USER_PPC64,
+ .icache_bsize = 128,
+ .dcache_bsize = 128,
+ .cpu_setup = __setup_cpu_power4,
+ .firmware_features = COMMON_PPC64_FW,
+ },
+ { /* PPC970 */
+ .pvr_mask = 0xffff0000,
+ .pvr_value = 0x00390000,
+ .cpu_name = "PPC970",
+ .cpu_features = CPU_FTR_SPLIT_ID_CACHE |
+ CPU_FTR_USE_TB | CPU_FTR_HPTE_TABLE |
+ CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_ALTIVEC_COMP |
+ CPU_FTR_CAN_NAP | CPU_FTR_PMC8 | CPU_FTR_MMCRA,
+ .cpu_user_features = COMMON_USER_PPC64 |
+ PPC_FEATURE_HAS_ALTIVEC_COMP,
+ .icache_bsize = 128,
+ .dcache_bsize = 128,
+ .cpu_setup = __setup_cpu_ppc970,
+ .firmware_features = COMMON_PPC64_FW,
+ },
+ { /* PPC970FX */
+ .pvr_mask = 0xffff0000,
+ .pvr_value = 0x003c0000,
+ .cpu_name = "PPC970FX",
+ .cpu_features = CPU_FTR_SPLIT_ID_CACHE |
+ CPU_FTR_USE_TB | CPU_FTR_HPTE_TABLE |
+ CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_ALTIVEC_COMP |
+ CPU_FTR_CAN_NAP | CPU_FTR_PMC8 | CPU_FTR_MMCRA,
+ .cpu_user_features = COMMON_USER_PPC64 |
+ PPC_FEATURE_HAS_ALTIVEC_COMP,
+ .icache_bsize = 128,
+ .dcache_bsize = 128,
+ .cpu_setup = __setup_cpu_ppc970,
+ .firmware_features = COMMON_PPC64_FW,
+ },
+ { /* Power5 */
+ .pvr_mask = 0xffff0000,
+ .pvr_value = 0x003a0000,
+ .cpu_name = "POWER5 (gr)",
+ .cpu_features = CPU_FTR_SPLIT_ID_CACHE |
+ CPU_FTR_USE_TB | CPU_FTR_HPTE_TABLE |
+ CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_MMCRA | CPU_FTR_SMT |
+ CPU_FTR_COHERENT_ICACHE | CPU_FTR_LOCKLESS_TLBIE |
+ CPU_FTR_MMCRA_SIHV,
+ .cpu_user_features = COMMON_USER_PPC64,
+ .icache_bsize = 128,
+ .dcache_bsize = 128,
+ .cpu_setup = __setup_cpu_power4,
+ .firmware_features = COMMON_PPC64_FW,
+ },
+ { /* Power5 */
+ .pvr_mask = 0xffff0000,
+ .pvr_value = 0x003b0000,
+ .cpu_name = "POWER5 (gs)",
+ .cpu_features = CPU_FTR_SPLIT_ID_CACHE |
+ CPU_FTR_USE_TB | CPU_FTR_HPTE_TABLE |
+ CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_MMCRA | CPU_FTR_SMT |
+ CPU_FTR_COHERENT_ICACHE | CPU_FTR_LOCKLESS_TLBIE |
+ CPU_FTR_MMCRA_SIHV,
+ .cpu_user_features = COMMON_USER_PPC64,
+ .icache_bsize = 128,
+ .dcache_bsize = 128,
+ .cpu_setup = __setup_cpu_power4,
+ .firmware_features = COMMON_PPC64_FW,
+ },
+ { /* BE DD1.x */
+ .pvr_mask = 0xffff0000,
+ .pvr_value = 0x00700000,
+ .cpu_name = "Broadband Engine",
+ .cpu_features = CPU_FTR_SPLIT_ID_CACHE |
+ CPU_FTR_USE_TB | CPU_FTR_HPTE_TABLE |
+ CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_ALTIVEC_COMP |
+ CPU_FTR_SMT,
+ .cpu_user_features = COMMON_USER_PPC64 |
+ PPC_FEATURE_HAS_ALTIVEC_COMP,
+ .icache_bsize = 128,
+ .dcache_bsize = 128,
+ .cpu_setup = __setup_cpu_be,
+ .firmware_features = COMMON_PPC64_FW,
+ },
+ { /* default match */
+ .pvr_mask = 0x00000000,
+ .pvr_value = 0x00000000,
+ .cpu_name = "POWER4 (compatible)",
+ .cpu_features = CPU_FTR_SPLIT_ID_CACHE |
+ CPU_FTR_USE_TB | CPU_FTR_HPTE_TABLE |
+ CPU_FTR_PPCAS_ARCH_V2,
+ .cpu_user_features = COMMON_USER_PPC64,
+ .icache_bsize = 128,
+ .dcache_bsize = 128,
+ .cpu_setup = __setup_cpu_power4,
+ .firmware_features = COMMON_PPC64_FW,
+ }
};
firmware_feature_t firmware_features_table[FIRMWARE_MAX_FEATURES] = {
- {FW_FEATURE_PFT, "hcall-pft"},
- {FW_FEATURE_TCE, "hcall-tce"},
- {FW_FEATURE_SPRG0, "hcall-sprg0"},
- {FW_FEATURE_DABR, "hcall-dabr"},
- {FW_FEATURE_COPY, "hcall-copy"},
- {FW_FEATURE_ASR, "hcall-asr"},
- {FW_FEATURE_DEBUG, "hcall-debug"},
- {FW_FEATURE_PERF, "hcall-perf"},
- {FW_FEATURE_DUMP, "hcall-dump"},
- {FW_FEATURE_INTERRUPT, "hcall-interrupt"},
- {FW_FEATURE_MIGRATE, "hcall-migrate"},
- {FW_FEATURE_PERFMON, "hcall-perfmon"},
- {FW_FEATURE_CRQ, "hcall-crq"},
- {FW_FEATURE_VIO, "hcall-vio"},
- {FW_FEATURE_RDMA, "hcall-rdma"},
- {FW_FEATURE_LLAN, "hcall-lLAN"},
- {FW_FEATURE_BULK, "hcall-bulk"},
- {FW_FEATURE_XDABR, "hcall-xdabr"},
- {FW_FEATURE_MULTITCE, "hcall-multi-tce"},
- {FW_FEATURE_SPLPAR, "hcall-splpar"},
+ {FW_FEATURE_PFT, "hcall-pft"},
+ {FW_FEATURE_TCE, "hcall-tce"},
+ {FW_FEATURE_SPRG0, "hcall-sprg0"},
+ {FW_FEATURE_DABR, "hcall-dabr"},
+ {FW_FEATURE_COPY, "hcall-copy"},
+ {FW_FEATURE_ASR, "hcall-asr"},
+ {FW_FEATURE_DEBUG, "hcall-debug"},
+ {FW_FEATURE_PERF, "hcall-perf"},
+ {FW_FEATURE_DUMP, "hcall-dump"},
+ {FW_FEATURE_INTERRUPT, "hcall-interrupt"},
+ {FW_FEATURE_MIGRATE, "hcall-migrate"},
+ {FW_FEATURE_PERFMON, "hcall-perfmon"},
+ {FW_FEATURE_CRQ, "hcall-crq"},
+ {FW_FEATURE_VIO, "hcall-vio"},
+ {FW_FEATURE_RDMA, "hcall-rdma"},
+ {FW_FEATURE_LLAN, "hcall-lLAN"},
+ {FW_FEATURE_BULK, "hcall-bulk"},
+ {FW_FEATURE_XDABR, "hcall-xdabr"},
+ {FW_FEATURE_MULTITCE, "hcall-multi-tce"},
+ {FW_FEATURE_SPLPAR, "hcall-splpar"},
};
label##_pSeries: \
HMT_MEDIUM; \
mtspr SPRG1,r13; /* save r13 */ \
+ RUNLATCH_ON(r13); \
EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, label##_common)
#define STD_EXCEPTION_ISERIES(n, label, area) \
label##_iSeries: \
HMT_MEDIUM; \
mtspr SPRG1,r13; /* save r13 */ \
+ RUNLATCH_ON(r13); \
EXCEPTION_PROLOG_ISERIES_1(area); \
EXCEPTION_PROLOG_ISERIES_2; \
b label##_common
label##_iSeries: \
HMT_MEDIUM; \
mtspr SPRG1,r13; /* save r13 */ \
+ RUNLATCH_ON(r13); \
EXCEPTION_PROLOG_ISERIES_1(PACA_EXGEN); \
lbz r10,PACAPROCENABLED(r13); \
cmpwi 0,r10,0; \
_machine_check_pSeries:
HMT_MEDIUM
mtspr SPRG1,r13 /* save r13 */
+ RUNLATCH_ON(r13)
EXCEPTION_PROLOG_PSERIES(PACA_EXMC, machine_check_common)
. = 0x300
data_access_slb_pSeries:
HMT_MEDIUM
mtspr SPRG1,r13
+ RUNLATCH_ON(r13)
mfspr r13,SPRG3 /* get paca address into r13 */
std r9,PACA_EXSLB+EX_R9(r13) /* save r9 - r12 */
std r10,PACA_EXSLB+EX_R10(r13)
instruction_access_slb_pSeries:
HMT_MEDIUM
mtspr SPRG1,r13
+ RUNLATCH_ON(r13)
mfspr r13,SPRG3 /* get paca address into r13 */
std r9,PACA_EXSLB+EX_R9(r13) /* save r9 - r12 */
std r10,PACA_EXSLB+EX_R10(r13)
.globl system_call_pSeries
system_call_pSeries:
HMT_MEDIUM
+ RUNLATCH_ON(r9)
mr r9,r13
mfmsr r10
mfspr r13,SPRG3
system_reset_fwnmi:
HMT_MEDIUM
mtspr SPRG1,r13 /* save r13 */
+ RUNLATCH_ON(r13)
EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, system_reset_common)
.globl machine_check_fwnmi
machine_check_fwnmi:
HMT_MEDIUM
mtspr SPRG1,r13 /* save r13 */
+ RUNLATCH_ON(r13)
EXCEPTION_PROLOG_PSERIES(PACA_EXMC, machine_check_common)
/*
.align 7
.globl data_access_common
data_access_common:
+ RUNLATCH_ON(r10) /* It won't fit in the 0x300 handler */
mfspr r10,DAR
std r10,PACA_EXGEN+EX_DAR(r13)
mfspr r10,DSISR
#include <linux/module.h>
#include <asm/hvcall.h>
#include <asm/hvconsole.h>
-#include <asm/prom.h>
/**
 * hvc_get_chars - retrieve characters from firmware for denoted vterm adapter
unsigned long got;
if (plpar_hcall(H_GET_TERM_CHAR, vtermno, 0, 0, 0, &got,
- (unsigned long *)buf, (unsigned long *)buf+1) == H_Success) {
- /*
- * Work around a HV bug where it gives us a null
- * after every \r. -- paulus
- */
- if (got > 0) {
- int i;
- for (i = 1; i < got; ++i) {
- if (buf[i] == 0 && buf[i-1] == '\r') {
- --got;
- if (i < got)
- memmove(&buf[i], &buf[i+1],
- got - i);
- }
- }
- }
+ (unsigned long *)buf, (unsigned long *)buf+1) == H_Success)
return got;
- }
return 0;
}
EXPORT_SYMBOL(hvc_get_chars);
+
/**
* hvc_put_chars: send characters to firmware for denoted vterm adapter
* @vtermno: The vtermno or unit_address of the adapter from which the data
}
EXPORT_SYMBOL(hvc_put_chars);
-
-/*
- * We hope/assume that the first vty found corresponds to the first console
- * device.
- */
-int hvc_find_vtys(void)
-{
- struct device_node *vty;
- int num_found = 0;
-
- for (vty = of_find_node_by_name(NULL, "vty"); vty != NULL;
- vty = of_find_node_by_name(vty, "vty")) {
- uint32_t *vtermno;
-
- /* We have statically defined space for only a certain number of
- * console adapters. */
- if (num_found >= MAX_NR_HVC_CONSOLES)
- break;
-
- vtermno = (uint32_t *)get_property(vty, "reg", NULL);
- if (!vtermno)
- continue;
-
- if (device_is_compatible(vty, "hvterm1")) {
- hvc_instantiate(*vtermno, num_found);
- ++num_found;
- }
- }
-
- return num_found;
-}
late_initcall(iSeries_src_init);
+static inline void process_iSeries_events(void)
+{
+ asm volatile ("li 0,0x5555; sc" : : : "r0", "r3");
+}
+
+static void yield_shared_processor(void)
+{
+ unsigned long tb;
+
+ HvCall_setEnabledInterrupts(HvCall_MaskIPI |
+ HvCall_MaskLpEvent |
+ HvCall_MaskLpProd |
+ HvCall_MaskTimeout);
+
+ tb = get_tb();
+ /* Compute future tb value when yield should expire */
+ HvCall_yieldProcessor(HvCall_YieldTimed, tb+tb_ticks_per_jiffy);
+
+ /*
+ * The decrementer stops during the yield. Force a fake decrementer
+ * here and let the timer_interrupt code sort out the actual time.
+ */
+ get_paca()->lppaca.int_dword.fields.decr_int = 1;
+ process_iSeries_events();
+}
+
+static int iseries_shared_idle(void)
+{
+ while (1) {
+ while (!need_resched() && !hvlpevent_is_pending()) {
+ local_irq_disable();
+ ppc64_runlatch_off();
+
+ /* Recheck with irqs off */
+ if (!need_resched() && !hvlpevent_is_pending())
+ yield_shared_processor();
+
+ HMT_medium();
+ local_irq_enable();
+ }
+
+ ppc64_runlatch_on();
+
+ if (hvlpevent_is_pending())
+ process_iSeries_events();
+
+ schedule();
+ }
+
+ return 0;
+}
+
+static int iseries_dedicated_idle(void)
+{
+ long oldval;
+
+ while (1) {
+ oldval = test_and_clear_thread_flag(TIF_NEED_RESCHED);
+
+ if (!oldval) {
+ set_thread_flag(TIF_POLLING_NRFLAG);
+
+ while (!need_resched()) {
+ ppc64_runlatch_off();
+ HMT_low();
+
+ if (hvlpevent_is_pending()) {
+ HMT_medium();
+ ppc64_runlatch_on();
+ process_iSeries_events();
+ }
+ }
+
+ HMT_medium();
+ clear_thread_flag(TIF_POLLING_NRFLAG);
+ } else {
+ set_need_resched();
+ }
+
+ ppc64_runlatch_on();
+ schedule();
+ }
+
+ return 0;
+}
+
#ifndef CONFIG_PCI
void __init iSeries_init_IRQ(void) { }
#endif
ppc_md.get_rtc_time = iSeries_get_rtc_time;
ppc_md.calibrate_decr = iSeries_calibrate_decr;
ppc_md.progress = iSeries_progress;
+
+ if (get_paca()->lppaca.shared_proc) {
+ ppc_md.idle_loop = iseries_shared_idle;
+ printk(KERN_INFO "Using shared processor idle loop\n");
+ } else {
+ ppc_md.idle_loop = iseries_dedicated_idle;
+ printk(KERN_INFO "Using dedicated idle loop\n");
+ }
}
#include <linux/kernel.h>
#include <linux/smp.h>
#include <linux/cpu.h>
-#include <linux/module.h>
#include <linux/sysctl.h>
-#include <linux/smp.h>
#include <asm/system.h>
#include <asm/processor.h>
-#include <asm/mmu.h>
#include <asm/cputable.h>
#include <asm/time.h>
-#include <asm/iSeries/HvCall.h>
-#include <asm/iSeries/ItLpQueue.h>
-#include <asm/plpar_wrappers.h>
#include <asm/systemcfg.h>
+#include <asm/machdep.h>
extern void power4_idle(void);
-static int (*idle_loop)(void);
-
-#ifdef CONFIG_PPC_ISERIES
-static unsigned long maxYieldTime = 0;
-static unsigned long minYieldTime = 0xffffffffffffffffUL;
-
-static inline void process_iSeries_events(void)
-{
- asm volatile ("li 0,0x5555; sc" : : : "r0", "r3");
-}
-
-static void yield_shared_processor(void)
-{
- unsigned long tb;
- unsigned long yieldTime;
-
- HvCall_setEnabledInterrupts(HvCall_MaskIPI |
- HvCall_MaskLpEvent |
- HvCall_MaskLpProd |
- HvCall_MaskTimeout);
-
- tb = get_tb();
- /* Compute future tb value when yield should expire */
- HvCall_yieldProcessor(HvCall_YieldTimed, tb+tb_ticks_per_jiffy);
-
- yieldTime = get_tb() - tb;
- if (yieldTime > maxYieldTime)
- maxYieldTime = yieldTime;
-
- if (yieldTime < minYieldTime)
- minYieldTime = yieldTime;
-
- /*
- * The decrementer stops during the yield. Force a fake decrementer
- * here and let the timer_interrupt code sort out the actual time.
- */
- get_paca()->lppaca.int_dword.fields.decr_int = 1;
- process_iSeries_events();
-}
-
-static int iSeries_idle(void)
-{
- struct paca_struct *lpaca;
- long oldval;
-
- /* ensure iSeries run light will be out when idle */
- ppc64_runlatch_off();
-
- lpaca = get_paca();
-
- while (1) {
- if (lpaca->lppaca.shared_proc) {
- if (hvlpevent_is_pending())
- process_iSeries_events();
- if (!need_resched())
- yield_shared_processor();
- } else {
- oldval = test_and_clear_thread_flag(TIF_NEED_RESCHED);
-
- if (!oldval) {
- set_thread_flag(TIF_POLLING_NRFLAG);
-
- while (!need_resched()) {
- HMT_medium();
- if (hvlpevent_is_pending())
- process_iSeries_events();
- HMT_low();
- }
-
- HMT_medium();
- clear_thread_flag(TIF_POLLING_NRFLAG);
- } else {
- set_need_resched();
- }
- }
-
- ppc64_runlatch_on();
- schedule();
- ppc64_runlatch_off();
- }
-
- return 0;
-}
-
-#else
-
-static int default_idle(void)
+int default_idle(void)
{
long oldval;
unsigned int cpu = smp_processor_id();
set_thread_flag(TIF_POLLING_NRFLAG);
while (!need_resched() && !cpu_is_offline(cpu)) {
- barrier();
+ ppc64_runlatch_off();
+
/*
* Go into low thread priority and possibly
* low power mode.
set_need_resched();
}
+ ppc64_runlatch_on();
schedule();
if (cpu_is_offline(cpu) && system_state == SYSTEM_RUNNING)
cpu_die();
return 0;
}
-#ifdef CONFIG_PPC_PSERIES
-
-DECLARE_PER_CPU(unsigned long, smt_snooze_delay);
-
-int dedicated_idle(void)
+int native_idle(void)
{
- long oldval;
- struct paca_struct *lpaca = get_paca(), *ppaca;
- unsigned long start_snooze;
- unsigned long *smt_snooze_delay = &__get_cpu_var(smt_snooze_delay);
- unsigned int cpu = smp_processor_id();
-
- ppaca = &paca[cpu ^ 1];
-
while (1) {
- /*
- * Indicate to the HV that we are idle. Now would be
- * a good time to find other work to dispatch.
- */
- lpaca->lppaca.idle = 1;
-
- oldval = test_and_clear_thread_flag(TIF_NEED_RESCHED);
- if (!oldval) {
- set_thread_flag(TIF_POLLING_NRFLAG);
- start_snooze = __get_tb() +
- *smt_snooze_delay * tb_ticks_per_usec;
- while (!need_resched() && !cpu_is_offline(cpu)) {
- /*
- * Go into low thread priority and possibly
- * low power mode.
- */
- HMT_low();
- HMT_very_low();
-
- if (*smt_snooze_delay == 0 ||
- __get_tb() < start_snooze)
- continue;
-
- HMT_medium();
-
- if (!(ppaca->lppaca.idle)) {
- local_irq_disable();
-
- /*
- * We are about to sleep the thread
- * and so wont be polling any
- * more.
- */
- clear_thread_flag(TIF_POLLING_NRFLAG);
-
- /*
- * SMT dynamic mode. Cede will result
- * in this thread going dormant, if the
- * partner thread is still doing work.
- * Thread wakes up if partner goes idle,
- * an interrupt is presented, or a prod
- * occurs. Returning from the cede
- * enables external interrupts.
- */
- if (!need_resched())
- cede_processor();
- else
- local_irq_enable();
- } else {
- /*
- * Give the HV an opportunity at the
- * processor, since we are not doing
- * any work.
- */
- poll_pending();
- }
- }
-
- clear_thread_flag(TIF_POLLING_NRFLAG);
- } else {
- set_need_resched();
- }
-
- HMT_medium();
- lpaca->lppaca.idle = 0;
- schedule();
- if (cpu_is_offline(cpu) && system_state == SYSTEM_RUNNING)
- cpu_die();
- }
- return 0;
-}
-
-static int shared_idle(void)
-{
- struct paca_struct *lpaca = get_paca();
- unsigned int cpu = smp_processor_id();
-
- while (1) {
- /*
- * Indicate to the HV that we are idle. Now would be
- * a good time to find other work to dispatch.
- */
- lpaca->lppaca.idle = 1;
+ ppc64_runlatch_off();
- while (!need_resched() && !cpu_is_offline(cpu)) {
- local_irq_disable();
+ if (!need_resched())
+ power4_idle();
- /*
- * Yield the processor to the hypervisor. We return if
- * an external interrupt occurs (which are driven prior
- * to returning here) or if a prod occurs from another
- * processor. When returning here, external interrupts
- * are enabled.
- *
- * Check need_resched() again with interrupts disabled
- * to avoid a race.
- */
- if (!need_resched())
- cede_processor();
- else
- local_irq_enable();
+ if (need_resched()) {
+ ppc64_runlatch_on();
+ schedule();
}
- HMT_medium();
- lpaca->lppaca.idle = 0;
- schedule();
if (cpu_is_offline(smp_processor_id()) &&
system_state == SYSTEM_RUNNING)
cpu_die();
return 0;
}
-#endif /* CONFIG_PPC_PSERIES */
-
-static int native_idle(void)
-{
- while(1) {
- /* check CPU type here */
- if (!need_resched())
- power4_idle();
- if (need_resched())
- schedule();
-
- if (cpu_is_offline(raw_smp_processor_id()) &&
- system_state == SYSTEM_RUNNING)
- cpu_die();
- }
- return 0;
-}
-
-#endif /* CONFIG_PPC_ISERIES */
-
void cpu_idle(void)
{
- idle_loop();
+ BUG_ON(NULL == ppc_md.idle_loop);
+ ppc_md.idle_loop();
}
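
Every idle loop in this series now drops the run latch while it spins and raises it again before scheduling. On CPUs that advertise CPU_FTR_CTRL the latch is a bit in the CTRL special-purpose register, which lets the hardware (and anything sampling that register) tell idle spinning apart from useful work. A rough sketch of what the helpers are assumed to do, with the feature check omitted:

/* Simplified sketch of the runlatch helpers; the real macros also test
 * cpu_has_feature(CPU_FTR_CTRL) before touching the register. */
static inline void runlatch_on_sketch(void)
{
	unsigned long ctrl = mfspr(SPRN_CTRLF);		/* read CTRL */

	mtspr(SPRN_CTRLT, ctrl | CTRL_RUNLATCH);	/* mark the thread as running */
}

static inline void runlatch_off_sketch(void)
{
	unsigned long ctrl = mfspr(SPRN_CTRLF);

	mtspr(SPRN_CTRLT, ctrl & ~CTRL_RUNLATCH);	/* mark it idle */
}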
int powersave_nap;
}
__initcall(register_powersave_nap_sysctl);
#endif
-
-int idle_setup(void)
-{
- /*
- * Move that junk to each platform specific file, eventually define
- * a pSeries_idle for shared processor stuff
- */
-#ifdef CONFIG_PPC_ISERIES
- idle_loop = iSeries_idle;
- return 1;
-#else
- idle_loop = default_idle;
-#endif
-#ifdef CONFIG_PPC_PSERIES
- if (systemcfg->platform & PLATFORM_PSERIES) {
- if (cur_cpu_spec->firmware_features & FW_FEATURE_SPLPAR) {
- if (get_paca()->lppaca.shared_proc) {
- printk(KERN_INFO "Using shared processor idle loop\n");
- idle_loop = shared_idle;
- } else {
- printk(KERN_INFO "Using dedicated idle loop\n");
- idle_loop = dedicated_idle;
- }
- } else {
- printk(KERN_INFO "Using default idle loop\n");
- idle_loop = default_idle;
- }
- }
-#endif /* CONFIG_PPC_PSERIES */
-#ifndef CONFIG_PPC_ISERIES
- if (systemcfg->platform == PLATFORM_POWERMAC ||
- systemcfg->platform == PLATFORM_MAPLE) {
- printk(KERN_INFO "Using native/NAP idle loop\n");
- idle_loop = native_idle;
- }
-#endif /* CONFIG_PPC_ISERIES */
-
- return 1;
-}
#ifdef CONFIG_DUMMY_CONSOLE
conswitchp = &dummy_con;
#endif
+
+ printk(KERN_INFO "Using native/NAP idle loop\n");
}
/*
.get_rtc_time = maple_get_rtc_time,
.calibrate_decr = generic_calibrate_decr,
.progress = maple_progress,
+ .idle_loop = native_idle,
};
.llong .compat_sys_mq_getsetattr
.llong .compat_sys_kexec_load
.llong .sys32_add_key
- .llong .sys32_request_key
+ .llong .sys32_request_key /* 270 */
.llong .compat_sys_keyctl
.llong .compat_sys_waitid
+ .llong .sys32_ioprio_set
+ .llong .sys32_ioprio_get
.balign 8
_GLOBAL(sys_call_table)
.llong .sys_request_key /* 270 */
.llong .sys_keyctl
.llong .sys_waitid
+ .llong .sys_ioprio_set
+ .llong .sys_ioprio_get
#undef DEBUG
#include <linux/config.h>
+#include <linux/cpu.h>
#include <linux/errno.h>
#include <linux/sched.h>
#include <linux/kernel.h>
extern void pSeries_system_reset_exception(struct pt_regs *regs);
extern int pSeries_machine_check_exception(struct pt_regs *regs);
+static int pseries_shared_idle(void);
+static int pseries_dedicated_idle(void);
+
static volatile void __iomem * chrp_int_ack_special;
struct mpic *pSeries_mpic;
if (cur_cpu_spec->firmware_features & FW_FEATURE_SPLPAR)
vpa_init(boot_cpuid);
+
+ /* Choose an idle loop */
+ if (cur_cpu_spec->firmware_features & FW_FEATURE_SPLPAR) {
+ if (get_paca()->lppaca.shared_proc) {
+ printk(KERN_INFO "Using shared processor idle loop\n");
+ ppc_md.idle_loop = pseries_shared_idle;
+ } else {
+ printk(KERN_INFO "Using dedicated idle loop\n");
+ ppc_md.idle_loop = pseries_dedicated_idle;
+ }
+ } else {
+ printk(KERN_INFO "Using default idle loop\n");
+ ppc_md.idle_loop = default_idle;
+ }
}
static int __init pSeries_init_panel(void)
return 1;
}
+DECLARE_PER_CPU(unsigned long, smt_snooze_delay);
+
+static inline void dedicated_idle_sleep(unsigned int cpu)
+{
+ struct paca_struct *ppaca = &paca[cpu ^ 1];
+
+ /* Only sleep if the other thread is not idle */
+ if (!(ppaca->lppaca.idle)) {
+ local_irq_disable();
+
+ /*
+ * We are about to sleep the thread and so won't be polling any
+ * more.
+ */
+ clear_thread_flag(TIF_POLLING_NRFLAG);
+
+ /*
+ * SMT dynamic mode. Cede will result in this thread going
+ * dormant, if the partner thread is still doing work. Thread
+ * wakes up if partner goes idle, an interrupt is presented, or
+ * a prod occurs. Returning from the cede enables external
+ * interrupts.
+ */
+ if (!need_resched())
+ cede_processor();
+ else
+ local_irq_enable();
+ } else {
+ /*
+ * Give the HV an opportunity at the processor, since we are
+ * not doing any work.
+ */
+ poll_pending();
+ }
+}
+
+static int pseries_dedicated_idle(void)
+{
+ long oldval;
+ struct paca_struct *lpaca = get_paca();
+ unsigned int cpu = smp_processor_id();
+ unsigned long start_snooze;
+ unsigned long *smt_snooze_delay = &__get_cpu_var(smt_snooze_delay);
+
+ while (1) {
+ /*
+ * Indicate to the HV that we are idle. Now would be
+ * a good time to find other work to dispatch.
+ */
+ lpaca->lppaca.idle = 1;
+
+ oldval = test_and_clear_thread_flag(TIF_NEED_RESCHED);
+ if (!oldval) {
+ set_thread_flag(TIF_POLLING_NRFLAG);
+
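+			/*
+			 * Deadline (in timebase ticks) after which we stop
+			 * merely polling and consider snoozing the thread.
+			 */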
+ start_snooze = __get_tb() +
+ *smt_snooze_delay * tb_ticks_per_usec;
+
+ while (!need_resched() && !cpu_is_offline(cpu)) {
+ ppc64_runlatch_off();
+
+ /*
+ * Go into low thread priority and possibly
+ * low power mode.
+ */
+ HMT_low();
+ HMT_very_low();
+
+ if (*smt_snooze_delay != 0 &&
+ __get_tb() > start_snooze) {
+ HMT_medium();
+ dedicated_idle_sleep(cpu);
+ }
+			}
+
+ HMT_medium();
+ clear_thread_flag(TIF_POLLING_NRFLAG);
+ } else {
+ set_need_resched();
+ }
+
+ lpaca->lppaca.idle = 0;
+ ppc64_runlatch_on();
+
+ schedule();
+
+ if (cpu_is_offline(cpu) && system_state == SYSTEM_RUNNING)
+ cpu_die();
+ }
+}
+
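+/*
+ * Idle loop for a shared-processor LPAR: cede the virtual processor
+ * back to the hypervisor whenever there is nothing to run, so the
+ * physical processor can be dispatched to other partitions.
+ */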
+static int pseries_shared_idle(void)
+{
+ struct paca_struct *lpaca = get_paca();
+ unsigned int cpu = smp_processor_id();
+
+ while (1) {
+ /*
+ * Indicate to the HV that we are idle. Now would be
+ * a good time to find other work to dispatch.
+ */
+ lpaca->lppaca.idle = 1;
+
+ while (!need_resched() && !cpu_is_offline(cpu)) {
+ local_irq_disable();
+ ppc64_runlatch_off();
+
+ /*
+			 * Yield the processor to the hypervisor. We return if
+			 * an external interrupt occurs (external interrupts
+			 * are driven prior to returning here) or if a prod
+			 * occurs from another processor. When returning here,
+			 * external interrupts are enabled.
+ *
+ * Check need_resched() again with interrupts disabled
+ * to avoid a race.
+ */
+ if (!need_resched())
+ cede_processor();
+ else
+ local_irq_enable();
+
+ HMT_medium();
+ }
+
+ lpaca->lppaca.idle = 0;
+ ppc64_runlatch_on();
+
+ schedule();
+
+ if (cpu_is_offline(cpu) && system_state == SYSTEM_RUNNING)
+ cpu_die();
+ }
+
+ return 0;
+}
+
struct machdep_calls __initdata pSeries_md = {
.probe = pSeries_probe,
.setup_arch = pSeries_setup_arch,
#ifdef CONFIG_DUMMY_CONSOLE
conswitchp = &dummy_con;
#endif
+
+ printk(KERN_INFO "Using native/NAP idle loop\n");
}
#ifdef CONFIG_SCSI
.calibrate_decr = pmac_calibrate_decr,
.feature_call = pmac_do_feature_call,
.progress = pmac_progress,
- .check_legacy_ioport = pmac_check_legacy_ioport
+ .check_legacy_ioport = pmac_check_legacy_ioport,
+ .idle_loop = native_idle,
};
extern unsigned long klimit;
extern void mm_init_ppc64(void);
-extern int idle_setup(void);
extern void stab_initialize(unsigned long stab);
extern void htab_initialize(void);
extern void early_init_devtree(void *flat_dt);
ppc_md.setup_arch();
- /* Select the correct idle loop for the platform. */
- idle_setup();
+ /* Use the default idle loop if the platform hasn't provided one. */
+ if (NULL == ppc_md.idle_loop) {
+ ppc_md.idle_loop = default_idle;
+ printk(KERN_INFO "Using default idle loop\n");
+ }
paging_init();
ppc64_boot_msg(0x15, "Setup Done");
#include <linux/sem.h>
#include <linux/msg.h>
#include <linux/shm.h>
-#include <linux/slab.h>
-#include <linux/uio.h>
-#include <linux/aio.h>
-#include <linux/nfs_fs.h>
-#include <linux/module.h>
-#include <linux/sunrpc/svc.h>
-#include <linux/nfsd/nfsd.h>
-#include <linux/nfsd/cache.h>
-#include <linux/nfsd/xdr.h>
-#include <linux/nfsd/syscall.h>
#include <linux/poll.h>
#include <linux/personality.h>
#include <linux/stat.h>
-#include <linux/filter.h>
-#include <linux/highmem.h>
-#include <linux/highuid.h>
#include <linux/mman.h>
-#include <linux/ipv6.h>
#include <linux/in.h>
-#include <linux/icmpv6.h>
#include <linux/syscalls.h>
#include <linux/unistd.h>
#include <linux/sysctl.h>
#include <linux/binfmts.h>
-#include <linux/dnotify.h>
#include <linux/security.h>
#include <linux/compat.h>
#include <linux/ptrace.h>
-#include <linux/aio_abi.h>
#include <linux/elf.h>
-#include <net/scm.h>
-#include <net/sock.h>
-
#include <asm/ptrace.h>
#include <asm/types.h>
#include <asm/ipc.h>
#include <asm/uaccess.h>
#include <asm/unistd.h>
#include <asm/semaphore.h>
-#include <asm/ppcdebug.h>
#include <asm/time.h>
#include <asm/mmu_context.h>
#include <asm/systemcfg.h>
return ret;
}
-
-/* These are here just in case some old sparc32 binary calls it. */
asmlinkage long sys32_pause(void)
{
current->state = TASK_INTERRUPTIBLE;
return -ERESTARTNOHAND;
}
-
-
static inline long get_ts32(struct timespec *o, struct compat_timeval __user *i)
{
long usec;
}
-/* Note: it is necessary to treat which and who as unsigned ints,
- * with the corresponding cast to a signed int to insure that the
- * proper conversion (sign extension) between the register representation of a signed int (msr in 32-bit mode)
- * and the register representation of a signed int (msr in 64-bit mode) is performed.
- */
-asmlinkage long sys32_getpriority(u32 which, u32 who)
-{
- return sys_getpriority((int)which, (int)who);
-}
-
/* Note: it is necessary to treat pid as an unsigned int,
* with the corresponding cast to a signed int to insure that the
return sys_setpgid((int)pid, (int)pgid);
}
+long sys32_getpriority(u32 which, u32 who)
+{
+ /* sign extend which and who */
+ return sys_getpriority((int)which, (int)who);
+}
long sys32_setpriority(u32 which, u32 who, u32 niceval)
{
return sys_setpriority((int)which, (int)who, (int)niceval);
}
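+/*
+ * As with the wrappers above, the u32 -> int casts sign-extend the
+ * 32-bit arguments. Illustrative example: a 32-bit caller passing
+ * who == -1 arrives here as 0xffffffff; the cast restores -1 before
+ * the native syscall sees it.
+ */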
+long sys32_ioprio_get(u32 which, u32 who)
+{
+ /* sign extend which and who */
+ return sys_ioprio_get((int)which, (int)who);
+}
+
+long sys32_ioprio_set(u32 which, u32 who, u32 ioprio)
+{
+ /* sign extend which, who and ioprio */
+ return sys_ioprio_set((int)which, (int)who, (int)ioprio);
+}
+
/* Note: it is necessary to treat newmask as an unsigned int,
* with the corresponding cast to a signed int to insure that the
* proper conversion (sign extension) between the register representation of a signed int (msr in 32-bit mode)
(u64)len_high << 32 | len_low, advice);
}
-extern asmlinkage long sys_timer_create(clockid_t, sigevent_t __user *, timer_t __user *);
-
long ppc32_timer_create(clockid_t clock,
struct compat_sigevent __user *ev32,
timer_t __user *timer_id)
unsigned long hid0;
#ifdef CONFIG_PPC_PSERIES
unsigned long set, reset;
- int ret;
#endif /* CONFIG_PPC_PSERIES */
/* Only need to enable them once */
case PLATFORM_PSERIES_LPAR:
set = 1UL << 63;
reset = 0;
- ret = plpar_hcall_norets(H_PERFMON, set, reset);
- if (ret)
- printk(KERN_ERR "H_PERFMON call on cpu %u "
- "returned %d\n",
- smp_processor_id(), ret);
+ plpar_hcall_norets(H_PERFMON, set, reset);
break;
#endif /* CONFIG_PPC_PSERIES */
/* instruct hypervisor to maintain PMCs */
if (cur_cpu_spec->firmware_features & FW_FEATURE_SPLPAR)
get_paca()->lppaca.pmcregs_in_use = 1;
-
- /*
- * On SMT machines we have to set the run latch in the ctrl register
- * in order to make PMC6 spin.
- */
- if (cpu_has_feature(CPU_FTR_SMT))
- ppc64_runlatch_on();
#endif /* CONFIG_PPC_PSERIES */
}
.gcc_except_table : { *(.gcc_except_table) }
.fixup : { *(.fixup) }
- .got ALIGN(4) : { *(.got.plt) *(.got) }
-
.dynamic : { *(.dynamic) } :text :dynamic
+ .got : { *(.got) }
+ .plt : { *(.plt) }
_end = .;
__end = .;
endchoice
+source kernel/Kconfig.hz
+
source "init/Kconfig"
config SYSVIPC_COMPAT
#elif PAGE_SHIFT == 19
#define SZ_BITS _PAGE_SZ512K
#elif PAGE_SHIFT == 22
-#define SZ_BITS _PAGE_SZ4M
+#define SZ_BITS _PAGE_SZ4MB
#endif
#define VALID_SZ_BITS (_PAGE_VALID | SZ_BITS)
config HPPFS
tristate "HoneyPot ProcFS (EXPERIMENTAL)"
- depends on BROKEN
help
hppfs (HoneyPot ProcFS) is a filesystem which allows UML /proc
entries to be overridden, removed, or fabricated from the host.
You only need this if you are setting up a UML honeypot. Otherwise,
it is safe to say 'N' here.
- If you are actively using it, please ask for it to be fixed. In this
- moment, it does not work on 2.6 (it works somehow on 2.4).
+	  If you are actively using it, please report any problems, since it is
+	  being actively fixed. At the moment it is experimental on 2.6 (it
+	  works on 2.4).
config MCONSOLE
bool "Management console"
memory. All the memory that can't be mapped directly will be treated
as high memory.
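+# Fixed addresses of the stub code and data pages that UML maps into
+# each process address space.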
+config STUB_CODE
+ hex
+ default 0xbfffe000
+
+config STUB_DATA
+ hex
+ default 0xbffff000
+
+config STUB_START
+ hex
+ default STUB_CODE
+
config ARCH_HAS_SC_SIGNALS
bool
default y
bool
default y
+config STUB_CODE
+ hex
+ default 0x7fbfffe000
+
+config STUB_DATA
+ hex
+ default 0x7fbffff000
+
+config STUB_START
+ hex
+ default STUB_CODE
+
config ARCH_HAS_SC_SIGNALS
bool
default n
endif
endif
-CFLAGS += -U__$(SUBARCH)__ -U$(SUBARCH)
+CFLAGS += -U__$(SUBARCH)__ -U$(SUBARCH) $(STUB_CFLAGS)
ARCH_USER_CFLAGS :=
ifneq ($(CONFIG_GPROF),y)
SUBARCH_LIBS := arch/um/sys-x86_64/
START := 0x60000000
-CFLAGS += -U__$(SUBARCH)__ -fno-builtin
+CFLAGS += -U__$(SUBARCH)__ -fno-builtin $(STUB_CFLAGS)
ARCH_USER_CFLAGS := -D__x86_64__
ELF_ARCH := i386:x86-64
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.12-rc3-skas3-v9-pre2
-# Sun Apr 24 19:46:10 2005
+# Linux kernel version: 2.6.12-rc6-mm1
+# Tue Jun 14 18:22:21 2005
#
CONFIG_GENERIC_HARDIRQS=y
CONFIG_UML=y
#
# UML-specific options
#
-CONFIG_MODE_TT=y
+# CONFIG_MODE_TT is not set
+# CONFIG_STATIC_LINK is not set
CONFIG_MODE_SKAS=y
CONFIG_UML_X86=y
# CONFIG_64BIT is not set
CONFIG_TOP_ADDR=0xc0000000
# CONFIG_3_LEVEL_PGTABLES is not set
+CONFIG_STUB_CODE=0xbfffe000
+CONFIG_STUB_DATA=0xbffff000
+CONFIG_STUB_START=0xbfffe000
CONFIG_ARCH_HAS_SC_SIGNALS=y
CONFIG_ARCH_REUSE_HOST_VSYSCALL_AREA=y
-CONFIG_LD_SCRIPT_STATIC=y
+CONFIG_SELECT_MEMORY_MODEL=y
+CONFIG_FLATMEM_MANUAL=y
+# CONFIG_DISCONTIGMEM_MANUAL is not set
+# CONFIG_SPARSEMEM_MANUAL is not set
+CONFIG_FLATMEM=y
+CONFIG_FLAT_NODE_MEM_MAP=y
+CONFIG_LD_SCRIPT_DYN=y
CONFIG_NET=y
CONFIG_BINFMT_ELF=y
CONFIG_BINFMT_MISC=m
-CONFIG_HOSTFS=y
+# CONFIG_HOSTFS is not set
CONFIG_MCONSOLE=y
# CONFIG_MAGIC_SYSRQ is not set
# CONFIG_HOST_2G_2G is not set
-# CONFIG_SMP is not set
CONFIG_NEST_LEVEL=0
CONFIG_KERNEL_HALF_GIGS=1
# CONFIG_HIGHMEM is not set
CONFIG_KALLSYMS=y
# CONFIG_KALLSYMS_ALL is not set
CONFIG_KALLSYMS_EXTRA_PASS=y
+CONFIG_PRINTK=y
+CONFIG_BUG=y
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
CONFIG_EPOLL=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_MODULE_FORCE_UNLOAD is not set
CONFIG_OBSOLETE_MODPARM=y
+# CONFIG_MODVERSIONS is not set
# CONFIG_MODULE_SRCVERSION_ALL is not set
CONFIG_KMOD=y
CONFIG_SOUND=m
CONFIG_HOSTAUDIO=m
CONFIG_UML_RANDOM=y
+# CONFIG_MMAPPER is not set
#
# Block devices
# CONFIG_INET_TUNNEL is not set
CONFIG_IP_TCPDIAG=y
# CONFIG_IP_TCPDIAG_IPV6 is not set
+
+#
+# TCP congestion control
+#
+CONFIG_TCP_CONG_BIC=y
+CONFIG_TCP_CONG_WESTWOOD=y
+CONFIG_TCP_CONG_HTCP=y
+# CONFIG_TCP_CONG_HSTCP is not set
+# CONFIG_TCP_CONG_HYBLA is not set
+# CONFIG_TCP_CONG_VEGAS is not set
+# CONFIG_TCP_CONG_SCALABLE is not set
# CONFIG_IPV6 is not set
# CONFIG_NETFILTER is not set
# Network testing
#
# CONFIG_NET_PKTGEN is not set
+# CONFIG_KGDBOE is not set
# CONFIG_NETPOLL is not set
+# CONFIG_NETPOLL_RX is not set
+# CONFIG_NETPOLL_TRAP is not set
# CONFIG_NET_POLL_CONTROLLER is not set
# CONFIG_HAMRADIO is not set
# CONFIG_IRDA is not set
# CONFIG_BT is not set
+# CONFIG_IEEE80211 is not set
CONFIG_DUMMY=m
# CONFIG_BONDING is not set
# CONFIG_EQUALIZER is not set
# CONFIG_PPP_SYNC_TTY is not set
# CONFIG_PPP_DEFLATE is not set
# CONFIG_PPP_BSDCOMP is not set
+# CONFIG_PPP_MPPE is not set
# CONFIG_PPPOE is not set
CONFIG_SLIP=m
# CONFIG_SLIP_COMPRESSED is not set
#
CONFIG_EXT2_FS=y
# CONFIG_EXT2_FS_XATTR is not set
+# CONFIG_EXT2_FS_XIP is not set
CONFIG_EXT3_FS=y
# CONFIG_EXT3_FS_XATTR is not set
CONFIG_JBD=y
# CONFIG_JBD_DEBUG is not set
+# CONFIG_REISER4_FS is not set
CONFIG_REISERFS_FS=y
# CONFIG_REISERFS_CHECK is not set
# CONFIG_REISERFS_PROC_INFO is not set
# CONFIG_XFS_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_ROMFS_FS is not set
+CONFIG_INOTIFY=y
CONFIG_QUOTA=y
# CONFIG_QFMT_V1 is not set
# CONFIG_QFMT_V2 is not set